Compare commits

...

77 Commits

Author SHA1 Message Date
Valeri
0b3d12be40 Fix check is thread current in GetThreadContext
A misplaced break caused it to check only the first core.
2021-08-19 16:46:30 +03:00
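A minimal sketch of the control-flow bug described above (illustrative names, not yuzu's actual GetThreadContext code): an unconditional break after the if exits the loop on the first iteration, so only core 0 is ever examined; the fix is to break only once a match is found.

```cpp
#include <array>
#include <cstddef>

// Illustrative only: report whether `thread_id` is the current thread on any core.
bool IsCurrentOnAnyCore(const std::array<int, 4>& current_thread_per_core, int thread_id) {
    bool current = false;
    for (std::size_t core = 0; core < current_thread_per_core.size(); ++core) {
        if (current_thread_per_core[core] == thread_id) {
            current = true;
            break; // correct placement: stop only after a match
        }
        // The bug was an unconditional break here, which skipped all cores after the first.
    }
    return current;
}
```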
bunnei
aa40084c24 Merge pull request #6832 from bunnei/scheduler-improvements
kernel: Various improvements to scheduler
2021-08-18 15:42:46 -07:00
Fernando S
521e6ac174 Merge pull request #6863 from spholz/fix-lan-play
Fix LAN Play
2021-08-16 17:10:00 +02:00
Sönke Holz
356dbf4d1d network_interface: correct formatting 2021-08-16 12:18:19 +02:00
spholz
dc47b5a5bf network_interface: fix mingw-w64 build 2021-08-16 12:06:35 +02:00
Sönke Holz
70419f7a17 network: retrieve subnet mask and gateway info 2021-08-16 10:32:25 +02:00
bunnei
87d63b858a Merge pull request #6861 from yzct12345/const-mempy-is-all-the-speed
decoders: Optimize memcpy for the other functions
2021-08-15 02:38:12 -07:00
bunnei
bdd617da03 Merge pull request #6868 from yzct12345/safe-threads-no-deadlocks
threadsafe_queue: Fix deadlock
2021-08-14 02:28:59 -07:00
bunnei
aef0ca6f0d core: hle: kernel: Disable dispatch count tracking on single core.
- This would have limited value, and would be a mess to handle properly.
2021-08-14 02:14:19 -07:00
yzct12345
0ba521e634 threadsafe_queue: Fix deadlock
This fixes a lost wakeup in SPSCQueue. If the reader is at just the right point, the writer's notification is lost, which becomes a problem if the writer then does something that waits on the reader.

This was discovered to affect my upcoming stacktrace PR. I don't expect any noticeable performance decrease, because an uncontended mutex is smart enough to skip the syscall. This PR might also resolve some rare deadlocks, but I don't know of any concrete examples.
2021-08-13 19:22:51 +00:00
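A minimal sketch of the fix described in the commit message (element storage is elided and the names are illustrative, not yuzu's SPSCQueue): the writer always acquires and releases the reader's condition-variable mutex before notifying, so a reader that has already tested the predicate but not yet parked inside cv.wait() cannot miss the wakeup.

```cpp
#include <atomic>
#include <condition_variable>
#include <cstddef>
#include <mutex>

std::atomic<std::size_t> size{0};
std::mutex cv_mutex;
std::condition_variable cv;

// Writer: publish an element (elided), bump the size, then fence and notify.
void NotifyAfterPush() {
    size.fetch_add(1, std::memory_order_release);
    {
        // Taking the mutex unconditionally, even just to drop it again, is the fix:
        // it serializes with a reader that has checked the predicate but not yet
        // blocked, so the notify below cannot be lost.
        std::lock_guard lock{cv_mutex};
    }
    cv.notify_one();
}

// Reader: block until at least one element has been published.
void WaitForData() {
    std::unique_lock lock{cv_mutex};
    cv.wait(lock, [] { return size.load(std::memory_order_acquire) > 0; });
}
```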
Sönke Holz
068c66672d configuration: fix mingw-w64 build 2021-08-13 12:39:14 +02:00
spholz
deb65a5717 network: don't use reinterpret_cast in GetAvailableNetworkInterfaces 2021-08-13 11:58:34 +02:00
Sönke Holz
e660334a21 network: fix mingw-w64 build
The mingw-w64 header "combaseapi.h" defines "interface" as "struct".
2021-08-13 11:23:50 +02:00
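A hypothetical illustration of that macro collision (not the yuzu fix itself): any identifier literally named "interface" stops compiling once the macro is in scope, and the usual workarounds are renaming the identifier or undefining the macro after the offending include.

```cpp
// What combaseapi.h effectively does on mingw-w64:
#define interface struct

// int interface = 0;   // would expand to "int struct = 0;" and fail to compile
int net_interface = 0;  // renaming the identifier sidesteps the macro

#undef interface        // the alternative workaround: undefine it after the include
```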
Sönke Holz
b18e1d031f network: don't use assert to check if no network interfaces are returned 2021-08-13 11:21:34 +02:00
bunnei
71d8d84b59 Merge pull request #6862 from german77/badsdl
input_common: Disable sdl raw input mode
2021-08-12 21:14:26 -07:00
Sönke Holz
0476751ee2 configuration: move network_interface include to source file 2021-08-13 02:48:39 +02:00
Sönke Holz
a0c4c1a23a network: use Common::BitCast instead of std::bit_cast 2021-08-13 01:28:14 +02:00
Sönke Holz
8513e59431 network: narrow down scope of "result" in win32 code for
GetAvailableNetworkInterfaces
2021-08-13 00:37:03 +02:00
Sönke Holz
04ec426201 configuration: use tr instead of QStringLiteral for "None" item in
network interface combobox
2021-08-13 00:34:04 +02:00
Sönke Holz
771de32af1 network: use explicit bool conversions in GetAvailableNetworkInterfaces 2021-08-13 00:31:33 +02:00
Sönke Holz
765e97c347 network: initialize ip_addr in GetHostIPv4Address() 2021-08-13 00:28:44 +02:00
Sönke Holz
acca8aca8c nifm: use operator*() instead of .value() to get value of std::optional 2021-08-13 00:24:33 +02:00
Sönke Holz
970d81abfc nifm: treat a missing host IP address as a non-critical error 2021-08-13 00:21:54 +02:00
spholz
78a8249593 Merge branch 'yuzu-emu:master' into fix-lan-play 2021-08-12 22:27:17 +02:00
Sönke Holz
21743daf38 network: correct formatting in network.cpp and network_interface.cpp 2021-08-12 22:15:48 +02:00
spholz
1e98e73828 configuration: add option to select network interface
This commit renames the "Services" tab to "Network" and adds a combobox that lets the user select the network interface yuzu should use. The new setting is used to get the local IP address in Network::GetHostIPv4Address, which prevents yuzu from picking the wrong network interface and therefore the wrong IP address. The return type of Network::GetHostIPv4Address has also been changed (it now returns a std::optional).
2021-08-12 21:32:53 +02:00
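A hedged sketch of the flow the commit message describes (the struct and function shapes here are assumptions, not yuzu's exact signatures): look up the interface whose name matches the new "network_interface" setting and return its address as an optional, matching the nifm changes elsewhere in this diff that treat a missing address as non-critical.

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

struct NetworkInterfaceInfo {
    std::string name;
    std::uint32_t ip_address; // IPv4, illustrative representation
};

// Return the address of the interface selected in the settings, if any.
std::optional<std::uint32_t> GetHostIPv4Address(
    const std::vector<NetworkInterfaceInfo>& interfaces, const std::string& selected_name) {
    for (const auto& iface : interfaces) {
        if (iface.name == selected_name) {
            return iface.ip_address;
        }
    }
    return std::nullopt; // callers fall back to 0.0.0.0 or skip the feature
}
```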
bunnei
0509fe3377 Merge pull request #6838 from ameerj/sws-align
vic: Specify sws_scale height stride.
2021-08-12 11:28:33 -07:00
german77
2a2f0bfe9e input_common: Disable sdl raw input mode 2021-08-12 13:17:07 -05:00
yzct12345
430255caf8 decoders: Templates allow memcpy optimizations 2021-08-12 04:45:25 +00:00
Mai M
043904bae1 Merge pull request #6855 from german77/sdl16
externals: Update sdl2 to 2.0.16
2021-08-11 23:14:53 -04:00
Mai M
756d76d971 Merge pull request #6860 from lat9nq/ranged-settings-2
settings: Fix MSVC issues
2021-08-11 17:53:09 -04:00
lat9nq
5be2d6fd28 settings: Fix MSVC issues
According to https://stackoverflow.com/questions/469508, we run into an
MSVC bug (present since VS 2005) when using diamond inheritance for
RangedSetting.

This explicitly implements those functions in RangedSetting. GetValue is
implemented as simply calling the inherited version. The explicit
conversion operator is reimplemented. I opted for this over silencing the
warning with a pragma, since it makes the inherited behavior explicit and I
now have less faith in MSVC to pick the right one.

In addition, we mark destructors as virtual to silence what I believe is
a fair MSVC compilation error.
2021-08-11 17:12:14 -04:00
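A stripped-down sketch of the inheritance diamond involved (shortened, hypothetical class names standing in for BasicSetting, BasicRangedSetting, Setting, and RangedSetting): both intermediate classes virtually inherit the base, and the most-derived class restates the function and forwards it explicitly so MSVC cannot pick the wrong inherited implementation.

```cpp
template <typename T>
struct Base {                        // stands in for BasicSetting<T>
    virtual ~Base() = default;
    virtual const T& GetValue() const { return global; }
    T global{};
};

template <typename T>
struct Ranged : virtual Base<T> {};  // stands in for BasicRangedSetting<T>

template <typename T>
struct PerGame : virtual Base<T> {   // stands in for Setting<T>
    const T& GetValue() const override { return custom; }
    T custom{};
};

template <typename T>
struct RangedPerGame final : Ranged<T>, PerGame<T> {  // the RangedSetting diamond
    // Explicit forwarding override, mirroring the workaround in the commit:
    const T& GetValue() const override { return PerGame<T>::GetValue(); }
};
```

The RangedSetting code in the diff further down does the same for both GetValue overloads and the explicit conversion operator.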
bunnei
e6b80c2cf8 Merge pull request #6776 from lat9nq/ranged-settings
settings: Implement settings ranges
2021-08-10 21:19:01 -07:00
german77
fe2e710003 externals: Update sdl2 to 2.0.16 2021-08-10 19:16:30 -05:00
Fernando S
6a082df427 Merge pull request #6820 from yzct12345/split-cache
texture_cache: Split out template definitions
2021-08-10 12:23:05 +02:00
Ameer J
9fbe188c01 Merge pull request #6837 from german77/no-pause-screenshot
main: Avoid stopping emulation when taking a screenshot
2021-08-09 23:49:48 -04:00
ameerj
a779cede7c vic: Specify sws_scale height stride.
Silences a sws_scale runtime warning about unaligned strides.
2021-08-09 23:24:16 -04:00
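sws_scale warns at runtime when the strides it is handed (or has to derive) are not suitably aligned; the commit addresses this by specifying the stride explicitly. A hedged sketch of a sws_scale call with explicit per-plane stride arrays follows; the plane layout and pixel formats are illustrative assumptions, not the exact VIC code.

```cpp
#include <cstdint>

extern "C" {
#include <libswscale/swscale.h>
}

// Illustrative: convert an NV12 frame to BGRA with explicit stride arrays.
void ConvertFrame(SwsContext* ctx, const uint8_t* const src_planes[],
                  uint8_t* const dst_planes[], int width, int height) {
    const int src_strides[4] = {width, width, 0, 0}; // Y plane, interleaved UV plane
    const int dst_strides[4] = {width * 4, 0, 0, 0}; // 4 bytes per BGRA pixel
    sws_scale(ctx, src_planes, src_strides, 0, height, dst_planes, dst_strides);
}
```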
bunnei
7df790f1ae Merge pull request #6823 from yzct12345/memory-cleanup
memory: Clean up code
2021-08-09 17:09:56 -07:00
bunnei
3e3bd425c1 Merge pull request #6839 from ameerj/frame-cap-positon
configure_general: Swap positions of speed limit and frame limit options
2021-08-09 12:32:07 -07:00
yzct12345
c4eafcc861 texture_cache: Address ameerj's review 2021-08-08 11:02:51 +00:00
ameerj
8e0cc3e59a configure_general: Swap positions of speed limit and frame limit options 2021-08-08 01:00:40 -04:00
german77
acce512ae8 main: Avoid stopping emulation when taking a screenshot 2021-08-07 15:45:29 -05:00
bunnei
5060a97210 core: hle: kernel: k_thread: Mark KScopedDisableDispatch as nodiscard. 2021-08-07 12:33:31 -07:00
bunnei
9e3d1d865c core: cpu_manager: Use invalid core_id on init and simplify shutdown. 2021-08-07 12:33:07 -07:00
bunnei
99bc49e76e core: hle: service: buffer_queue: Improve management of KEvent. 2021-08-07 12:18:48 -07:00
bunnei
48a3496b93 core: hle: kernel: k_auto_object: Add GetName method.
- Useful purely for debugging.
2021-08-07 12:18:48 -07:00
bunnei
36cf96857e core: hle: service: nvflinger/vi: Improve management of KEvent. 2021-08-07 12:18:47 -07:00
bunnei
5051d3c415 core: hle: kernel: DisableDispatch on suspend threads. 2021-08-07 12:18:47 -07:00
bunnei
1798c3b6b0 core: hle: kernel: k_scheduler: Improve DisableScheduling and EnableScheduling. 2021-08-07 12:18:47 -07:00
bunnei
cbe4e32d38 core: cpu_manager: Use KScopedDisableDispatch. 2021-08-07 12:18:47 -07:00
bunnei
2dfb07388a core: hle: kernel: Use CurrentPhysicalCoreIndex as appropriate. 2021-08-07 12:18:47 -07:00
bunnei
d1c502720d core: hle: kernel: k_scheduler: Remove unnecessary MakeCurrentProcess. 2021-08-07 12:18:47 -07:00
bunnei
77ad64b97d core: hle: kernel: k_scheduler: Improve ScheduleImpl. 2021-08-07 12:18:47 -07:00
bunnei
bedcf19710 core: hle: kernel: k_scheduler: Improve Unload. 2021-08-07 12:18:47 -07:00
bunnei
7569d6774d core: hle: kernel: k_process: DisableDispatch on main thread. 2021-08-07 12:18:47 -07:00
bunnei
f2b0d28983 core: hle: kernel: k_handle_table: Use KScopedDisableDispatch as necessary. 2021-08-07 12:18:47 -07:00
bunnei
01af2f4162 core: hle: kernel: k_thread: Add KScopedDisableDispatch. 2021-08-07 12:18:47 -07:00
bunnei
2b9560428b core: hle: kernel: Ensure idle threads are closed before destroying scheduler. 2021-08-07 12:18:47 -07:00
bunnei
68eee94875 core: hle: kernel: Reflect non-emulated threads as core 3. 2021-08-07 12:18:47 -07:00
bunnei
5ea0d3629a core: cpu_manager: Use jthread. 2021-08-07 12:18:47 -07:00
yzct12345
5f97f74a9a memory: Address lioncash's review 2021-08-07 03:03:21 +00:00
yzct12345
70cc4c0f46 memory: Dedup Read and Write and fix logging bugs 2021-08-07 01:32:06 +00:00
yzct12345
e80323b8b0 texture_cache: Address ameerj's review 2021-08-07 01:27:47 +00:00
spholz
33ebe471e8 Merge branch 'yuzu-emu:master' into fix-lan-play 2021-08-07 02:55:19 +02:00
Sönke Holz
ddeb8d854e network: GetAndLogLastError: ignore Errno::AGAIN
When non-blocking sockets are used, they generate many Errno::AGAIN errors whenever no data has been received. These errors shouldn't be logged.
2021-08-07 02:54:25 +02:00
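A hedged POSIX-flavoured sketch of the rule in the commit message (not yuzu's GetAndLogLastError itself): a recv() that fails with EAGAIN/EWOULDBLOCK on a non-blocking socket is expected and should not be reported.

```cpp
#include <cerrno>
#include <cstdio>
#include <sys/socket.h>
#include <sys/types.h>

// Read from a non-blocking socket, logging only genuine failures.
ssize_t ReadNonBlocking(int fd, void* buffer, size_t size) {
    const ssize_t ret = recv(fd, buffer, size, 0);
    if (ret < 0 && errno != EAGAIN && errno != EWOULDBLOCK) {
        std::perror("recv"); // a real error worth logging
    }
    // EAGAIN/EWOULDBLOCK just means no data has arrived yet; the caller retries later.
    return ret;
}
```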
Sönke Holz
dd5c41b5a6 network: GetCurrentIpConfigInfo: return host IP address
Service::NIFM::IGeneralService::GetCurrentIpConfigInfo currently hardcodes 192.168.1.100 as the IP address, which prevents LAN play from working correctly.
2021-08-07 02:17:02 +02:00
Sönke Holz
652e5e3df0 network: fix fcntl cmds
F_SETFL/F_GETFL are the correct fcntl commands for putting a socket into non-blocking mode.
2021-08-06 21:08:31 +02:00
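A small sketch of the corrected usage (standard POSIX fcntl, shown as an illustration rather than the exact yuzu change): read the current file status flags with F_GETFL, then write them back with O_NONBLOCK added via F_SETFL.

```cpp
#include <fcntl.h>

// Returns 0 on success, -1 on failure (errno is set by fcntl).
int SetSocketNonBlocking(int fd) {
    const int flags = fcntl(fd, F_GETFL, 0);  // F_GETFL: read the file status flags
    if (flags < 0) {
        return -1;
    }
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK) < 0 ? -1 : 0;  // F_SETFL: write them back
}
```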
yzct12345
e611f522c2 memory: Clean up CopyBlock too 2021-08-05 21:09:08 +00:00
yzct12345
02e98f6c93 texture_cache: Don't change copyright year 2021-08-05 20:52:12 +00:00
yzct12345
5566f3dbc0 texture_cache: Address ameerj's review 2021-08-05 20:46:24 +00:00
yzct12345
4edfa6ad8f memory: Address lioncash's review 2021-08-05 20:29:43 +00:00
yzct12345
6df9611059 memory: Clean up code 2021-08-05 20:11:14 +00:00
yzct12345
f9563c8f24 texture_cache: Split templates out 2021-08-05 13:52:30 +00:00
lat9nq
3862511a9a settings: Use std::clamp where possible
Addresses PR review

Co-authored-by: PixelyIon <pixelyion@protonmail.com>
2021-07-31 17:20:12 -04:00
lat9nq
e9cf08c241 settings: Remove unnecessary std::move usages
Addresses review feedback.

Co-authored-by: Mai M. <mathew1800@gmail.com>
2021-07-30 18:44:50 -04:00
lat9nq
7737bdfd1a settings: Fix function virtualization
Fixes a theoretical scenario where a Setting uses BasicSetting's GetValue
function. In practice this probably only happens on yuzu-cmd, where there
is no need for a Setting's additional features, but it needs fixing
regardless.
2021-07-30 13:33:35 -04:00
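A minimal sketch of the failure mode being closed here (hypothetical names; the real classes are BasicSetting and Setting in the diff below): without virtual dispatch, code holding a reference to the base class calls the base GetValue and always sees the global value, even when a per-game custom value is active.

```cpp
#include <iostream>

template <typename T>
struct BaseSetting {
    virtual ~BaseSetting() = default;
    virtual const T& GetValue() const { return global; } // virtual: derived override wins
    T global{};
};

template <typename T>
struct PerGameSetting : BaseSetting<T> {
    const T& GetValue() const override { return use_global ? this->global : custom; }
    bool use_global{false};
    T custom{};
};

int main() {
    PerGameSetting<int> setting;
    setting.global = 1;
    setting.custom = 2;
    const BaseSetting<int>& base = setting;
    // Prints 2. If GetValue were not virtual, this would print 1 (the global value).
    std::cout << base.GetValue() << '\n';
}
```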
lat9nq
a1f19b61f8 settings: Implement setting ranges
Clamps the setting's values against the specified minimum and maximum
values.
2021-07-30 13:33:21 -04:00
61 changed files with 1564 additions and 1328 deletions

View File

@@ -376,7 +376,7 @@ if (ENABLE_SDL2)
if (YUZU_USE_BUNDLED_SDL2)
# Detect toolchain and platform
if ((MSVC_VERSION GREATER_EQUAL 1910 AND MSVC_VERSION LESS 1930) AND ARCHITECTURE_x86_64)
set(SDL2_VER "SDL2-2.0.15-prerelease")
set(SDL2_VER "SDL2-2.0.16")
else()
message(FATAL_ERROR "No bundled SDL2 binaries for your toolchain. Disable YUZU_USE_BUNDLED_SDL2 and provide your own.")
endif()
@@ -396,7 +396,7 @@ if (ENABLE_SDL2)
elseif (YUZU_USE_EXTERNAL_SDL2)
message(STATUS "Using SDL2 from externals.")
else()
find_package(SDL2 2.0.15 REQUIRED)
find_package(SDL2 2.0.16 REQUIRED)
# Some installations don't set SDL2_LIBRARIES
if("${SDL2_LIBRARIES}" STREQUAL "")
@@ -701,7 +701,7 @@ if (APPLE)
elseif (WIN32)
# WSAPoll and SHGetKnownFolderPath (AppData/Roaming) didn't exist before WinNT 6.x (Vista)
add_definitions(-D_WIN32_WINNT=0x0600 -DWINVER=0x0600)
set(PLATFORM_LIBRARIES winmm ws2_32)
set(PLATFORM_LIBRARIES winmm ws2_32 iphlpapi)
if (MINGW)
# PSAPI is the Process Status API
set(PLATFORM_LIBRARIES ${PLATFORM_LIBRARIES} psapi imm32 version)

2
externals/SDL vendored

View File

@@ -4,6 +4,7 @@
#pragma once
#include <algorithm>
#include <array>
#include <atomic>
#include <chrono>
@@ -74,14 +75,14 @@ public:
*/
explicit BasicSetting(const Type& default_val, const std::string& name)
: default_value{default_val}, global{default_val}, label{name} {}
~BasicSetting() = default;
virtual ~BasicSetting() = default;
/**
* Returns a reference to the setting's value.
*
* @returns A reference to the setting
*/
[[nodiscard]] const Type& GetValue() const {
[[nodiscard]] virtual const Type& GetValue() const {
return global;
}
@@ -90,7 +91,7 @@ public:
*
* @param value The desired value
*/
void SetValue(const Type& value) {
virtual void SetValue(const Type& value) {
Type temp{value};
std::swap(global, temp);
}
@@ -120,7 +121,7 @@ public:
*
* @returns A reference to the setting
*/
const Type& operator=(const Type& value) {
virtual const Type& operator=(const Type& value) {
Type temp{value};
std::swap(global, temp);
return global;
@@ -131,7 +132,7 @@ public:
*
* @returns A reference to the setting
*/
explicit operator const Type&() const {
explicit virtual operator const Type&() const {
return global;
}
@@ -141,6 +142,51 @@ protected:
const std::string label{}; ///< The setting's label
};
/**
* BasicRangedSetting class is intended for use with quantifiable settings that need a more
* restrictive range than implicitly defined by its type. Implements a minimum and maximum that is
* simply used to sanitize SetValue and the assignment overload.
*/
template <typename Type>
class BasicRangedSetting : virtual public BasicSetting<Type> {
public:
/**
* Sets a default value, minimum value, maximum value, and label.
*
* @param default_val Intial value of the setting, and default value of the setting
* @param min_val Sets the minimum allowed value of the setting
* @param max_val Sets the maximum allowed value of the setting
* @param name Label for the setting
*/
explicit BasicRangedSetting(const Type& default_val, const Type& min_val, const Type& max_val,
const std::string& name)
: BasicSetting<Type>{default_val, name}, minimum{min_val}, maximum{max_val} {}
virtual ~BasicRangedSetting() = default;
/**
* Like BasicSetting's SetValue, except value is clamped to the range of the setting.
*
* @param value The desired value
*/
void SetValue(const Type& value) override {
this->global = std::clamp(value, minimum, maximum);
}
/**
* Like BasicSetting's assignment overload, except value is clamped to the range of the setting.
*
* @param value The desired value
* @returns A reference to the setting's value
*/
const Type& operator=(const Type& value) override {
this->global = std::clamp(value, minimum, maximum);
return this->global;
}
const Type minimum; ///< Minimum allowed value of the setting
const Type maximum; ///< Maximum allowed value of the setting
};
/**
* The Setting class is a slightly more complex version of the BasicSetting class. This adds a
* custom setting to switch to when a guest application specifically requires it. The effect is that
@@ -152,7 +198,7 @@ protected:
* Like the BasicSetting, this requires setting a default value and label to use.
*/
template <typename Type>
class Setting final : public BasicSetting<Type> {
class Setting : virtual public BasicSetting<Type> {
public:
/**
* Sets a default value, label, and setting value.
@@ -162,7 +208,7 @@ public:
*/
explicit Setting(const Type& default_val, const std::string& name)
: BasicSetting<Type>(default_val, name) {}
~Setting() = default;
virtual ~Setting() = default;
/**
* Tells this setting to represent either the global or custom setting when other member
@@ -191,7 +237,13 @@ public:
*
* @returns The required value of the setting
*/
[[nodiscard]] const Type& GetValue(bool need_global = false) const {
[[nodiscard]] virtual const Type& GetValue() const override {
if (use_global) {
return this->global;
}
return custom;
}
[[nodiscard]] virtual const Type& GetValue(bool need_global) const {
if (use_global || need_global) {
return this->global;
}
@@ -203,7 +255,7 @@ public:
*
* @param value The new value
*/
void SetValue(const Type& value) {
void SetValue(const Type& value) override {
Type temp{value};
if (use_global) {
std::swap(this->global, temp);
@@ -219,7 +271,7 @@ public:
*
* @returns A reference to the current setting value
*/
const Type& operator=(const Type& value) {
const Type& operator=(const Type& value) override {
Type temp{value};
if (use_global) {
std::swap(this->global, temp);
@@ -234,18 +286,87 @@ public:
*
* @returns A reference to the current setting value
*/
explicit operator const Type&() const {
virtual explicit operator const Type&() const override {
if (use_global) {
return this->global;
}
return custom;
}
private:
protected:
bool use_global{true}; ///< The setting's global state
Type custom{}; ///< The custom value of the setting
};
/**
* RangedSetting is a Setting that implements a maximum and minimum value for its setting. Intended
* for use with quantifiable settings.
*/
template <typename Type>
class RangedSetting final : public BasicRangedSetting<Type>, public Setting<Type> {
public:
/**
* Sets a default value, minimum value, maximum value, and label.
*
* @param default_val Intial value of the setting, and default value of the setting
* @param min_val Sets the minimum allowed value of the setting
* @param max_val Sets the maximum allowed value of the setting
* @param name Label for the setting
*/
explicit RangedSetting(const Type& default_val, const Type& min_val, const Type& max_val,
const std::string& name)
: BasicSetting<Type>{default_val, name},
BasicRangedSetting<Type>{default_val, min_val, max_val, name}, Setting<Type>{default_val,
name} {}
virtual ~RangedSetting() = default;
// The following are needed to avoid a MSVC bug
// (source: https://stackoverflow.com/questions/469508)
[[nodiscard]] const Type& GetValue() const override {
return Setting<Type>::GetValue();
}
[[nodiscard]] const Type& GetValue(bool need_global) const override {
return Setting<Type>::GetValue(need_global);
}
explicit operator const Type&() const override {
if (this->use_global) {
return this->global;
}
return this->custom;
}
/**
* Like BasicSetting's SetValue, except value is clamped to the range of the setting. Sets the
* appropriate value depending on the global state.
*
* @param value The desired value
*/
void SetValue(const Type& value) override {
const Type temp = std::clamp(value, this->minimum, this->maximum);
if (this->use_global) {
this->global = temp;
}
this->custom = temp;
}
/**
* Like BasicSetting's assignment overload, except value is clamped to the range of the setting.
* Uses the appropriate value depending on the global state.
*
* @param value The desired value
* @returns A reference to the setting's value
*/
const Type& operator=(const Type& value) override {
const Type temp = std::clamp(value, this->minimum, this->maximum);
if (this->use_global) {
this->global = temp;
return this->global;
}
this->custom = temp;
return this->custom;
}
};
/**
* The InputSetting class allows for getting a reference to either the global or custom members.
* This is required as we cannot easily modify the values of user-defined types within containers
@@ -289,13 +410,14 @@ struct Values {
BasicSetting<std::string> sink_id{"auto", "output_engine"};
BasicSetting<bool> audio_muted{false, "audio_muted"};
Setting<bool> enable_audio_stretching{true, "enable_audio_stretching"};
Setting<u8> volume{100, "volume"};
RangedSetting<u8> volume{100, 0, 100, "volume"};
// Core
Setting<bool> use_multi_core{true, "use_multi_core"};
// Cpu
Setting<CPUAccuracy> cpu_accuracy{CPUAccuracy::Auto, "cpu_accuracy"};
RangedSetting<CPUAccuracy> cpu_accuracy{CPUAccuracy::Auto, CPUAccuracy::Auto,
CPUAccuracy::Unsafe, "cpu_accuracy"};
// TODO: remove cpu_accuracy_first_time, migration setting added 8 July 2021
BasicSetting<bool> cpu_accuracy_first_time{true, "cpu_accuracy_first_time"};
BasicSetting<bool> cpu_debug_mode{false, "cpu_debug_mode"};
@@ -317,7 +439,8 @@ struct Values {
Setting<bool> cpuopt_unsafe_fastmem_check{true, "cpuopt_unsafe_fastmem_check"};
// Renderer
Setting<RendererBackend> renderer_backend{RendererBackend::OpenGL, "backend"};
RangedSetting<RendererBackend> renderer_backend{
RendererBackend::OpenGL, RendererBackend::OpenGL, RendererBackend::Vulkan, "backend"};
BasicSetting<bool> renderer_debug{false, "debug"};
BasicSetting<bool> renderer_shader_feedback{false, "shader_feedback"};
BasicSetting<bool> enable_nsight_aftermath{false, "nsight_aftermath"};
@@ -328,26 +451,28 @@ struct Values {
Setting<u16> resolution_factor{1, "resolution_factor"};
// *nix platforms may have issues with the borderless windowed fullscreen mode.
// Default to exclusive fullscreen on these platforms for now.
Setting<FullscreenMode> fullscreen_mode{
RangedSetting<FullscreenMode> fullscreen_mode{
#ifdef _WIN32
FullscreenMode::Borderless,
#else
FullscreenMode::Exclusive,
#endif
"fullscreen_mode"};
Setting<int> aspect_ratio{0, "aspect_ratio"};
Setting<int> max_anisotropy{0, "max_anisotropy"};
FullscreenMode::Borderless, FullscreenMode::Exclusive, "fullscreen_mode"};
RangedSetting<int> aspect_ratio{0, 0, 3, "aspect_ratio"};
RangedSetting<int> max_anisotropy{0, 0, 4, "max_anisotropy"};
Setting<bool> use_speed_limit{true, "use_speed_limit"};
Setting<u16> speed_limit{100, "speed_limit"};
RangedSetting<u16> speed_limit{100, 0, 9999, "speed_limit"};
Setting<bool> use_disk_shader_cache{true, "use_disk_shader_cache"};
Setting<GPUAccuracy> gpu_accuracy{GPUAccuracy::High, "gpu_accuracy"};
RangedSetting<GPUAccuracy> gpu_accuracy{GPUAccuracy::High, GPUAccuracy::Normal,
GPUAccuracy::Extreme, "gpu_accuracy"};
Setting<bool> use_asynchronous_gpu_emulation{true, "use_asynchronous_gpu_emulation"};
Setting<bool> use_nvdec_emulation{true, "use_nvdec_emulation"};
Setting<bool> accelerate_astc{true, "accelerate_astc"};
Setting<bool> use_vsync{true, "use_vsync"};
BasicSetting<u16> fps_cap{1000, "fps_cap"};
BasicRangedSetting<u16> fps_cap{1000, 1, 1000, "fps_cap"};
BasicSetting<bool> disable_fps_limit{false, "disable_fps_limit"};
Setting<ShaderBackend> shader_backend{ShaderBackend::GLASM, "shader_backend"};
RangedSetting<ShaderBackend> shader_backend{ShaderBackend::GLASM, ShaderBackend::GLSL,
ShaderBackend::SPIRV, "shader_backend"};
Setting<bool> use_asynchronous_shaders{false, "use_asynchronous_shaders"};
Setting<bool> use_fast_gpu_time{true, "use_fast_gpu_time"};
Setting<bool> use_caches_gc{false, "use_caches_gc"};
@@ -364,10 +489,10 @@ struct Values {
std::chrono::seconds custom_rtc_differential;
BasicSetting<s32> current_user{0, "current_user"};
Setting<s32> language_index{1, "language_index"};
Setting<s32> region_index{1, "region_index"};
Setting<s32> time_zone_index{0, "time_zone_index"};
Setting<s32> sound_index{1, "sound_index"};
RangedSetting<s32> language_index{1, 0, 16, "language_index"};
RangedSetting<s32> region_index{1, 0, 6, "region_index"};
RangedSetting<s32> time_zone_index{0, 0, 45, "time_zone_index"};
RangedSetting<s32> sound_index{1, 0, 2, "sound_index"};
// Controls
InputSetting<std::array<PlayerInput, 10>> players;
@@ -384,7 +509,7 @@ struct Values {
"udp_input_servers"};
BasicSetting<bool> mouse_panning{false, "mouse_panning"};
BasicSetting<u8> mouse_panning_sensitivity{10, "mouse_panning_sensitivity"};
BasicRangedSetting<u8> mouse_panning_sensitivity{10, 1, 100, "mouse_panning_sensitivity"};
BasicSetting<bool> mouse_enabled{false, "mouse_enabled"};
std::string mouse_device;
MouseButtonsRaw mouse_buttons;
@@ -433,9 +558,10 @@ struct Values {
BasicSetting<std::string> log_filter{"*:Info", "log_filter"};
BasicSetting<bool> use_dev_keys{false, "use_dev_keys"};
// Services
// Network
BasicSetting<std::string> bcat_backend{"none", "bcat_backend"};
BasicSetting<bool> bcat_boxcat_local{false, "bcat_boxcat_local"};
BasicSetting<std::string> network_interface{std::string(), "network_interface"};
// WebService
BasicSetting<bool> enable_telemetry{true, "enable_telemetry"};

View File

@@ -46,15 +46,13 @@ public:
ElementPtr* new_ptr = new ElementPtr();
write_ptr->next.store(new_ptr, std::memory_order_release);
write_ptr = new_ptr;
++size;
const size_t previous_size{size++};
// Acquire the mutex and then immediately release it as a fence.
// cv_mutex must be held or else there will be a missed wakeup if the other thread is in the
// line before cv.wait
// TODO(bunnei): This can be replaced with C++20 waitable atomics when properly supported.
// See discussion on https://github.com/yuzu-emu/yuzu/pull/3173 for details.
if (previous_size == 0) {
std::lock_guard lock{cv_mutex};
}
std::lock_guard lock{cv_mutex};
cv.notify_one();
}

View File

@@ -636,6 +636,8 @@ add_library(core STATIC
memory.h
network/network.cpp
network/network.h
network/network_interface.cpp
network/network_interface.h
network/sockets.h
perf_stats.cpp
perf_stats.h

View File

@@ -494,12 +494,6 @@ const ARM_Interface& System::CurrentArmInterface() const {
return impl->kernel.CurrentPhysicalCore().ArmInterface();
}
std::size_t System::CurrentCoreIndex() const {
std::size_t core = impl->kernel.GetCurrentHostThreadID();
ASSERT(core < Core::Hardware::NUM_CPU_CORES);
return core;
}
Kernel::PhysicalCore& System::CurrentPhysicalCore() {
return impl->kernel.CurrentPhysicalCore();
}

View File

@@ -205,9 +205,6 @@ public:
/// Gets an ARM interface to the CPU core that is currently running
[[nodiscard]] const ARM_Interface& CurrentArmInterface() const;
/// Gets the index of the currently running CPU core
[[nodiscard]] std::size_t CurrentCoreIndex() const;
/// Gets the physical core for the CPU core that is currently running
[[nodiscard]] Kernel::PhysicalCore& CurrentPhysicalCore();

View File

@@ -21,34 +21,25 @@ namespace Core {
CpuManager::CpuManager(System& system_) : system{system_} {}
CpuManager::~CpuManager() = default;
void CpuManager::ThreadStart(CpuManager& cpu_manager, std::size_t core) {
cpu_manager.RunThread(core);
void CpuManager::ThreadStart(std::stop_token stop_token, CpuManager& cpu_manager,
std::size_t core) {
cpu_manager.RunThread(stop_token, core);
}
void CpuManager::Initialize() {
running_mode = true;
if (is_multicore) {
for (std::size_t core = 0; core < Core::Hardware::NUM_CPU_CORES; core++) {
core_data[core].host_thread =
std::make_unique<std::thread>(ThreadStart, std::ref(*this), core);
core_data[core].host_thread = std::jthread(ThreadStart, std::ref(*this), core);
}
} else {
core_data[0].host_thread = std::make_unique<std::thread>(ThreadStart, std::ref(*this), 0);
core_data[0].host_thread = std::jthread(ThreadStart, std::ref(*this), 0);
}
}
void CpuManager::Shutdown() {
running_mode = false;
Pause(false);
if (is_multicore) {
for (auto& data : core_data) {
data.host_thread->join();
data.host_thread.reset();
}
} else {
core_data[0].host_thread->join();
core_data[0].host_thread.reset();
}
}
std::function<void(void*)> CpuManager::GetGuestThreadStartFunc() {
@@ -127,17 +118,18 @@ void CpuManager::MultiCoreRunGuestLoop() {
physical_core = &kernel.CurrentPhysicalCore();
}
system.ExitDynarmicProfile();
physical_core->ArmInterface().ClearExclusiveState();
kernel.CurrentScheduler()->RescheduleCurrentCore();
{
Kernel::KScopedDisableDispatch dd(kernel);
physical_core->ArmInterface().ClearExclusiveState();
}
}
}
void CpuManager::MultiCoreRunIdleThread() {
auto& kernel = system.Kernel();
while (true) {
auto& physical_core = kernel.CurrentPhysicalCore();
physical_core.Idle();
kernel.CurrentScheduler()->RescheduleCurrentCore();
Kernel::KScopedDisableDispatch dd(kernel);
kernel.CurrentPhysicalCore().Idle();
}
}
@@ -145,12 +137,12 @@ void CpuManager::MultiCoreRunSuspendThread() {
auto& kernel = system.Kernel();
kernel.CurrentScheduler()->OnThreadStart();
while (true) {
auto core = kernel.GetCurrentHostThreadID();
auto core = kernel.CurrentPhysicalCoreIndex();
auto& scheduler = *kernel.CurrentScheduler();
Kernel::KThread* current_thread = scheduler.GetCurrentThread();
Common::Fiber::YieldTo(current_thread->GetHostContext(), *core_data[core].host_context);
ASSERT(scheduler.ContextSwitchPending());
ASSERT(core == kernel.GetCurrentHostThreadID());
ASSERT(core == kernel.CurrentPhysicalCoreIndex());
scheduler.RescheduleCurrentCore();
}
}
@@ -317,7 +309,7 @@ void CpuManager::Pause(bool paused) {
}
}
void CpuManager::RunThread(std::size_t core) {
void CpuManager::RunThread(std::stop_token stop_token, std::size_t core) {
/// Initialization
system.RegisterCoreThread(core);
std::string name;
@@ -356,8 +348,8 @@ void CpuManager::RunThread(std::size_t core) {
sc_sync_first_use = false;
}
// Abort if emulation was killed before the session really starts
if (!system.IsPoweredOn()) {
// Emulation was stopped
if (stop_token.stop_requested()) {
return;
}

View File

@@ -78,9 +78,9 @@ private:
void SingleCoreRunSuspendThread();
void SingleCorePause(bool paused);
static void ThreadStart(CpuManager& cpu_manager, std::size_t core);
static void ThreadStart(std::stop_token stop_token, CpuManager& cpu_manager, std::size_t core);
void RunThread(std::size_t core);
void RunThread(std::stop_token stop_token, std::size_t core);
struct CoreData {
std::shared_ptr<Common::Fiber> host_context;
@@ -89,7 +89,7 @@ private:
std::atomic<bool> is_running;
std::atomic<bool> is_paused;
std::atomic<bool> initialized;
std::unique_ptr<std::thread> host_thread;
std::jthread host_thread;
};
std::atomic<bool> running_mode{};

View File

@@ -28,7 +28,7 @@ bool ReadFromUser(Core::System& system, s32* out, VAddr address) {
bool DecrementIfLessThan(Core::System& system, s32* out, VAddr address, s32 value) {
auto& monitor = system.Monitor();
const auto current_core = system.CurrentCoreIndex();
const auto current_core = system.Kernel().CurrentPhysicalCoreIndex();
// TODO(bunnei): We should disable interrupts here via KScopedInterruptDisable.
// TODO(bunnei): We should call CanAccessAtomic(..) here.
@@ -58,7 +58,7 @@ bool DecrementIfLessThan(Core::System& system, s32* out, VAddr address, s32 valu
bool UpdateIfEqual(Core::System& system, s32* out, VAddr address, s32 value, s32 new_value) {
auto& monitor = system.Monitor();
const auto current_core = system.CurrentCoreIndex();
const auto current_core = system.Kernel().CurrentPhysicalCoreIndex();
// TODO(bunnei): We should disable interrupts here via KScopedInterruptDisable.
// TODO(bunnei): We should call CanAccessAtomic(..) here.

View File

@@ -170,6 +170,10 @@ public:
}
}
const std::string& GetName() const {
return name;
}
private:
void RegisterWithKernel();
void UnregisterWithKernel();

View File

@@ -35,7 +35,7 @@ bool WriteToUser(Core::System& system, VAddr address, const u32* p) {
bool UpdateLockAtomic(Core::System& system, u32* out, VAddr address, u32 if_zero,
u32 new_orr_mask) {
auto& monitor = system.Monitor();
const auto current_core = system.CurrentCoreIndex();
const auto current_core = system.Kernel().CurrentPhysicalCoreIndex();
// Load the value from the address.
const auto expected = monitor.ExclusiveRead32(current_core, address);

View File

@@ -13,6 +13,7 @@ ResultCode KHandleTable::Finalize() {
// Get the table and clear our record of it.
u16 saved_table_size = 0;
{
KScopedDisableDispatch dd(kernel);
KScopedSpinLock lk(m_lock);
std::swap(m_table_size, saved_table_size);
@@ -43,6 +44,7 @@ bool KHandleTable::Remove(Handle handle) {
// Find the object and free the entry.
KAutoObject* obj = nullptr;
{
KScopedDisableDispatch dd(kernel);
KScopedSpinLock lk(m_lock);
if (this->IsValidHandle(handle)) {
@@ -61,6 +63,7 @@ bool KHandleTable::Remove(Handle handle) {
}
ResultCode KHandleTable::Add(Handle* out_handle, KAutoObject* obj, u16 type) {
KScopedDisableDispatch dd(kernel);
KScopedSpinLock lk(m_lock);
// Never exceed our capacity.
@@ -83,6 +86,7 @@ ResultCode KHandleTable::Add(Handle* out_handle, KAutoObject* obj, u16 type) {
}
ResultCode KHandleTable::Reserve(Handle* out_handle) {
KScopedDisableDispatch dd(kernel);
KScopedSpinLock lk(m_lock);
// Never exceed our capacity.
@@ -93,6 +97,7 @@ ResultCode KHandleTable::Reserve(Handle* out_handle) {
}
void KHandleTable::Unreserve(Handle handle) {
KScopedDisableDispatch dd(kernel);
KScopedSpinLock lk(m_lock);
// Unpack the handle.
@@ -111,6 +116,7 @@ void KHandleTable::Unreserve(Handle handle) {
}
void KHandleTable::Register(Handle handle, KAutoObject* obj, u16 type) {
KScopedDisableDispatch dd(kernel);
KScopedSpinLock lk(m_lock);
// Unpack the handle.

View File

@@ -69,6 +69,7 @@ public:
template <typename T = KAutoObject>
KScopedAutoObject<T> GetObjectWithoutPseudoHandle(Handle handle) const {
// Lock and look up in table.
KScopedDisableDispatch dd(kernel);
KScopedSpinLock lk(m_lock);
if constexpr (std::is_same_v<T, KAutoObject>) {
@@ -123,6 +124,7 @@ public:
size_t num_opened;
{
// Lock the table.
KScopedDisableDispatch dd(kernel);
KScopedSpinLock lk(m_lock);
for (num_opened = 0; num_opened < num_handles; num_opened++) {
// Get the current handle.

View File

@@ -59,6 +59,7 @@ void SetupMainThread(Core::System& system, KProcess& owner_process, u32 priority
thread->GetContext64().cpu_registers[0] = 0;
thread->GetContext32().cpu_registers[1] = thread_handle;
thread->GetContext64().cpu_registers[1] = thread_handle;
thread->DisableDispatch();
auto& kernel = system.Kernel();
// Threads by default are dormant, wake up the main thread so it runs when the scheduler fires

View File

@@ -376,20 +376,18 @@ void KScheduler::ClearSchedulerUpdateNeeded(KernelCore& kernel) {
}
void KScheduler::DisableScheduling(KernelCore& kernel) {
if (auto* scheduler = kernel.CurrentScheduler(); scheduler) {
ASSERT(scheduler->GetCurrentThread()->GetDisableDispatchCount() >= 0);
scheduler->GetCurrentThread()->DisableDispatch();
}
ASSERT(GetCurrentThreadPointer(kernel)->GetDisableDispatchCount() >= 0);
GetCurrentThreadPointer(kernel)->DisableDispatch();
}
void KScheduler::EnableScheduling(KernelCore& kernel, u64 cores_needing_scheduling) {
if (auto* scheduler = kernel.CurrentScheduler(); scheduler) {
ASSERT(scheduler->GetCurrentThread()->GetDisableDispatchCount() >= 1);
if (scheduler->GetCurrentThread()->GetDisableDispatchCount() >= 1) {
scheduler->GetCurrentThread()->EnableDispatch();
}
ASSERT(GetCurrentThreadPointer(kernel)->GetDisableDispatchCount() >= 1);
if (GetCurrentThreadPointer(kernel)->GetDisableDispatchCount() > 1) {
GetCurrentThreadPointer(kernel)->EnableDispatch();
} else {
RescheduleCores(kernel, cores_needing_scheduling);
}
RescheduleCores(kernel, cores_needing_scheduling);
}
u64 KScheduler::UpdateHighestPriorityThreads(KernelCore& kernel) {
@@ -617,13 +615,17 @@ KScheduler::KScheduler(Core::System& system_, s32 core_id_) : system{system_}, c
state.highest_priority_thread = nullptr;
}
KScheduler::~KScheduler() {
void KScheduler::Finalize() {
if (idle_thread) {
idle_thread->Close();
idle_thread = nullptr;
}
}
KScheduler::~KScheduler() {
ASSERT(!idle_thread);
}
KThread* KScheduler::GetCurrentThread() const {
if (auto result = current_thread.load(); result) {
return result;
@@ -642,10 +644,12 @@ void KScheduler::RescheduleCurrentCore() {
if (phys_core.IsInterrupted()) {
phys_core.ClearInterrupt();
}
guard.Lock();
if (state.needs_scheduling.load()) {
Schedule();
} else {
GetCurrentThread()->EnableDispatch();
guard.Unlock();
}
}
@@ -655,26 +659,33 @@ void KScheduler::OnThreadStart() {
}
void KScheduler::Unload(KThread* thread) {
ASSERT(thread);
LOG_TRACE(Kernel, "core {}, unload thread {}", core_id, thread ? thread->GetName() : "nullptr");
if (thread) {
if (thread->IsCallingSvc()) {
thread->ClearIsCallingSvc();
}
if (!thread->IsTerminationRequested()) {
prev_thread = thread;
Core::ARM_Interface& cpu_core = system.ArmInterface(core_id);
cpu_core.SaveContext(thread->GetContext32());
cpu_core.SaveContext(thread->GetContext64());
// Save the TPIDR_EL0 system register in case it was modified.
thread->SetTPIDR_EL0(cpu_core.GetTPIDR_EL0());
cpu_core.ClearExclusiveState();
} else {
prev_thread = nullptr;
}
thread->context_guard.Unlock();
if (thread->IsCallingSvc()) {
thread->ClearIsCallingSvc();
}
auto& physical_core = system.Kernel().PhysicalCore(core_id);
if (!physical_core.IsInitialized()) {
return;
}
Core::ARM_Interface& cpu_core = physical_core.ArmInterface();
cpu_core.SaveContext(thread->GetContext32());
cpu_core.SaveContext(thread->GetContext64());
// Save the TPIDR_EL0 system register in case it was modified.
thread->SetTPIDR_EL0(cpu_core.GetTPIDR_EL0());
cpu_core.ClearExclusiveState();
if (!thread->IsTerminationRequested() && thread->GetActiveCore() == core_id) {
prev_thread = thread;
} else {
prev_thread = nullptr;
}
thread->context_guard.Unlock();
}
void KScheduler::Reload(KThread* thread) {
@@ -683,11 +694,6 @@ void KScheduler::Reload(KThread* thread) {
if (thread) {
ASSERT_MSG(thread->GetState() == ThreadState::Runnable, "Thread must be runnable.");
auto* const thread_owner_process = thread->GetOwnerProcess();
if (thread_owner_process != nullptr) {
system.Kernel().MakeCurrentProcess(thread_owner_process);
}
Core::ARM_Interface& cpu_core = system.ArmInterface(core_id);
cpu_core.LoadContext(thread->GetContext32());
cpu_core.LoadContext(thread->GetContext64());
@@ -705,7 +711,7 @@ void KScheduler::SwitchContextStep2() {
}
void KScheduler::ScheduleImpl() {
KThread* previous_thread = current_thread.load();
KThread* previous_thread = GetCurrentThread();
KThread* next_thread = state.highest_priority_thread;
state.needs_scheduling = false;
@@ -717,10 +723,15 @@ void KScheduler::ScheduleImpl() {
// If we're not actually switching thread, there's nothing to do.
if (next_thread == current_thread.load()) {
previous_thread->EnableDispatch();
guard.Unlock();
return;
}
if (next_thread->GetCurrentCore() != core_id) {
next_thread->SetCurrentCore(core_id);
}
current_thread.store(next_thread);
KProcess* const previous_process = system.Kernel().CurrentProcess();
@@ -731,11 +742,7 @@ void KScheduler::ScheduleImpl() {
Unload(previous_thread);
std::shared_ptr<Common::Fiber>* old_context;
if (previous_thread != nullptr) {
old_context = &previous_thread->GetHostContext();
} else {
old_context = &idle_thread->GetHostContext();
}
old_context = &previous_thread->GetHostContext();
guard.Unlock();
Common::Fiber::YieldTo(*old_context, *switch_fiber);

View File

@@ -33,6 +33,8 @@ public:
explicit KScheduler(Core::System& system_, s32 core_id_);
~KScheduler();
void Finalize();
/// Reschedules to the next available thread (call after current thread is suspended)
void RescheduleCurrentCore();

View File

@@ -14,6 +14,7 @@
#include "common/fiber.h"
#include "common/logging/log.h"
#include "common/scope_exit.h"
#include "common/settings.h"
#include "common/thread_queue_list.h"
#include "core/core.h"
#include "core/cpu_manager.h"
@@ -188,7 +189,7 @@ ResultCode KThread::Initialize(KThreadFunction func, uintptr_t arg, VAddr user_s
// Setup the stack parameters.
StackParameters& sp = GetStackParameters();
sp.cur_thread = this;
sp.disable_count = 1;
sp.disable_count = 0;
SetInExceptionHandler();
// Set thread ID.
@@ -215,9 +216,10 @@ ResultCode KThread::InitializeThread(KThread* thread, KThreadFunction func, uint
// Initialize the thread.
R_TRY(thread->Initialize(func, arg, user_stack_top, prio, core, owner, type));
// Initialize host context.
// Initialize emulation parameters.
thread->host_context =
std::make_shared<Common::Fiber>(std::move(init_func), init_func_parameter);
thread->is_single_core = !Settings::values.use_multi_core.GetValue();
return ResultSuccess;
}
@@ -970,6 +972,9 @@ ResultCode KThread::Run() {
// Set our state and finish.
SetState(ThreadState::Runnable);
DisableDispatch();
return ResultSuccess;
}
}
@@ -1054,4 +1059,16 @@ s32 GetCurrentCoreId(KernelCore& kernel) {
return GetCurrentThread(kernel).GetCurrentCore();
}
KScopedDisableDispatch::~KScopedDisableDispatch() {
if (GetCurrentThread(kernel).GetDisableDispatchCount() <= 1) {
auto scheduler = kernel.CurrentScheduler();
if (scheduler) {
scheduler->RescheduleCurrentCore();
}
} else {
GetCurrentThread(kernel).EnableDispatch();
}
}
} // namespace Kernel

View File

@@ -450,16 +450,39 @@ public:
sleeping_queue = q;
}
[[nodiscard]] bool IsKernelThread() const {
return GetActiveCore() == 3;
}
[[nodiscard]] bool IsDispatchTrackingDisabled() const {
return is_single_core || IsKernelThread();
}
[[nodiscard]] s32 GetDisableDispatchCount() const {
if (IsDispatchTrackingDisabled()) {
// TODO(bunnei): Until kernel threads are emulated, we cannot enable/disable dispatch.
return 1;
}
return this->GetStackParameters().disable_count;
}
void DisableDispatch() {
if (IsDispatchTrackingDisabled()) {
// TODO(bunnei): Until kernel threads are emulated, we cannot enable/disable dispatch.
return;
}
ASSERT(GetCurrentThread(kernel).GetDisableDispatchCount() >= 0);
this->GetStackParameters().disable_count++;
}
void EnableDispatch() {
if (IsDispatchTrackingDisabled()) {
// TODO(bunnei): Until kernel threads are emulated, we cannot enable/disable dispatch.
return;
}
ASSERT(GetCurrentThread(kernel).GetDisableDispatchCount() > 0);
this->GetStackParameters().disable_count--;
}
@@ -708,6 +731,7 @@ private:
// For emulation
std::shared_ptr<Common::Fiber> host_context{};
bool is_single_core{};
// For debugging
std::vector<KSynchronizationObject*> wait_objects_for_debugging;
@@ -752,4 +776,16 @@ public:
}
};
class KScopedDisableDispatch {
public:
[[nodiscard]] explicit KScopedDisableDispatch(KernelCore& kernel_) : kernel{kernel_} {
GetCurrentThread(kernel).DisableDispatch();
}
~KScopedDisableDispatch();
private:
KernelCore& kernel;
};
} // namespace Kernel

View File

@@ -85,8 +85,9 @@ struct KernelCore::Impl {
}
void InitializeCores() {
for (auto& core : cores) {
core.Initialize(current_process->Is64BitProcess());
for (u32 core_id = 0; core_id < Core::Hardware::NUM_CPU_CORES; core_id++) {
cores[core_id].Initialize(current_process->Is64BitProcess());
system.Memory().SetCurrentPageTable(*current_process, core_id);
}
}
@@ -131,15 +132,6 @@ struct KernelCore::Impl {
next_user_process_id = KProcess::ProcessIDMin;
next_thread_id = 1;
for (u32 core_id = 0; core_id < Core::Hardware::NUM_CPU_CORES; core_id++) {
if (suspend_threads[core_id]) {
suspend_threads[core_id]->Close();
suspend_threads[core_id] = nullptr;
}
schedulers[core_id].reset();
}
cores.clear();
global_handle_table->Finalize();
@@ -167,6 +159,16 @@ struct KernelCore::Impl {
CleanupObject(time_shared_mem);
CleanupObject(system_resource_limit);
for (u32 core_id = 0; core_id < Core::Hardware::NUM_CPU_CORES; core_id++) {
if (suspend_threads[core_id]) {
suspend_threads[core_id]->Close();
suspend_threads[core_id] = nullptr;
}
schedulers[core_id]->Finalize();
schedulers[core_id].reset();
}
// Next host thead ID to use, 0-3 IDs represent core threads, >3 represent others
next_host_thread_id = Core::Hardware::NUM_CPU_CORES;
@@ -257,14 +259,6 @@ struct KernelCore::Impl {
void MakeCurrentProcess(KProcess* process) {
current_process = process;
if (process == nullptr) {
return;
}
const u32 core_id = GetCurrentHostThreadID();
if (core_id < Core::Hardware::NUM_CPU_CORES) {
system.Memory().SetCurrentPageTable(*process, core_id);
}
}
/// Creates a new host thread ID, should only be called by GetHostThreadId
@@ -824,16 +818,20 @@ const Kernel::PhysicalCore& KernelCore::PhysicalCore(std::size_t id) const {
return impl->cores[id];
}
size_t KernelCore::CurrentPhysicalCoreIndex() const {
const u32 core_id = impl->GetCurrentHostThreadID();
if (core_id >= Core::Hardware::NUM_CPU_CORES) {
return Core::Hardware::NUM_CPU_CORES - 1;
}
return core_id;
}
Kernel::PhysicalCore& KernelCore::CurrentPhysicalCore() {
u32 core_id = impl->GetCurrentHostThreadID();
ASSERT(core_id < Core::Hardware::NUM_CPU_CORES);
return impl->cores[core_id];
return impl->cores[CurrentPhysicalCoreIndex()];
}
const Kernel::PhysicalCore& KernelCore::CurrentPhysicalCore() const {
u32 core_id = impl->GetCurrentHostThreadID();
ASSERT(core_id < Core::Hardware::NUM_CPU_CORES);
return impl->cores[core_id];
return impl->cores[CurrentPhysicalCoreIndex()];
}
Kernel::KScheduler* KernelCore::CurrentScheduler() {
@@ -1026,6 +1024,9 @@ void KernelCore::Suspend(bool in_suspention) {
impl->suspend_threads[core_id]->SetState(state);
impl->suspend_threads[core_id]->SetWaitReasonForDebugging(
ThreadWaitReasonForDebugging::Suspended);
if (!should_suspend) {
impl->suspend_threads[core_id]->DisableDispatch();
}
}
}
}
@@ -1040,13 +1041,11 @@ void KernelCore::ExceptionalExit() {
}
void KernelCore::EnterSVCProfile() {
std::size_t core = impl->GetCurrentHostThreadID();
impl->svc_ticks[core] = MicroProfileEnter(MICROPROFILE_TOKEN(Kernel_SVC));
impl->svc_ticks[CurrentPhysicalCoreIndex()] = MicroProfileEnter(MICROPROFILE_TOKEN(Kernel_SVC));
}
void KernelCore::ExitSVCProfile() {
std::size_t core = impl->GetCurrentHostThreadID();
MicroProfileLeave(MICROPROFILE_TOKEN(Kernel_SVC), impl->svc_ticks[core]);
MicroProfileLeave(MICROPROFILE_TOKEN(Kernel_SVC), impl->svc_ticks[CurrentPhysicalCoreIndex()]);
}
std::weak_ptr<Kernel::ServiceThread> KernelCore::CreateServiceThread(const std::string& name) {

View File

@@ -146,6 +146,9 @@ public:
/// Gets the an instance of the respective physical CPU core.
const Kernel::PhysicalCore& PhysicalCore(std::size_t id) const;
/// Gets the current physical core index for the running host thread.
std::size_t CurrentPhysicalCoreIndex() const;
/// Gets the sole instance of the Scheduler at the current running core.
Kernel::KScheduler* CurrentScheduler();

View File

@@ -877,7 +877,7 @@ static ResultCode GetInfo(Core::System& system, u64* result, u64 info_id, Handle
const u64 thread_ticks = current_thread->GetCpuTime();
out_ticks = thread_ticks + (core_timing.GetCPUTicks() - prev_ctx_ticks);
} else if (same_thread && info_sub_id == system.CurrentCoreIndex()) {
} else if (same_thread && info_sub_id == system.Kernel().CurrentPhysicalCoreIndex()) {
out_ticks = core_timing.GetCPUTicks() - prev_ctx_ticks;
}
@@ -1078,8 +1078,8 @@ static ResultCode GetThreadContext(Core::System& system, VAddr out_context, Hand
for (auto i = 0; i < static_cast<s32>(Core::Hardware::NUM_CPU_CORES); ++i) {
if (thread.GetPointerUnsafe() == kernel.Scheduler(i).GetCurrentThread()) {
current = true;
break;
}
break;
}
// If the thread is current, retry until it isn't.

View File

@@ -11,6 +11,7 @@
#include "core/hle/service/nifm/nifm.h"
#include "core/hle/service/service.h"
#include "core/network/network.h"
#include "core/network/network_interface.h"
namespace Service::NIFM {
@@ -179,10 +180,10 @@ private:
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(ResultSuccess);
if (Settings::values.bcat_backend.GetValue() == "none") {
rb.PushEnum(RequestState::NotSubmitted);
} else {
if (Network::GetHostIPv4Address().has_value()) {
rb.PushEnum(RequestState::Connected);
} else {
rb.PushEnum(RequestState::NotSubmitted);
}
}
@@ -322,12 +323,15 @@ private:
void GetCurrentIpAddress(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service_NIFM, "(STUBBED) called");
const auto [ipv4, error] = Network::GetHostIPv4Address();
UNIMPLEMENTED_IF(error != Network::Errno::SUCCESS);
auto ipv4 = Network::GetHostIPv4Address();
if (!ipv4) {
LOG_ERROR(Service_NIFM, "Couldn't get host IPv4 address, defaulting to 0.0.0.0");
ipv4.emplace(Network::IPv4Address{0, 0, 0, 0});
}
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(ResultSuccess);
rb.PushRaw(ipv4);
rb.PushRaw(*ipv4);
}
void CreateTemporaryNetworkProfile(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_NIFM, "called");
@@ -354,10 +358,10 @@ private:
static_assert(sizeof(IpConfigInfo) == sizeof(IpAddressSetting) + sizeof(DnsSetting),
"IpConfigInfo has incorrect size.");
const IpConfigInfo ip_config_info{
IpConfigInfo ip_config_info{
.ip_address_setting{
.is_automatic{true},
.current_address{192, 168, 1, 100},
.current_address{0, 0, 0, 0},
.subnet_mask{255, 255, 255, 0},
.gateway{192, 168, 1, 1},
},
@@ -368,6 +372,19 @@ private:
},
};
const auto iface = Network::GetSelectedNetworkInterface();
if (iface) {
ip_config_info.ip_address_setting =
IpAddressSetting{.is_automatic{true},
.current_address{Network::TranslateIPv4(iface->ip_address)},
.subnet_mask{Network::TranslateIPv4(iface->subnet_mask)},
.gateway{Network::TranslateIPv4(iface->gateway)}};
} else {
LOG_ERROR(Service_NIFM,
"Couldn't get host network configuration info, using default values");
}
IPC::ResponseBuilder rb{ctx, 2 + (sizeof(IpConfigInfo) + 3) / sizeof(u32)};
rb.Push(ResultSuccess);
rb.PushRaw<IpConfigInfo>(ip_config_info);
@@ -384,10 +401,10 @@ private:
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(ResultSuccess);
if (Settings::values.bcat_backend.GetValue() == "none") {
rb.Push<u8>(0);
} else {
if (Network::GetHostIPv4Address().has_value()) {
rb.Push<u8>(1);
} else {
rb.Push<u8>(0);
}
}
void IsAnyInternetRequestAccepted(Kernel::HLERequestContext& ctx) {
@@ -395,10 +412,10 @@ private:
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(ResultSuccess);
if (Settings::values.bcat_backend.GetValue() == "none") {
rb.Push<u8>(0);
} else {
if (Network::GetHostIPv4Address().has_value()) {
rb.Push<u8>(1);
} else {
rb.Push<u8>(0);
}
}
};

View File

@@ -9,17 +9,20 @@
#include "core/core.h"
#include "core/hle/kernel/k_writable_event.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/service/kernel_helpers.h"
#include "core/hle/service/nvflinger/buffer_queue.h"
namespace Service::NVFlinger {
BufferQueue::BufferQueue(Kernel::KernelCore& kernel, u32 id_, u64 layer_id_)
: id(id_), layer_id(layer_id_), buffer_wait_event{kernel} {
Kernel::KAutoObject::Create(std::addressof(buffer_wait_event));
buffer_wait_event.Initialize("BufferQueue:WaitEvent");
BufferQueue::BufferQueue(Kernel::KernelCore& kernel, u32 id_, u64 layer_id_,
KernelHelpers::ServiceContext& service_context_)
: id(id_), layer_id(layer_id_), service_context{service_context_} {
buffer_wait_event = service_context.CreateEvent("BufferQueue:WaitEvent");
}
BufferQueue::~BufferQueue() = default;
BufferQueue::~BufferQueue() {
service_context.CloseEvent(buffer_wait_event);
}
void BufferQueue::SetPreallocatedBuffer(u32 slot, const IGBPBuffer& igbp_buffer) {
ASSERT(slot < buffer_slots);
@@ -41,7 +44,7 @@ void BufferQueue::SetPreallocatedBuffer(u32 slot, const IGBPBuffer& igbp_buffer)
.multi_fence = {},
};
buffer_wait_event.GetWritableEvent().Signal();
buffer_wait_event->GetWritableEvent().Signal();
}
std::optional<std::pair<u32, Service::Nvidia::MultiFence*>> BufferQueue::DequeueBuffer(u32 width,
@@ -119,7 +122,7 @@ void BufferQueue::CancelBuffer(u32 slot, const Service::Nvidia::MultiFence& mult
}
free_buffers_condition.notify_one();
buffer_wait_event.GetWritableEvent().Signal();
buffer_wait_event->GetWritableEvent().Signal();
}
std::optional<std::reference_wrapper<const BufferQueue::Buffer>> BufferQueue::AcquireBuffer() {
@@ -154,7 +157,7 @@ void BufferQueue::ReleaseBuffer(u32 slot) {
}
free_buffers_condition.notify_one();
buffer_wait_event.GetWritableEvent().Signal();
buffer_wait_event->GetWritableEvent().Signal();
}
void BufferQueue::Connect() {
@@ -169,7 +172,7 @@ void BufferQueue::Disconnect() {
std::unique_lock lock{queue_sequence_mutex};
queue_sequence.clear();
}
buffer_wait_event.GetWritableEvent().Signal();
buffer_wait_event->GetWritableEvent().Signal();
is_connect = false;
free_buffers_condition.notify_one();
}
@@ -189,11 +192,11 @@ u32 BufferQueue::Query(QueryType type) {
}
Kernel::KWritableEvent& BufferQueue::GetWritableBufferWaitEvent() {
return buffer_wait_event.GetWritableEvent();
return buffer_wait_event->GetWritableEvent();
}
Kernel::KReadableEvent& BufferQueue::GetBufferWaitEvent() {
return buffer_wait_event.GetReadableEvent();
return buffer_wait_event->GetReadableEvent();
}
} // namespace Service::NVFlinger

View File

@@ -24,6 +24,10 @@ class KReadableEvent;
class KWritableEvent;
} // namespace Kernel
namespace Service::KernelHelpers {
class ServiceContext;
} // namespace Service::KernelHelpers
namespace Service::NVFlinger {
constexpr u32 buffer_slots = 0x40;
@@ -54,7 +58,8 @@ public:
NativeWindowFormat = 2,
};
explicit BufferQueue(Kernel::KernelCore& kernel, u32 id_, u64 layer_id_);
explicit BufferQueue(Kernel::KernelCore& kernel, u32 id_, u64 layer_id_,
KernelHelpers::ServiceContext& service_context_);
~BufferQueue();
enum class BufferTransformFlags : u32 {
@@ -130,12 +135,14 @@ private:
std::list<u32> free_buffers;
std::array<Buffer, buffer_slots> buffers;
std::list<u32> queue_sequence;
Kernel::KEvent buffer_wait_event;
Kernel::KEvent* buffer_wait_event{};
std::mutex free_buffers_mutex;
std::condition_variable free_buffers_condition;
std::mutex queue_sequence_mutex;
KernelHelpers::ServiceContext& service_context;
};
} // namespace Service::NVFlinger

View File

@@ -61,12 +61,13 @@ void NVFlinger::SplitVSync() {
}
}
NVFlinger::NVFlinger(Core::System& system_) : system(system_) {
displays.emplace_back(0, "Default", system);
displays.emplace_back(1, "External", system);
displays.emplace_back(2, "Edid", system);
displays.emplace_back(3, "Internal", system);
displays.emplace_back(4, "Null", system);
NVFlinger::NVFlinger(Core::System& system_)
: system(system_), service_context(system_, "nvflinger") {
displays.emplace_back(0, "Default", service_context, system);
displays.emplace_back(1, "External", service_context, system);
displays.emplace_back(2, "Edid", service_context, system);
displays.emplace_back(3, "Internal", service_context, system);
displays.emplace_back(4, "Null", service_context, system);
guard = std::make_shared<std::mutex>();
// Schedule the screen composition events
@@ -146,7 +147,7 @@ std::optional<u64> NVFlinger::CreateLayer(u64 display_id) {
void NVFlinger::CreateLayerAtId(VI::Display& display, u64 layer_id) {
const u32 buffer_queue_id = next_buffer_queue_id++;
buffer_queues.emplace_back(
std::make_unique<BufferQueue>(system.Kernel(), buffer_queue_id, layer_id));
std::make_unique<BufferQueue>(system.Kernel(), buffer_queue_id, layer_id, service_context));
display.CreateLayer(layer_id, *buffer_queues.back());
}

View File

@@ -15,6 +15,7 @@
#include <vector>
#include "common/common_types.h"
#include "core/hle/service/kernel_helpers.h"
namespace Common {
class Event;
@@ -135,6 +136,8 @@ private:
std::unique_ptr<std::thread> vsync_thread;
std::unique_ptr<Common::Event> wait_event;
std::atomic<bool> is_running{};
KernelHelpers::ServiceContext service_context;
};
} // namespace Service::NVFlinger

View File

@@ -12,18 +12,21 @@
#include "core/hle/kernel/k_event.h"
#include "core/hle/kernel/k_readable_event.h"
#include "core/hle/kernel/k_writable_event.h"
#include "core/hle/service/kernel_helpers.h"
#include "core/hle/service/vi/display/vi_display.h"
#include "core/hle/service/vi/layer/vi_layer.h"
namespace Service::VI {
Display::Display(u64 id, std::string name_, Core::System& system)
: display_id{id}, name{std::move(name_)}, vsync_event{system.Kernel()} {
Kernel::KAutoObject::Create(std::addressof(vsync_event));
vsync_event.Initialize(fmt::format("Display VSync Event {}", id));
Display::Display(u64 id, std::string name_, KernelHelpers::ServiceContext& service_context_,
Core::System& system_)
: display_id{id}, name{std::move(name_)}, service_context{service_context_} {
vsync_event = service_context.CreateEvent(fmt::format("Display VSync Event {}", id));
}
Display::~Display() = default;
Display::~Display() {
service_context.CloseEvent(vsync_event);
}
Layer& Display::GetLayer(std::size_t index) {
return *layers.at(index);
@@ -34,11 +37,11 @@ const Layer& Display::GetLayer(std::size_t index) const {
}
Kernel::KReadableEvent& Display::GetVSyncEvent() {
return vsync_event.GetReadableEvent();
return vsync_event->GetReadableEvent();
}
void Display::SignalVSyncEvent() {
vsync_event.GetWritableEvent().Signal();
vsync_event->GetWritableEvent().Signal();
}
void Display::CreateLayer(u64 layer_id, NVFlinger::BufferQueue& buffer_queue) {

View File

@@ -18,6 +18,9 @@ class KEvent;
namespace Service::NVFlinger {
class BufferQueue;
}
namespace Service::KernelHelpers {
class ServiceContext;
} // namespace Service::KernelHelpers
namespace Service::VI {
@@ -31,10 +34,13 @@ class Display {
public:
/// Constructs a display with a given unique ID and name.
///
/// @param id The unique ID for this display.
/// @param id The unique ID for this display.
/// @param service_context_ The ServiceContext for the owning service.
/// @param name_ The name for this display.
/// @param system_ The global system instance.
///
Display(u64 id, std::string name_, Core::System& system);
Display(u64 id, std::string name_, KernelHelpers::ServiceContext& service_context_,
Core::System& system_);
~Display();
/// Gets the unique ID assigned to this display.
@@ -98,9 +104,10 @@ public:
private:
u64 display_id;
std::string name;
KernelHelpers::ServiceContext& service_context;
std::vector<std::shared_ptr<Layer>> layers;
Kernel::KEvent vsync_event;
Kernel::KEvent* vsync_event{};
};
} // namespace Service::VI

View File

@@ -4,8 +4,6 @@
#include <algorithm>
#include <cstring>
#include <optional>
#include <utility>
#include "common/assert.h"
#include "common/atomic_ops.h"
@@ -14,12 +12,10 @@
#include "common/page_table.h"
#include "common/settings.h"
#include "common/swap.h"
#include "core/arm/arm_interface.h"
#include "core/core.h"
#include "core/device_memory.h"
#include "core/hle/kernel/k_page_table.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/physical_memory.h"
#include "core/memory.h"
#include "video_core/gpu.h"
@@ -62,17 +58,7 @@ struct Memory::Impl {
}
}
bool IsValidVirtualAddress(const Kernel::KProcess& process, const VAddr vaddr) const {
const auto& page_table = process.PageTable().PageTableImpl();
const auto [pointer, type] = page_table.pointers[vaddr >> PAGE_BITS].PointerType();
return pointer != nullptr || type == Common::PageType::RasterizerCachedMemory;
}
bool IsValidVirtualAddress(VAddr vaddr) const {
return IsValidVirtualAddress(*system.CurrentProcess(), vaddr);
}
u8* GetPointerFromRasterizerCachedMemory(VAddr vaddr) const {
[[nodiscard]] u8* GetPointerFromRasterizerCachedMemory(VAddr vaddr) const {
const PAddr paddr{current_page_table->backing_addr[vaddr >> PAGE_BITS]};
if (!paddr) {
@@ -82,18 +68,6 @@ struct Memory::Impl {
return system.DeviceMemory().GetPointer(paddr) + vaddr;
}
u8* GetPointer(const VAddr vaddr) const {
const uintptr_t raw_pointer = current_page_table->pointers[vaddr >> PAGE_BITS].Raw();
if (u8* const pointer = Common::PageTable::PageInfo::ExtractPointer(raw_pointer)) {
return pointer + vaddr;
}
const auto type = Common::PageTable::PageInfo::ExtractType(raw_pointer);
if (type == Common::PageType::RasterizerCachedMemory) {
return GetPointerFromRasterizerCachedMemory(vaddr);
}
return nullptr;
}
u8 Read8(const VAddr addr) {
return Read<u8>(addr);
}
@@ -179,7 +153,7 @@ struct Memory::Impl {
std::string string;
string.reserve(max_length);
for (std::size_t i = 0; i < max_length; ++i) {
const char c = Read8(vaddr);
const char c = Read<s8>(vaddr);
if (c == '\0') {
break;
}
@@ -190,15 +164,14 @@ struct Memory::Impl {
return string;
}
void ReadBlock(const Kernel::KProcess& process, const VAddr src_addr, void* dest_buffer,
const std::size_t size) {
void WalkBlock(const Kernel::KProcess& process, const VAddr addr, const std::size_t size,
auto on_unmapped, auto on_memory, auto on_rasterizer, auto increment) {
const auto& page_table = process.PageTable().PageTableImpl();
std::size_t remaining_size = size;
std::size_t page_index = src_addr >> PAGE_BITS;
std::size_t page_offset = src_addr & PAGE_MASK;
std::size_t page_index = addr >> PAGE_BITS;
std::size_t page_offset = addr & PAGE_MASK;
while (remaining_size > 0) {
while (remaining_size) {
const std::size_t copy_amount =
std::min(static_cast<std::size_t>(PAGE_SIZE) - page_offset, remaining_size);
const auto current_vaddr = static_cast<VAddr>((page_index << PAGE_BITS) + page_offset);
@@ -206,22 +179,18 @@ struct Memory::Impl {
const auto [pointer, type] = page_table.pointers[page_index].PointerType();
switch (type) {
case Common::PageType::Unmapped: {
LOG_ERROR(HW_Memory,
"Unmapped ReadBlock @ 0x{:016X} (start address = 0x{:016X}, size = {})",
current_vaddr, src_addr, size);
std::memset(dest_buffer, 0, copy_amount);
on_unmapped(copy_amount, current_vaddr);
break;
}
case Common::PageType::Memory: {
DEBUG_ASSERT(pointer);
const u8* const src_ptr = pointer + page_offset + (page_index << PAGE_BITS);
std::memcpy(dest_buffer, src_ptr, copy_amount);
u8* mem_ptr = pointer + page_offset + (page_index << PAGE_BITS);
on_memory(copy_amount, mem_ptr);
break;
}
case Common::PageType::RasterizerCachedMemory: {
const u8* const host_ptr{GetPointerFromRasterizerCachedMemory(current_vaddr)};
system.GPU().FlushRegion(current_vaddr, copy_amount);
std::memcpy(dest_buffer, host_ptr, copy_amount);
u8* const host_ptr{GetPointerFromRasterizerCachedMemory(current_vaddr)};
on_rasterizer(current_vaddr, copy_amount, host_ptr);
break;
}
default:
@@ -230,248 +199,122 @@ struct Memory::Impl {
page_index++;
page_offset = 0;
dest_buffer = static_cast<u8*>(dest_buffer) + copy_amount;
increment(copy_amount);
remaining_size -= copy_amount;
}
}
void ReadBlockUnsafe(const Kernel::KProcess& process, const VAddr src_addr, void* dest_buffer,
const std::size_t size) {
const auto& page_table = process.PageTable().PageTableImpl();
std::size_t remaining_size = size;
std::size_t page_index = src_addr >> PAGE_BITS;
std::size_t page_offset = src_addr & PAGE_MASK;
while (remaining_size > 0) {
const std::size_t copy_amount =
std::min(static_cast<std::size_t>(PAGE_SIZE) - page_offset, remaining_size);
const auto current_vaddr = static_cast<VAddr>((page_index << PAGE_BITS) + page_offset);
const auto [pointer, type] = page_table.pointers[page_index].PointerType();
switch (type) {
case Common::PageType::Unmapped: {
template <bool UNSAFE>
void ReadBlockImpl(const Kernel::KProcess& process, const VAddr src_addr, void* dest_buffer,
const std::size_t size) {
WalkBlock(
process, src_addr, size,
[src_addr, size, &dest_buffer](const std::size_t copy_amount,
const VAddr current_vaddr) {
LOG_ERROR(HW_Memory,
"Unmapped ReadBlock @ 0x{:016X} (start address = 0x{:016X}, size = {})",
current_vaddr, src_addr, size);
std::memset(dest_buffer, 0, copy_amount);
break;
}
case Common::PageType::Memory: {
DEBUG_ASSERT(pointer);
const u8* const src_ptr = pointer + page_offset + (page_index << PAGE_BITS);
},
[&dest_buffer](const std::size_t copy_amount, const u8* const src_ptr) {
std::memcpy(dest_buffer, src_ptr, copy_amount);
break;
}
case Common::PageType::RasterizerCachedMemory: {
const u8* const host_ptr{GetPointerFromRasterizerCachedMemory(current_vaddr)};
},
[&system = system, &dest_buffer](const VAddr current_vaddr,
const std::size_t copy_amount,
const u8* const host_ptr) {
if constexpr (!UNSAFE) {
system.GPU().FlushRegion(current_vaddr, copy_amount);
}
std::memcpy(dest_buffer, host_ptr, copy_amount);
break;
}
default:
UNREACHABLE();
}
page_index++;
page_offset = 0;
dest_buffer = static_cast<u8*>(dest_buffer) + copy_amount;
remaining_size -= copy_amount;
}
},
[&dest_buffer](const std::size_t copy_amount) {
dest_buffer = static_cast<u8*>(dest_buffer) + copy_amount;
});
}
void ReadBlock(const VAddr src_addr, void* dest_buffer, const std::size_t size) {
ReadBlock(*system.CurrentProcess(), src_addr, dest_buffer, size);
ReadBlockImpl<false>(*system.CurrentProcess(), src_addr, dest_buffer, size);
}
void ReadBlockUnsafe(const VAddr src_addr, void* dest_buffer, const std::size_t size) {
ReadBlockUnsafe(*system.CurrentProcess(), src_addr, dest_buffer, size);
ReadBlockImpl<true>(*system.CurrentProcess(), src_addr, dest_buffer, size);
}
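Note (not part of the diff): ReadBlock, ReadBlockUnsafe, WriteBlock, WriteBlockUnsafe, ZeroBlock and CopyBlock previously each carried their own copy of the page-walking loop; the new WalkBlock keeps the loop in one place and dispatches each page-sized chunk to caller-supplied callbacks (unmapped, plain memory, rasterizer-cached, plus an increment step). The stand-alone sketch below reproduces the shape of that pattern against a toy page table; it is illustrative only and leaves out the rasterizer-cached case.

// Toy model of the WalkBlock pattern: split [addr, addr + size) into
// page-sized chunks and hand each chunk to a caller-supplied callback.
// The page table here is a plain vector of host pointers (nullptr == unmapped).
#include <algorithm>
#include <array>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

constexpr std::size_t PAGE_BITS = 12;
constexpr std::size_t PAGE_SIZE = std::size_t{1} << PAGE_BITS;
constexpr std::size_t PAGE_MASK = PAGE_SIZE - 1;

using VAddr = std::uint64_t;

struct PageTable {
    std::vector<std::uint8_t*> pointers;
};

template <typename OnUnmapped, typename OnMemory, typename Increment>
void WalkBlock(const PageTable& table, VAddr addr, std::size_t size,
               OnUnmapped&& on_unmapped, OnMemory&& on_memory, Increment&& increment) {
    std::size_t remaining = size;
    std::size_t page_index = addr >> PAGE_BITS;
    std::size_t page_offset = addr & PAGE_MASK;
    while (remaining > 0) {
        const std::size_t copy_amount = std::min(PAGE_SIZE - page_offset, remaining);
        std::uint8_t* const page = table.pointers.at(page_index);
        if (page == nullptr) {
            on_unmapped(copy_amount);
        } else {
            on_memory(copy_amount, page + page_offset);
        }
        ++page_index;
        page_offset = 0;
        increment(copy_amount);
        remaining -= copy_amount;
    }
}

// A block read is just a set of lambdas handed to the walker, the same way
// ReadBlockImpl is written in the diff above.
void ReadBlock(const PageTable& table, VAddr src, void* dest, std::size_t size) {
    WalkBlock(
        table, src, size,
        [&dest](std::size_t amount) { std::memset(dest, 0, amount); },
        [&dest](std::size_t amount, const std::uint8_t* src_ptr) {
            std::memcpy(dest, src_ptr, amount);
        },
        [&dest](std::size_t amount) { dest = static_cast<std::uint8_t*>(dest) + amount; });
}

int main() {
    std::array<std::uint8_t, 2 * PAGE_SIZE> backing{};
    backing.fill(0xAB);
    PageTable table;
    table.pointers = {backing.data(), nullptr}; // page 0 mapped, page 1 unmapped

    std::array<std::uint8_t, 64> out{};
    ReadBlock(table, PAGE_SIZE - 32, out.data(), out.size()); // read crosses into the unmapped page
    std::cout << static_cast<int>(out.front()) << ' ' << static_cast<int>(out.back()) << '\n'; // 171 0
}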
void WriteBlock(const Kernel::KProcess& process, const VAddr dest_addr, const void* src_buffer,
const std::size_t size) {
const auto& page_table = process.PageTable().PageTableImpl();
std::size_t remaining_size = size;
std::size_t page_index = dest_addr >> PAGE_BITS;
std::size_t page_offset = dest_addr & PAGE_MASK;
while (remaining_size > 0) {
const std::size_t copy_amount =
std::min(static_cast<std::size_t>(PAGE_SIZE) - page_offset, remaining_size);
const auto current_vaddr = static_cast<VAddr>((page_index << PAGE_BITS) + page_offset);
const auto [pointer, type] = page_table.pointers[page_index].PointerType();
switch (type) {
case Common::PageType::Unmapped: {
template <bool UNSAFE>
void WriteBlockImpl(const Kernel::KProcess& process, const VAddr dest_addr,
const void* src_buffer, const std::size_t size) {
WalkBlock(
process, dest_addr, size,
[dest_addr, size](const std::size_t copy_amount, const VAddr current_vaddr) {
LOG_ERROR(HW_Memory,
"Unmapped WriteBlock @ 0x{:016X} (start address = 0x{:016X}, size = {})",
current_vaddr, dest_addr, size);
break;
}
case Common::PageType::Memory: {
DEBUG_ASSERT(pointer);
u8* const dest_ptr = pointer + page_offset + (page_index << PAGE_BITS);
},
[&src_buffer](const std::size_t copy_amount, u8* const dest_ptr) {
std::memcpy(dest_ptr, src_buffer, copy_amount);
break;
}
case Common::PageType::RasterizerCachedMemory: {
u8* const host_ptr{GetPointerFromRasterizerCachedMemory(current_vaddr)};
system.GPU().InvalidateRegion(current_vaddr, copy_amount);
},
[&system = system, &src_buffer](const VAddr current_vaddr,
const std::size_t copy_amount, u8* const host_ptr) {
if constexpr (!UNSAFE) {
system.GPU().InvalidateRegion(current_vaddr, copy_amount);
}
std::memcpy(host_ptr, src_buffer, copy_amount);
break;
}
default:
UNREACHABLE();
}
page_index++;
page_offset = 0;
src_buffer = static_cast<const u8*>(src_buffer) + copy_amount;
remaining_size -= copy_amount;
}
}
void WriteBlockUnsafe(const Kernel::KProcess& process, const VAddr dest_addr,
const void* src_buffer, const std::size_t size) {
const auto& page_table = process.PageTable().PageTableImpl();
std::size_t remaining_size = size;
std::size_t page_index = dest_addr >> PAGE_BITS;
std::size_t page_offset = dest_addr & PAGE_MASK;
while (remaining_size > 0) {
const std::size_t copy_amount =
std::min(static_cast<std::size_t>(PAGE_SIZE) - page_offset, remaining_size);
const auto current_vaddr = static_cast<VAddr>((page_index << PAGE_BITS) + page_offset);
const auto [pointer, type] = page_table.pointers[page_index].PointerType();
switch (type) {
case Common::PageType::Unmapped: {
LOG_ERROR(HW_Memory,
"Unmapped WriteBlock @ 0x{:016X} (start address = 0x{:016X}, size = {})",
current_vaddr, dest_addr, size);
break;
}
case Common::PageType::Memory: {
DEBUG_ASSERT(pointer);
u8* const dest_ptr = pointer + page_offset + (page_index << PAGE_BITS);
std::memcpy(dest_ptr, src_buffer, copy_amount);
break;
}
case Common::PageType::RasterizerCachedMemory: {
u8* const host_ptr{GetPointerFromRasterizerCachedMemory(current_vaddr)};
std::memcpy(host_ptr, src_buffer, copy_amount);
break;
}
default:
UNREACHABLE();
}
page_index++;
page_offset = 0;
src_buffer = static_cast<const u8*>(src_buffer) + copy_amount;
remaining_size -= copy_amount;
}
},
[&src_buffer](const std::size_t copy_amount) {
src_buffer = static_cast<const u8*>(src_buffer) + copy_amount;
});
}
void WriteBlock(const VAddr dest_addr, const void* src_buffer, const std::size_t size) {
WriteBlock(*system.CurrentProcess(), dest_addr, src_buffer, size);
WriteBlockImpl<false>(*system.CurrentProcess(), dest_addr, src_buffer, size);
}
void WriteBlockUnsafe(const VAddr dest_addr, const void* src_buffer, const std::size_t size) {
WriteBlockUnsafe(*system.CurrentProcess(), dest_addr, src_buffer, size);
WriteBlockImpl<true>(*system.CurrentProcess(), dest_addr, src_buffer, size);
}
void ZeroBlock(const Kernel::KProcess& process, const VAddr dest_addr, const std::size_t size) {
const auto& page_table = process.PageTable().PageTableImpl();
std::size_t remaining_size = size;
std::size_t page_index = dest_addr >> PAGE_BITS;
std::size_t page_offset = dest_addr & PAGE_MASK;
while (remaining_size > 0) {
const std::size_t copy_amount =
std::min(static_cast<std::size_t>(PAGE_SIZE) - page_offset, remaining_size);
const auto current_vaddr = static_cast<VAddr>((page_index << PAGE_BITS) + page_offset);
const auto [pointer, type] = page_table.pointers[page_index].PointerType();
switch (type) {
case Common::PageType::Unmapped: {
WalkBlock(
process, dest_addr, size,
[dest_addr, size](const std::size_t copy_amount, const VAddr current_vaddr) {
LOG_ERROR(HW_Memory,
"Unmapped ZeroBlock @ 0x{:016X} (start address = 0x{:016X}, size = {})",
current_vaddr, dest_addr, size);
break;
}
case Common::PageType::Memory: {
DEBUG_ASSERT(pointer);
u8* const dest_ptr = pointer + page_offset + (page_index << PAGE_BITS);
},
[](const std::size_t copy_amount, u8* const dest_ptr) {
std::memset(dest_ptr, 0, copy_amount);
break;
}
case Common::PageType::RasterizerCachedMemory: {
u8* const host_ptr{GetPointerFromRasterizerCachedMemory(current_vaddr)};
},
[&system = system](const VAddr current_vaddr, const std::size_t copy_amount,
u8* const host_ptr) {
system.GPU().InvalidateRegion(current_vaddr, copy_amount);
std::memset(host_ptr, 0, copy_amount);
break;
}
default:
UNREACHABLE();
}
page_index++;
page_offset = 0;
remaining_size -= copy_amount;
}
}
void ZeroBlock(const VAddr dest_addr, const std::size_t size) {
ZeroBlock(*system.CurrentProcess(), dest_addr, size);
},
[](const std::size_t copy_amount) {});
}
void CopyBlock(const Kernel::KProcess& process, VAddr dest_addr, VAddr src_addr,
const std::size_t size) {
const auto& page_table = process.PageTable().PageTableImpl();
std::size_t remaining_size = size;
std::size_t page_index = src_addr >> PAGE_BITS;
std::size_t page_offset = src_addr & PAGE_MASK;
while (remaining_size > 0) {
const std::size_t copy_amount =
std::min(static_cast<std::size_t>(PAGE_SIZE) - page_offset, remaining_size);
const auto current_vaddr = static_cast<VAddr>((page_index << PAGE_BITS) + page_offset);
const auto [pointer, type] = page_table.pointers[page_index].PointerType();
switch (type) {
case Common::PageType::Unmapped: {
WalkBlock(
process, dest_addr, size,
[this, &process, &dest_addr, &src_addr, size](const std::size_t copy_amount,
const VAddr current_vaddr) {
LOG_ERROR(HW_Memory,
"Unmapped CopyBlock @ 0x{:016X} (start address = 0x{:016X}, size = {})",
current_vaddr, src_addr, size);
ZeroBlock(process, dest_addr, copy_amount);
break;
}
case Common::PageType::Memory: {
DEBUG_ASSERT(pointer);
const u8* src_ptr = pointer + page_offset + (page_index << PAGE_BITS);
WriteBlock(process, dest_addr, src_ptr, copy_amount);
break;
}
case Common::PageType::RasterizerCachedMemory: {
const u8* const host_ptr{GetPointerFromRasterizerCachedMemory(current_vaddr)};
},
[this, &process, &dest_addr](const std::size_t copy_amount, const u8* const src_ptr) {
WriteBlockImpl<false>(process, dest_addr, src_ptr, copy_amount);
},
[this, &system = system, &process, &dest_addr](
const VAddr current_vaddr, const std::size_t copy_amount, u8* const host_ptr) {
system.GPU().FlushRegion(current_vaddr, copy_amount);
WriteBlock(process, dest_addr, host_ptr, copy_amount);
break;
}
default:
UNREACHABLE();
}
page_index++;
page_offset = 0;
dest_addr += static_cast<VAddr>(copy_amount);
src_addr += static_cast<VAddr>(copy_amount);
remaining_size -= copy_amount;
}
}
void CopyBlock(VAddr dest_addr, VAddr src_addr, std::size_t size) {
return CopyBlock(*system.CurrentProcess(), dest_addr, src_addr, size);
WriteBlockImpl<false>(process, dest_addr, host_ptr, copy_amount);
},
[&dest_addr, &src_addr](const std::size_t copy_amount) {
dest_addr += static_cast<VAddr>(copy_amount);
src_addr += static_cast<VAddr>(copy_amount);
});
}
void RasterizerMarkRegionCached(VAddr vaddr, u64 size, bool cached) {
@@ -514,7 +357,7 @@ struct Memory::Impl {
} else {
// Switch page type to uncached if now uncached
switch (page_type) {
case Common::PageType::Unmapped:
case Common::PageType::Unmapped: // NOLINT(bugprone-branch-clone)
// It is not necessary for a process to have this region mapped into its address
// space, for example, a system module need not have a VRAM mapping.
break;
@@ -597,6 +440,44 @@ struct Memory::Impl {
}
}
[[nodiscard]] u8* GetPointerImpl(VAddr vaddr, auto on_unmapped, auto on_rasterizer) const {
// AARCH64 masks the upper 16 bit of all memory accesses
vaddr &= 0xffffffffffffLL;
if (vaddr >= 1uLL << current_page_table->GetAddressSpaceBits()) {
on_unmapped();
return nullptr;
}
// Avoid adding any extra logic to this fast-path block
const uintptr_t raw_pointer = current_page_table->pointers[vaddr >> PAGE_BITS].Raw();
if (u8* const pointer = Common::PageTable::PageInfo::ExtractPointer(raw_pointer)) {
return &pointer[vaddr];
}
switch (Common::PageTable::PageInfo::ExtractType(raw_pointer)) {
case Common::PageType::Unmapped:
on_unmapped();
return nullptr;
case Common::PageType::Memory:
ASSERT_MSG(false, "Mapped memory page without a pointer @ 0x{:016X}", vaddr);
return nullptr;
case Common::PageType::RasterizerCachedMemory: {
u8* const host_ptr{GetPointerFromRasterizerCachedMemory(vaddr)};
on_rasterizer();
return host_ptr;
}
default:
UNREACHABLE();
}
return nullptr;
}
[[nodiscard]] u8* GetPointer(const VAddr vaddr) const {
return GetPointerImpl(
vaddr, [vaddr]() { LOG_ERROR(HW_Memory, "Unmapped GetPointer @ 0x{:016X}", vaddr); },
[]() {});
}
/**
* Reads a particular data type out of memory at the given virtual address.
*
@@ -610,39 +491,17 @@ struct Memory::Impl {
*/
template <typename T>
T Read(VAddr vaddr) {
// AARCH64 masks the upper 16 bit of all memory accesses
vaddr &= 0xffffffffffffLL;
if (vaddr >= 1uLL << current_page_table->GetAddressSpaceBits()) {
LOG_ERROR(HW_Memory, "Unmapped Read{} @ 0x{:08X}", sizeof(T) * 8, vaddr);
return 0;
T result = 0;
const u8* const ptr = GetPointerImpl(
vaddr,
[vaddr]() {
LOG_ERROR(HW_Memory, "Unmapped Read{} @ 0x{:016X}", sizeof(T) * 8, vaddr);
},
[&system = system, vaddr]() { system.GPU().FlushRegion(vaddr, sizeof(T)); });
if (ptr) {
std::memcpy(&result, ptr, sizeof(T));
}
// Avoid adding any extra logic to this fast-path block
const uintptr_t raw_pointer = current_page_table->pointers[vaddr >> PAGE_BITS].Raw();
if (const u8* const pointer = Common::PageTable::PageInfo::ExtractPointer(raw_pointer)) {
T value;
std::memcpy(&value, &pointer[vaddr], sizeof(T));
return value;
}
switch (Common::PageTable::PageInfo::ExtractType(raw_pointer)) {
case Common::PageType::Unmapped:
LOG_ERROR(HW_Memory, "Unmapped Read{} @ 0x{:08X}", sizeof(T) * 8, vaddr);
return 0;
case Common::PageType::Memory:
ASSERT_MSG(false, "Mapped memory page without a pointer @ {:016X}", vaddr);
break;
case Common::PageType::RasterizerCachedMemory: {
const u8* const host_ptr{GetPointerFromRasterizerCachedMemory(vaddr)};
system.GPU().FlushRegion(vaddr, sizeof(T));
T value;
std::memcpy(&value, host_ptr, sizeof(T));
return value;
}
default:
UNREACHABLE();
}
return {};
return result;
}
/**
@@ -656,110 +515,46 @@ struct Memory::Impl {
*/
template <typename T>
void Write(VAddr vaddr, const T data) {
// AARCH64 masks the upper 16 bit of all memory accesses
vaddr &= 0xffffffffffffLL;
if (vaddr >= 1uLL << current_page_table->GetAddressSpaceBits()) {
LOG_ERROR(HW_Memory, "Unmapped Write{} 0x{:08X} @ 0x{:016X}", sizeof(data) * 8,
static_cast<u32>(data), vaddr);
return;
}
// Avoid adding any extra logic to this fast-path block
const uintptr_t raw_pointer = current_page_table->pointers[vaddr >> PAGE_BITS].Raw();
if (u8* const pointer = Common::PageTable::PageInfo::ExtractPointer(raw_pointer)) {
std::memcpy(&pointer[vaddr], &data, sizeof(T));
return;
}
switch (Common::PageTable::PageInfo::ExtractType(raw_pointer)) {
case Common::PageType::Unmapped:
LOG_ERROR(HW_Memory, "Unmapped Write{} 0x{:08X} @ 0x{:016X}", sizeof(data) * 8,
static_cast<u32>(data), vaddr);
return;
case Common::PageType::Memory:
ASSERT_MSG(false, "Mapped memory page without a pointer @ {:016X}", vaddr);
break;
case Common::PageType::RasterizerCachedMemory: {
u8* const host_ptr{GetPointerFromRasterizerCachedMemory(vaddr)};
system.GPU().InvalidateRegion(vaddr, sizeof(T));
std::memcpy(host_ptr, &data, sizeof(T));
break;
}
default:
UNREACHABLE();
u8* const ptr = GetPointerImpl(
vaddr,
[vaddr, data]() {
LOG_ERROR(HW_Memory, "Unmapped Write{} @ 0x{:016X} = 0x{:016X}", sizeof(T) * 8,
vaddr, static_cast<u64>(data));
},
[&system = system, vaddr]() { system.GPU().InvalidateRegion(vaddr, sizeof(T)); });
if (ptr) {
std::memcpy(ptr, &data, sizeof(T));
}
}
template <typename T>
bool WriteExclusive(VAddr vaddr, const T data, const T expected) {
// AARCH64 masks the upper 16 bit of all memory accesses
vaddr &= 0xffffffffffffLL;
if (vaddr >= 1uLL << current_page_table->GetAddressSpaceBits()) {
LOG_ERROR(HW_Memory, "Unmapped Write{} 0x{:08X} @ 0x{:016X}", sizeof(data) * 8,
static_cast<u32>(data), vaddr);
return true;
}
const uintptr_t raw_pointer = current_page_table->pointers[vaddr >> PAGE_BITS].Raw();
if (u8* const pointer = Common::PageTable::PageInfo::ExtractPointer(raw_pointer)) {
// NOTE: Avoid adding any extra logic to this fast-path block
const auto volatile_pointer = reinterpret_cast<volatile T*>(&pointer[vaddr]);
u8* const ptr = GetPointerImpl(
vaddr,
[vaddr, data]() {
LOG_ERROR(HW_Memory, "Unmapped WriteExclusive{} @ 0x{:016X} = 0x{:016X}",
sizeof(T) * 8, vaddr, static_cast<u64>(data));
},
[&system = system, vaddr]() { system.GPU().InvalidateRegion(vaddr, sizeof(T)); });
if (ptr) {
const auto volatile_pointer = reinterpret_cast<volatile T*>(ptr);
return Common::AtomicCompareAndSwap(volatile_pointer, data, expected);
}
switch (Common::PageTable::PageInfo::ExtractType(raw_pointer)) {
case Common::PageType::Unmapped:
LOG_ERROR(HW_Memory, "Unmapped Write{} 0x{:08X} @ 0x{:016X}", sizeof(data) * 8,
static_cast<u32>(data), vaddr);
return true;
case Common::PageType::Memory:
ASSERT_MSG(false, "Mapped memory page without a pointer @ {:016X}", vaddr);
break;
case Common::PageType::RasterizerCachedMemory: {
u8* host_ptr{GetPointerFromRasterizerCachedMemory(vaddr)};
system.GPU().InvalidateRegion(vaddr, sizeof(T));
auto* pointer = reinterpret_cast<volatile T*>(&host_ptr);
return Common::AtomicCompareAndSwap(pointer, data, expected);
}
default:
UNREACHABLE();
}
return true;
}
bool WriteExclusive128(VAddr vaddr, const u128 data, const u128 expected) {
// AARCH64 masks the upper 16 bit of all memory accesses
vaddr &= 0xffffffffffffLL;
if (vaddr >= 1uLL << current_page_table->GetAddressSpaceBits()) {
LOG_ERROR(HW_Memory, "Unmapped Write{} 0x{:08X} @ 0x{:016X}", sizeof(data) * 8,
static_cast<u32>(data[0]), vaddr);
return true;
}
const uintptr_t raw_pointer = current_page_table->pointers[vaddr >> PAGE_BITS].Raw();
if (u8* const pointer = Common::PageTable::PageInfo::ExtractPointer(raw_pointer)) {
// NOTE: Avoid adding any extra logic to this fast-path block
const auto volatile_pointer = reinterpret_cast<volatile u64*>(&pointer[vaddr]);
u8* const ptr = GetPointerImpl(
vaddr,
[vaddr, data]() {
LOG_ERROR(HW_Memory, "Unmapped WriteExclusive128 @ 0x{:016X} = 0x{:016X}{:016X}",
vaddr, static_cast<u64>(data[1]), static_cast<u64>(data[0]));
},
[&system = system, vaddr]() { system.GPU().InvalidateRegion(vaddr, sizeof(u128)); });
if (ptr) {
const auto volatile_pointer = reinterpret_cast<volatile u64*>(ptr);
return Common::AtomicCompareAndSwap(volatile_pointer, data, expected);
}
switch (Common::PageTable::PageInfo::ExtractType(raw_pointer)) {
case Common::PageType::Unmapped:
LOG_ERROR(HW_Memory, "Unmapped Write{} 0x{:08X} @ 0x{:016X}{:016X}", sizeof(data) * 8,
static_cast<u64>(data[1]), static_cast<u64>(data[0]), vaddr);
return true;
case Common::PageType::Memory:
ASSERT_MSG(false, "Mapped memory page without a pointer @ {:016X}", vaddr);
break;
case Common::PageType::RasterizerCachedMemory: {
u8* host_ptr{GetPointerFromRasterizerCachedMemory(vaddr)};
system.GPU().InvalidateRegion(vaddr, sizeof(u128));
auto* pointer = reinterpret_cast<volatile u64*>(&host_ptr);
return Common::AtomicCompareAndSwap(pointer, data, expected);
}
default:
UNREACHABLE();
}
return true;
}
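Note (not part of the diff): Read, Write, WriteExclusive and WriteExclusive128 now all resolve the host pointer through the single GetPointerImpl helper and differ only in their unmapped/rasterizer callbacks and in the final memcpy or compare-and-swap. A side effect is that the old rasterizer-cached branch of WriteExclusive, which appears to have CAS'd on &host_ptr (the address of the local pointer) rather than on the guest memory itself, is gone. The snippet below illustrates the intended exclusive-write semantics using std::atomic_ref as a standard-library stand-in for Common::AtomicCompareAndSwap, whose boolean convention is not visible in this diff.

// Compare-and-swap against guest memory through a raw host pointer.
// std::atomic_ref (C++20) is used purely for illustration.
#include <atomic>
#include <cstdint>
#include <iostream>

template <typename T>
bool CompareAndSwapGuestWord(std::uint8_t* host_ptr, T desired, T expected) {
    // The CAS must target the guest storage itself, never a copy of the pointer.
    T& word = *reinterpret_cast<T*>(host_ptr);
    return std::atomic_ref<T>(word).compare_exchange_strong(expected, desired);
}

int main() {
    alignas(std::atomic_ref<std::uint64_t>::required_alignment) std::uint64_t guest_word = 42;
    auto* host = reinterpret_cast<std::uint8_t*>(&guest_word);

    std::cout << CompareAndSwapGuestWord<std::uint64_t>(host, 100, 42) << ' ' // 1: swapped
              << guest_word << '\n';                                          // 100
    std::cout << CompareAndSwapGuestWord<std::uint64_t>(host, 7, 42) << ' '   // 0: value changed
              << guest_word << '\n';                                          // 100
}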
@@ -789,12 +584,11 @@ void Memory::UnmapRegion(Common::PageTable& page_table, VAddr base, u64 size) {
impl->UnmapRegion(page_table, base, size);
}
bool Memory::IsValidVirtualAddress(const Kernel::KProcess& process, const VAddr vaddr) const {
return impl->IsValidVirtualAddress(process, vaddr);
}
bool Memory::IsValidVirtualAddress(const VAddr vaddr) const {
return impl->IsValidVirtualAddress(vaddr);
const Kernel::KProcess& process = *system.CurrentProcess();
const auto& page_table = process.PageTable().PageTableImpl();
const auto [pointer, type] = page_table.pointers[vaddr >> PAGE_BITS].PointerType();
return pointer != nullptr || type == Common::PageType::RasterizerCachedMemory;
}
u8* Memory::GetPointer(VAddr vaddr) {
@@ -863,64 +657,38 @@ std::string Memory::ReadCString(VAddr vaddr, std::size_t max_length) {
void Memory::ReadBlock(const Kernel::KProcess& process, const VAddr src_addr, void* dest_buffer,
const std::size_t size) {
impl->ReadBlock(process, src_addr, dest_buffer, size);
impl->ReadBlockImpl<false>(process, src_addr, dest_buffer, size);
}
void Memory::ReadBlock(const VAddr src_addr, void* dest_buffer, const std::size_t size) {
impl->ReadBlock(src_addr, dest_buffer, size);
}
void Memory::ReadBlockUnsafe(const Kernel::KProcess& process, const VAddr src_addr,
void* dest_buffer, const std::size_t size) {
impl->ReadBlockUnsafe(process, src_addr, dest_buffer, size);
}
void Memory::ReadBlockUnsafe(const VAddr src_addr, void* dest_buffer, const std::size_t size) {
impl->ReadBlockUnsafe(src_addr, dest_buffer, size);
}
void Memory::WriteBlock(const Kernel::KProcess& process, VAddr dest_addr, const void* src_buffer,
std::size_t size) {
impl->WriteBlock(process, dest_addr, src_buffer, size);
impl->WriteBlockImpl<false>(process, dest_addr, src_buffer, size);
}
void Memory::WriteBlock(const VAddr dest_addr, const void* src_buffer, const std::size_t size) {
impl->WriteBlock(dest_addr, src_buffer, size);
}
void Memory::WriteBlockUnsafe(const Kernel::KProcess& process, VAddr dest_addr,
const void* src_buffer, std::size_t size) {
impl->WriteBlockUnsafe(process, dest_addr, src_buffer, size);
}
void Memory::WriteBlockUnsafe(const VAddr dest_addr, const void* src_buffer,
const std::size_t size) {
impl->WriteBlockUnsafe(dest_addr, src_buffer, size);
}
void Memory::ZeroBlock(const Kernel::KProcess& process, VAddr dest_addr, std::size_t size) {
impl->ZeroBlock(process, dest_addr, size);
}
void Memory::ZeroBlock(VAddr dest_addr, std::size_t size) {
impl->ZeroBlock(dest_addr, size);
}
void Memory::CopyBlock(const Kernel::KProcess& process, VAddr dest_addr, VAddr src_addr,
const std::size_t size) {
impl->CopyBlock(process, dest_addr, src_addr, size);
}
void Memory::CopyBlock(VAddr dest_addr, VAddr src_addr, std::size_t size) {
impl->CopyBlock(dest_addr, src_addr, size);
}
void Memory::RasterizerMarkRegionCached(VAddr vaddr, u64 size, bool cached) {
impl->RasterizerMarkRegionCached(vaddr, size, cached);
}
bool IsKernelVirtualAddress(const VAddr vaddr) {
return KERNEL_REGION_VADDR <= vaddr && vaddr < KERNEL_REGION_END;
}
} // namespace Core::Memory

View File

@@ -39,11 +39,6 @@ enum : VAddr {
/// Application stack
DEFAULT_STACK_SIZE = 0x100000,
/// Kernel Virtual Address Range
KERNEL_REGION_VADDR = 0xFFFFFF8000000000,
KERNEL_REGION_SIZE = 0x7FFFE00000,
KERNEL_REGION_END = KERNEL_REGION_VADDR + KERNEL_REGION_SIZE,
};
/// Central class that handles all memory operations and state.
@@ -56,7 +51,7 @@ public:
Memory& operator=(const Memory&) = delete;
Memory(Memory&&) = default;
Memory& operator=(Memory&&) = default;
Memory& operator=(Memory&&) = delete;
/**
* Resets the state of the Memory system.
@@ -90,17 +85,6 @@ public:
*/
void UnmapRegion(Common::PageTable& page_table, VAddr base, u64 size);
/**
* Checks whether or not the supplied address is a valid virtual
* address for the given process.
*
* @param process The emulated process to check the address against.
* @param vaddr The virtual address to check the validity of.
*
* @returns True if the given virtual address is valid, false otherwise.
*/
bool IsValidVirtualAddress(const Kernel::KProcess& process, VAddr vaddr) const;
/**
* Checks whether or not the supplied address is a valid virtual
* address for the current process.
@@ -109,7 +93,7 @@ public:
*
* @returns True if the given virtual address is valid, false otherwise.
*/
bool IsValidVirtualAddress(VAddr vaddr) const;
[[nodiscard]] bool IsValidVirtualAddress(VAddr vaddr) const;
/**
* Gets a pointer to the given address.
@@ -134,7 +118,7 @@ public:
* @returns The pointer to the given address, if the address is valid.
* If the address is not valid, nullptr will be returned.
*/
const u8* GetPointer(VAddr vaddr) const;
[[nodiscard]] const u8* GetPointer(VAddr vaddr) const;
template <typename T>
const T* GetPointer(VAddr vaddr) const {
@@ -327,27 +311,6 @@ public:
void ReadBlock(const Kernel::KProcess& process, VAddr src_addr, void* dest_buffer,
std::size_t size);
/**
* Reads a contiguous block of bytes from a specified process' address space.
* This unsafe version does not trigger GPU flushing.
*
* @param process The process to read the data from.
* @param src_addr The virtual address to begin reading from.
* @param dest_buffer The buffer to place the read bytes into.
* @param size The amount of data to read, in bytes.
*
* @note If a size of 0 is specified, then this function reads nothing and
* no attempts to access memory are made at all.
*
* @pre dest_buffer must be at least size bytes in length, otherwise a
* buffer overrun will occur.
*
* @post The range [dest_buffer, size) contains the read bytes from the
* process' address space.
*/
void ReadBlockUnsafe(const Kernel::KProcess& process, VAddr src_addr, void* dest_buffer,
std::size_t size);
/**
* Reads a contiguous block of bytes from the current process' address space.
*
@@ -408,26 +371,6 @@ public:
void WriteBlock(const Kernel::KProcess& process, VAddr dest_addr, const void* src_buffer,
std::size_t size);
/**
* Writes a range of bytes into a given process' address space at the specified
* virtual address.
* This unsafe version does not invalidate GPU Memory.
*
* @param process The process to write data into the address space of.
* @param dest_addr The destination virtual address to begin writing the data at.
* @param src_buffer The data to write into the process' address space.
* @param size The size of the data to write, in bytes.
*
* @post The address range [dest_addr, size) in the process' address space
* contains the data that was within src_buffer.
*
* @post If an attempt is made to write into an unmapped region of memory, the writes
* will be ignored and an error will be logged.
*
*/
void WriteBlockUnsafe(const Kernel::KProcess& process, VAddr dest_addr, const void* src_buffer,
std::size_t size);
/**
* Writes a range of bytes into the current process' address space at the specified
* virtual address.
@@ -467,29 +410,6 @@ public:
*/
void WriteBlockUnsafe(VAddr dest_addr, const void* src_buffer, std::size_t size);
/**
* Fills the specified address range within a process' address space with zeroes.
*
* @param process The process that will have a portion of its memory zeroed out.
* @param dest_addr The starting virtual address of the range to zero out.
* @param size The size of the address range to zero out, in bytes.
*
* @post The range [dest_addr, size) within the process' address space is
* filled with zeroes.
*/
void ZeroBlock(const Kernel::KProcess& process, VAddr dest_addr, std::size_t size);
/**
* Fills the specified address range within the current process' address space with zeroes.
*
* @param dest_addr The starting virtual address of the range to zero out.
* @param size The size of the address range to zero out, in bytes.
*
* @post The range [dest_addr, size) within the current process' address space is
* filled with zeroes.
*/
void ZeroBlock(VAddr dest_addr, std::size_t size);
/**
* Copies data within a process' address space to another location within the
* same address space.
@@ -505,19 +425,6 @@ public:
void CopyBlock(const Kernel::KProcess& process, VAddr dest_addr, VAddr src_addr,
std::size_t size);
/**
* Copies data within the current process' address space to another location within the
* same address space.
*
* @param dest_addr The destination virtual address to begin copying the data into.
* @param src_addr The source virtual address to begin copying the data from.
* @param size The size of the data to copy, in bytes.
*
* @post The range [dest_addr, size) within the current process' address space
* contains the same data within the range [src_addr, size).
*/
void CopyBlock(VAddr dest_addr, VAddr src_addr, std::size_t size);
/**
* Marks each page within the specified address range as cached or uncached.
*
@@ -535,7 +442,4 @@ private:
std::unique_ptr<Impl> impl;
};
/// Determines if the given VAddr is a kernel address
bool IsKernelVirtualAddress(VAddr vaddr);
} // namespace Core::Memory

View File

@@ -10,9 +10,10 @@
#include "common/common_funcs.h"
#ifdef _WIN32
#define _WINSOCK_DEPRECATED_NO_WARNINGS // gethostname
#include <winsock2.h>
#include <ws2tcpip.h>
#elif YUZU_UNIX
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <netdb.h>
@@ -27,7 +28,9 @@
#include "common/assert.h"
#include "common/common_types.h"
#include "common/logging/log.h"
#include "common/settings.h"
#include "core/network/network.h"
#include "core/network/network_interface.h"
#include "core/network/sockets.h"
namespace Network {
@@ -47,11 +50,6 @@ void Finalize() {
WSACleanup();
}
constexpr IPv4Address TranslateIPv4(in_addr addr) {
auto& bytes = addr.S_un.S_un_b;
return IPv4Address{bytes.s_b1, bytes.s_b2, bytes.s_b3, bytes.s_b4};
}
sockaddr TranslateFromSockAddrIn(SockAddrIn input) {
sockaddr_in result;
@@ -138,12 +136,6 @@ void Initialize() {}
void Finalize() {}
constexpr IPv4Address TranslateIPv4(in_addr addr) {
const u32 bytes = addr.s_addr;
return IPv4Address{static_cast<u8>(bytes), static_cast<u8>(bytes >> 8),
static_cast<u8>(bytes >> 16), static_cast<u8>(bytes >> 24)};
}
sockaddr TranslateFromSockAddrIn(SockAddrIn input) {
sockaddr_in result;
@@ -182,7 +174,7 @@ linger MakeLinger(bool enable, u32 linger_value) {
}
bool EnableNonBlock(int fd, bool enable) {
int flags = fcntl(fd, F_GETFD);
int flags = fcntl(fd, F_GETFL);
if (flags == -1) {
return false;
}
@@ -191,7 +183,7 @@ bool EnableNonBlock(int fd, bool enable) {
} else {
flags &= ~O_NONBLOCK;
}
return fcntl(fd, F_SETFD, flags) == 0;
return fcntl(fd, F_SETFL, flags) == 0;
}
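Note (not part of the diff): the EnableNonBlock change above is a straight bug fix. O_NONBLOCK lives in the file status flags, so it must be read and written with F_GETFL/F_SETFL; the old F_GETFD/F_SETFD calls only touch the descriptor flags (FD_CLOEXEC) and never actually enabled non-blocking mode. A minimal POSIX sketch of the corrected helper:

#include <fcntl.h>
#include <unistd.h>

#include <cstdio>

// Toggles O_NONBLOCK via the file *status* flags, as the fixed code does.
bool EnableNonBlock(int fd, bool enable) {
    int flags = fcntl(fd, F_GETFL);
    if (flags == -1) {
        return false;
    }
    if (enable) {
        flags |= O_NONBLOCK;
    } else {
        flags &= ~O_NONBLOCK;
    }
    return fcntl(fd, F_SETFL, flags) == 0;
}

int main() {
    int fds[2];
    if (pipe(fds) != 0) {
        return 1;
    }
    std::printf("non-blocking read end: %s\n", EnableNonBlock(fds[0], true) ? "ok" : "failed");
    close(fds[0]);
    close(fds[1]);
}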
Errno TranslateNativeError(int e) {
@@ -227,8 +219,12 @@ Errno GetAndLogLastError() {
#else
int e = errno;
#endif
const Errno err = TranslateNativeError(e);
if (err == Errno::AGAIN) {
return err;
}
LOG_ERROR(Network, "Socket operation error: {}", NativeErrorToString(e));
return TranslateNativeError(e);
return err;
}
int TranslateDomain(Domain domain) {
@@ -353,27 +349,29 @@ NetworkInstance::~NetworkInstance() {
Finalize();
}
std::pair<IPv4Address, Errno> GetHostIPv4Address() {
std::array<char, 256> name{};
if (gethostname(name.data(), static_cast<int>(name.size()) - 1) == SOCKET_ERROR) {
return {IPv4Address{}, GetAndLogLastError()};
std::optional<IPv4Address> GetHostIPv4Address() {
const std::string& selected_network_interface = Settings::values.network_interface.GetValue();
const auto network_interfaces = Network::GetAvailableNetworkInterfaces();
if (network_interfaces.size() == 0) {
LOG_ERROR(Network, "GetAvailableNetworkInterfaces returned no interfaces");
return {};
}
hostent* const ent = gethostbyname(name.data());
if (!ent) {
return {IPv4Address{}, GetAndLogLastError()};
}
if (ent->h_addr_list == nullptr) {
UNIMPLEMENTED_MSG("No addr provided in hostent->h_addr_list");
return {IPv4Address{}, Errno::SUCCESS};
}
if (ent->h_length != sizeof(in_addr)) {
UNIMPLEMENTED_MSG("Unexpected size={} in hostent->h_length", ent->h_length);
}
const auto res =
std::ranges::find_if(network_interfaces, [&selected_network_interface](const auto& iface) {
return iface.name == selected_network_interface;
});
in_addr addr;
std::memcpy(&addr, ent->h_addr_list[0], sizeof(addr));
return {TranslateIPv4(addr), Errno::SUCCESS};
if (res != network_interfaces.end()) {
char ip_addr[16] = {};
ASSERT(inet_ntop(AF_INET, &res->ip_address, ip_addr, sizeof(ip_addr)) != nullptr);
LOG_INFO(Network, "IP address: {}", ip_addr);
return TranslateIPv4(res->ip_address);
} else {
LOG_ERROR(Network, "Couldn't find selected interface \"{}\"", selected_network_interface);
return {};
}
}
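Note (not part of the diff): GetHostIPv4Address now resolves the interface chosen in Settings and reports failure through an empty std::optional instead of a socket Errno, so callers are expected to check for a value and treat its absence as a non-fatal condition. A hypothetical caller shape follows; the stub stands in for the real function and is not part of the changeset.

#include <array>
#include <cstdint>
#include <cstdio>
#include <optional>

using IPv4Address = std::array<std::uint8_t, 4>;

// Stub standing in for Network::GetHostIPv4Address(); the real function looks
// up the interface selected in Settings, as shown in the diff above.
std::optional<IPv4Address> GetHostIPv4Address() {
    return IPv4Address{192, 168, 0, 2};
}

int main() {
    const auto addr = GetHostIPv4Address();
    if (!addr) {
        // A missing address is treated as non-critical: log it and carry on.
        std::puts("no usable network interface selected");
        return 0;
    }
    const auto& ip = *addr;
    std::printf("%u.%u.%u.%u\n", static_cast<unsigned>(ip[0]), static_cast<unsigned>(ip[1]),
                static_cast<unsigned>(ip[2]), static_cast<unsigned>(ip[3]));
}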
std::pair<s32, Errno> Poll(std::vector<PollFD>& pollfds, s32 timeout) {

View File

@@ -5,11 +5,18 @@
#pragma once
#include <array>
#include <optional>
#include <utility>
#include "common/common_funcs.h"
#include "common/common_types.h"
#ifdef _WIN32
#include <winsock2.h>
#elif YUZU_UNIX
#include <netinet/in.h>
#endif
namespace Network {
class Socket;
@@ -92,8 +99,21 @@ public:
~NetworkInstance();
};
#ifdef _WIN32
constexpr IPv4Address TranslateIPv4(in_addr addr) {
auto& bytes = addr.S_un.S_un_b;
return IPv4Address{bytes.s_b1, bytes.s_b2, bytes.s_b3, bytes.s_b4};
}
#elif YUZU_UNIX
constexpr IPv4Address TranslateIPv4(in_addr addr) {
const u32 bytes = addr.s_addr;
return IPv4Address{static_cast<u8>(bytes), static_cast<u8>(bytes >> 8),
static_cast<u8>(bytes >> 16), static_cast<u8>(bytes >> 24)};
}
#endif
/// @brief Returns host's IPv4 address
/// @return Pair of an array of human ordered IPv4 address (e.g. 192.168.0.1) and an error code
std::pair<IPv4Address, Errno> GetHostIPv4Address();
/// @return human ordered IPv4 address (e.g. 192.168.0.1) as an array
std::optional<IPv4Address> GetHostIPv4Address();
} // namespace Network
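Note (not part of the diff): both TranslateIPv4 overloads moved into this header convert a socket-level in_addr into the human-ordered IPv4Address array. The non-Windows version does so by shifting s_addr, which holds the address in network byte order; reading that value as a u32 on a little-endian host puts the first octet in the low byte, which is what the shifts rely on. A small check of that assumption (little-endian host assumed, as the comments say):

#include <arpa/inet.h>

#include <array>
#include <cstdint>
#include <cstdio>

using IPv4Address = std::array<std::uint8_t, 4>;

// Same shift pattern as the non-Windows TranslateIPv4 above.
constexpr IPv4Address TranslateIPv4(std::uint32_t s_addr) {
    return IPv4Address{static_cast<std::uint8_t>(s_addr), static_cast<std::uint8_t>(s_addr >> 8),
                       static_cast<std::uint8_t>(s_addr >> 16),
                       static_cast<std::uint8_t>(s_addr >> 24)};
}

int main() {
    in_addr addr{};
    inet_pton(AF_INET, "192.168.0.1", &addr); // stored in network byte order
    const auto ip = TranslateIPv4(addr.s_addr);
    // Prints 192.168.0.1 on a little-endian host, where the first octet of the
    // network-order value lands in the low byte of s_addr.
    std::printf("%u.%u.%u.%u\n", static_cast<unsigned>(ip[0]), static_cast<unsigned>(ip[1]),
                static_cast<unsigned>(ip[2]), static_cast<unsigned>(ip[3]));
}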

View File

@@ -0,0 +1,203 @@
// Copyright 2021 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <algorithm>
#include <fstream>
#include <sstream>
#include <vector>
#include "common/bit_cast.h"
#include "common/common_types.h"
#include "common/logging/log.h"
#include "common/settings.h"
#include "common/string_util.h"
#include "core/network/network_interface.h"
#ifdef _WIN32
#include <iphlpapi.h>
#else
#include <cerrno>
#include <ifaddrs.h>
#include <net/if.h>
#endif
namespace Network {
#ifdef _WIN32
std::vector<NetworkInterface> GetAvailableNetworkInterfaces() {
std::vector<IP_ADAPTER_ADDRESSES> adapter_addresses;
DWORD ret = ERROR_BUFFER_OVERFLOW;
DWORD buf_size = 0;
// retry up to 5 times
for (int i = 0; i < 5 && ret == ERROR_BUFFER_OVERFLOW; i++) {
ret = GetAdaptersAddresses(
AF_INET, GAA_FLAG_SKIP_MULTICAST | GAA_FLAG_SKIP_DNS_SERVER | GAA_FLAG_INCLUDE_GATEWAYS,
nullptr, adapter_addresses.data(), &buf_size);
if (ret == ERROR_BUFFER_OVERFLOW) {
adapter_addresses.resize((buf_size / sizeof(IP_ADAPTER_ADDRESSES)) + 1);
} else {
break;
}
}
if (ret == NO_ERROR) {
std::vector<NetworkInterface> result;
for (auto current_address = adapter_addresses.data(); current_address != nullptr;
current_address = current_address->Next) {
if (current_address->FirstUnicastAddress == nullptr ||
current_address->FirstUnicastAddress->Address.lpSockaddr == nullptr) {
continue;
}
if (current_address->OperStatus != IfOperStatusUp) {
continue;
}
const auto ip_addr = Common::BitCast<struct sockaddr_in>(
*current_address->FirstUnicastAddress->Address.lpSockaddr)
.sin_addr;
ULONG mask = 0;
if (ConvertLengthToIpv4Mask(current_address->FirstUnicastAddress->OnLinkPrefixLength,
&mask) != NO_ERROR) {
LOG_ERROR(Network, "Failed to convert IPv4 prefix length to subnet mask");
continue;
}
struct in_addr gateway = {.S_un{.S_addr{0}}};
if (current_address->FirstGatewayAddress != nullptr &&
current_address->FirstGatewayAddress->Address.lpSockaddr != nullptr) {
gateway = Common::BitCast<struct sockaddr_in>(
*current_address->FirstGatewayAddress->Address.lpSockaddr)
.sin_addr;
}
result.push_back(NetworkInterface{
.name{Common::UTF16ToUTF8(std::wstring{current_address->FriendlyName})},
.ip_address{ip_addr},
.subnet_mask = in_addr{.S_un{.S_addr{mask}}},
.gateway = gateway});
}
return result;
} else {
LOG_ERROR(Network, "Failed to get network interfaces with GetAdaptersAddresses");
return {};
}
}
#else
std::vector<NetworkInterface> GetAvailableNetworkInterfaces() {
std::vector<NetworkInterface> result;
struct ifaddrs* ifaddr = nullptr;
if (getifaddrs(&ifaddr) != 0) {
LOG_ERROR(Network, "Failed to get network interfaces with getifaddrs: {}",
std::strerror(errno));
return result;
}
for (auto ifa = ifaddr; ifa != nullptr; ifa = ifa->ifa_next) {
if (ifa->ifa_addr == nullptr || ifa->ifa_netmask == nullptr) {
continue;
}
if (ifa->ifa_addr->sa_family != AF_INET) {
continue;
}
if ((ifa->ifa_flags & IFF_UP) == 0 || (ifa->ifa_flags & IFF_LOOPBACK) != 0) {
continue;
}
std::uint32_t gateway{0};
std::ifstream file{"/proc/net/route"};
if (file.is_open()) {
// ignore header
file.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
bool gateway_found = false;
for (std::string line; std::getline(file, line);) {
std::istringstream iss{line};
std::string iface_name{};
iss >> iface_name;
if (iface_name != ifa->ifa_name) {
continue;
}
iss >> std::hex;
std::uint32_t dest{0};
iss >> dest;
if (dest != 0) {
// not the default route
continue;
}
iss >> gateway;
std::uint16_t flags{0};
iss >> flags;
// flag RTF_GATEWAY (defined in <linux/route.h>)
if ((flags & 0x2) == 0) {
continue;
}
gateway_found = true;
break;
}
if (!gateway_found) {
gateway = 0;
}
} else {
LOG_ERROR(Network, "Failed to open \"/proc/net/route\"");
}
result.push_back(NetworkInterface{
.name{ifa->ifa_name},
.ip_address{Common::BitCast<struct sockaddr_in>(*ifa->ifa_addr).sin_addr},
.subnet_mask{Common::BitCast<struct sockaddr_in>(*ifa->ifa_netmask).sin_addr},
.gateway{in_addr{.s_addr = gateway}}});
}
freeifaddrs(ifaddr);
return result;
}
#endif
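Note (not part of the diff): the Linux path above fills in the gateway by scanning /proc/net/route for the default route of the matching interface. The columns read are the interface name followed by hex destination, gateway and flags; addresses are printed as the raw 32-bit value (on a little-endian host 0100A8C0 is 192.168.0.1) and 0x2 is RTF_GATEWAY. A self-contained parse of one representative line, using the same extraction order as the code above:

#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    // Typical default-route entry: destination 0.0.0.0 via 192.168.0.1,
    // flags 0x3 = RTF_UP | RTF_GATEWAY.
    const std::string line = "eth0\t00000000\t0100A8C0\t0003\t0\t0\t100\t00000000\t0\t0\t0";

    std::istringstream iss{line};
    std::string iface;
    iss >> iface;

    iss >> std::hex;
    std::uint32_t dest = 0;
    std::uint32_t gateway = 0;
    std::uint16_t flags = 0;
    iss >> dest >> gateway >> flags;

    const bool is_default_gateway = dest == 0 && (flags & 0x2) != 0; // RTF_GATEWAY
    std::cout << iface << ": default=" << is_default_gateway << " gateway=0x" << std::hex
              << gateway << '\n';
}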
std::optional<NetworkInterface> GetSelectedNetworkInterface() {
const std::string& selected_network_interface = Settings::values.network_interface.GetValue();
const auto network_interfaces = Network::GetAvailableNetworkInterfaces();
if (network_interfaces.size() == 0) {
LOG_ERROR(Network, "GetAvailableNetworkInterfaces returned no interfaces");
return {};
}
const auto res =
std::ranges::find_if(network_interfaces, [&selected_network_interface](const auto& iface) {
return iface.name == selected_network_interface;
});
if (res != network_interfaces.end()) {
return *res;
} else {
LOG_ERROR(Network, "Couldn't find selected interface \"{}\"", selected_network_interface);
return {};
}
}
} // namespace Network

View File

@@ -0,0 +1,29 @@
// Copyright 2021 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <optional>
#include <string>
#include <vector>
#ifdef _WIN32
#include <winsock2.h>
#else
#include <netinet/in.h>
#endif
namespace Network {
struct NetworkInterface {
std::string name;
struct in_addr ip_address;
struct in_addr subnet_mask;
struct in_addr gateway;
};
std::vector<NetworkInterface> GetAvailableNetworkInterfaces();
std::optional<NetworkInterface> GetSelectedNetworkInterface();
} // namespace Network
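Note (not part of the diff): a hedged usage sketch for the new header, enumerating whatever GetAvailableNetworkInterfaces reports and printing each entry with inet_ntop. A POSIX host is assumed for the arpa/inet.h include; the project include path is taken from the diff above.

#include <arpa/inet.h>

#include <cstdio>

#include "core/network/network_interface.h"

int main() {
    for (const auto& iface : Network::GetAvailableNetworkInterfaces()) {
        char ip[INET_ADDRSTRLEN]{};
        char mask[INET_ADDRSTRLEN]{};
        char gw[INET_ADDRSTRLEN]{};
        inet_ntop(AF_INET, &iface.ip_address, ip, sizeof(ip));
        inet_ntop(AF_INET, &iface.subnet_mask, mask, sizeof(mask));
        inet_ntop(AF_INET, &iface.gateway, gw, sizeof(gw));
        std::printf("%s: ip=%s mask=%s gateway=%s\n", iface.name.c_str(), ip, mask, gw);
    }
}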

View File

@@ -889,6 +889,9 @@ SDLState::SDLState() {
RegisterFactory<VibrationDevice>("sdl", vibration_factory);
RegisterFactory<MotionDevice>("sdl", motion_factory);
// Disable raw input. When enabled this setting causes SDL to die when a web applet opens
SDL_SetHint(SDL_HINT_JOYSTICK_RAWINPUT, "0");
// Enable HIDAPI rumble. This prevents SDL from disabling motion on PS4 and PS5 controllers
SDL_SetHint(SDL_HINT_JOYSTICK_HIDAPI_PS4_RUMBLE, "1");
SDL_SetHint(SDL_HINT_JOYSTICK_HIDAPI_PS5_RUMBLE, "1");

View File

@@ -97,6 +97,7 @@ add_library(video_core STATIC
renderer_opengl/gl_stream_buffer.h
renderer_opengl/gl_texture_cache.cpp
renderer_opengl/gl_texture_cache.h
renderer_opengl/gl_texture_cache_base.cpp
renderer_opengl/gl_query_cache.cpp
renderer_opengl/gl_query_cache.h
renderer_opengl/maxwell_to_gl.h
@@ -155,6 +156,7 @@ add_library(video_core STATIC
renderer_vulkan/vk_swapchain.h
renderer_vulkan/vk_texture_cache.cpp
renderer_vulkan/vk_texture_cache.h
renderer_vulkan/vk_texture_cache_base.cpp
renderer_vulkan/vk_update_descriptor.cpp
renderer_vulkan/vk_update_descriptor.h
shader_cache.cpp
@@ -186,6 +188,7 @@ add_library(video_core STATIC
texture_cache/samples_helper.h
texture_cache/slot_vector.h
texture_cache/texture_cache.h
texture_cache/texture_cache_base.h
texture_cache/types.h
texture_cache/util.cpp
texture_cache/util.h

View File

@@ -96,12 +96,11 @@ void Vic::Execute() {
if (!converted_frame_buffer) {
converted_frame_buffer = AVMallocPtr{static_cast<u8*>(av_malloc(linear_size)), av_free};
}
const int converted_stride{frame->width * 4};
const std::array<int, 4> converted_stride{frame->width * 4, frame->height * 4, 0, 0};
u8* const converted_frame_buf_addr{converted_frame_buffer.get()};
sws_scale(scaler_ctx, frame->data, frame->linesize, 0, frame->height,
&converted_frame_buf_addr, &converted_stride);
&converted_frame_buf_addr, converted_stride.data());
const u32 blk_kind = static_cast<u32>(config.block_linear_kind);
if (blk_kind != 0) {

View File

@@ -15,7 +15,7 @@
#include "video_core/renderer_opengl/gl_shader_util.h"
#include "video_core/renderer_opengl/gl_state_tracker.h"
#include "video_core/shader_notify.h"
#include "video_core/texture_cache/texture_cache.h"
#include "video_core/texture_cache/texture_cache_base.h"
#if defined(_MSC_VER) && defined(NDEBUG)
#define LAMBDA_FORCEINLINE [[msvc::forceinline]]

View File

@@ -32,7 +32,7 @@
#include "video_core/renderer_opengl/maxwell_to_gl.h"
#include "video_core/renderer_opengl/renderer_opengl.h"
#include "video_core/shader_cache.h"
#include "video_core/texture_cache/texture_cache.h"
#include "video_core/texture_cache/texture_cache_base.h"
namespace OpenGL {

View File

@@ -18,10 +18,8 @@
#include "video_core/renderer_opengl/maxwell_to_gl.h"
#include "video_core/renderer_opengl/util_shaders.h"
#include "video_core/surface.h"
#include "video_core/texture_cache/format_lookup_table.h"
#include "video_core/texture_cache/formatter.h"
#include "video_core/texture_cache/samples_helper.h"
#include "video_core/texture_cache/texture_cache.h"
#include "video_core/textures/decoders.h"
namespace OpenGL {
namespace {

View File

@@ -12,7 +12,7 @@
#include "shader_recompiler/shader_info.h"
#include "video_core/renderer_opengl/gl_resource_manager.h"
#include "video_core/renderer_opengl/util_shaders.h"
#include "video_core/texture_cache/texture_cache.h"
#include "video_core/texture_cache/texture_cache_base.h"
namespace OpenGL {

View File

@@ -0,0 +1,10 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "video_core/renderer_opengl/gl_texture_cache.h"
#include "video_core/texture_cache/texture_cache.h"
namespace VideoCommon {
template class VideoCommon::TextureCache<OpenGL::TextureCacheParams>;
}

View File

@@ -32,7 +32,7 @@
#include "video_core/renderer_vulkan/vk_texture_cache.h"
#include "video_core/renderer_vulkan/vk_update_descriptor.h"
#include "video_core/shader_cache.h"
#include "video_core/texture_cache/texture_cache.h"
#include "video_core/texture_cache/texture_cache_base.h"
#include "video_core/vulkan_common/vulkan_device.h"
#include "video_core/vulkan_common/vulkan_wrapper.h"

View File

@@ -19,6 +19,8 @@
#include "video_core/renderer_vulkan/vk_scheduler.h"
#include "video_core/renderer_vulkan/vk_staging_buffer_pool.h"
#include "video_core/renderer_vulkan/vk_texture_cache.h"
#include "video_core/texture_cache/formatter.h"
#include "video_core/texture_cache/samples_helper.h"
#include "video_core/vulkan_common/vulkan_device.h"
#include "video_core/vulkan_common/vulkan_memory_allocator.h"
#include "video_core/vulkan_common/vulkan_wrapper.h"

View File

@@ -9,7 +9,7 @@
#include "shader_recompiler/shader_info.h"
#include "video_core/renderer_vulkan/vk_staging_buffer_pool.h"
#include "video_core/texture_cache/texture_cache.h"
#include "video_core/texture_cache/texture_cache_base.h"
#include "video_core/vulkan_common/vulkan_memory_allocator.h"
#include "video_core/vulkan_common/vulkan_wrapper.h"

View File

@@ -0,0 +1,10 @@
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "video_core/renderer_vulkan/vk_texture_cache.h"
#include "video_core/texture_cache/texture_cache.h"
namespace VideoCommon {
template class VideoCommon::TextureCache<Vulkan::TextureCacheParams>;
}
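Note (not part of the diff): both new *_texture_cache_base.cpp files exist to hold a single explicit instantiation of VideoCommon::TextureCache per backend, while most translation units now include texture_cache_base.h (declarations) instead of the full texture_cache.h (definitions). The usual payoff of this split is that the heavy template body is compiled once per backend rather than in every including file. A single-file condensation of the pattern, with comments marking where the pieces live in the tree:

// --- "texture_cache_base.h": class definition, member declarations only ----
template <class P>
class TextureCache {
public:
    void TickFrame();
};

// --- "texture_cache.h": the heavy member definitions ----------------------
template <class P>
void TextureCache<P>::TickFrame() {
    // expensive template body, compiled only where explicitly instantiated
}

// --- "gl_texture_cache_base.cpp": one explicit instantiation per backend ---
struct OpenGLParams {};
template class TextureCache<OpenGLParams>;

// Other translation units only see the declarations; calls link against the
// explicit instantiation above.
int main() {
    TextureCache<OpenGLParams> cache;
    cache.TickFrame();
}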

View File

@@ -6,7 +6,7 @@
#include "common/assert.h"
#include "video_core/texture_cache/image_view_info.h"
#include "video_core/texture_cache/texture_cache.h"
#include "video_core/texture_cache/texture_cache_base.h"
#include "video_core/texture_cache/types.h"
#include "video_core/textures/texture.h"
@@ -14,6 +14,8 @@ namespace VideoCommon {
namespace {
using Tegra::Texture::TextureType;
constexpr u8 RENDER_TARGET_SWIZZLE = std::numeric_limits<u8>::max();
[[nodiscard]] u8 CastSwizzle(SwizzleSource source) {

View File

@@ -4,48 +4,11 @@
#pragma once
#include <algorithm>
#include <array>
#include <bit>
#include <memory>
#include <mutex>
#include <optional>
#include <span>
#include <type_traits>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>
#include <boost/container/small_vector.hpp>
#include "common/alignment.h"
#include "common/common_types.h"
#include "common/literals.h"
#include "common/logging/log.h"
#include "common/settings.h"
#include "video_core/compatible_formats.h"
#include "video_core/delayed_destruction_ring.h"
#include "video_core/dirty_flags.h"
#include "video_core/engines/fermi_2d.h"
#include "video_core/engines/kepler_compute.h"
#include "video_core/engines/maxwell_3d.h"
#include "video_core/memory_manager.h"
#include "video_core/rasterizer_interface.h"
#include "video_core/surface.h"
#include "video_core/texture_cache/descriptor_table.h"
#include "video_core/texture_cache/format_lookup_table.h"
#include "video_core/texture_cache/formatter.h"
#include "video_core/texture_cache/image_base.h"
#include "video_core/texture_cache/image_info.h"
#include "video_core/texture_cache/image_view_base.h"
#include "video_core/texture_cache/image_view_info.h"
#include "video_core/texture_cache/render_targets.h"
#include "video_core/texture_cache/samples_helper.h"
#include "video_core/texture_cache/slot_vector.h"
#include "video_core/texture_cache/types.h"
#include "video_core/texture_cache/util.h"
#include "video_core/textures/texture.h"
#include "video_core/texture_cache/texture_cache_base.h"
namespace VideoCommon {
@@ -61,352 +24,6 @@ using VideoCore::Surface::PixelFormatFromRenderTargetFormat;
using VideoCore::Surface::SurfaceType;
using namespace Common::Literals;
template <class P>
class TextureCache {
/// Address shift for caching images into a hash table
static constexpr u64 PAGE_BITS = 20;
/// Enables debugging features to the texture cache
static constexpr bool ENABLE_VALIDATION = P::ENABLE_VALIDATION;
/// Implement blits as copies between framebuffers
static constexpr bool FRAMEBUFFER_BLITS = P::FRAMEBUFFER_BLITS;
/// True when some copies have to be emulated
static constexpr bool HAS_EMULATED_COPIES = P::HAS_EMULATED_COPIES;
/// True when the API can provide info about the memory of the device.
static constexpr bool HAS_DEVICE_MEMORY_INFO = P::HAS_DEVICE_MEMORY_INFO;
/// Image view ID for null descriptors
static constexpr ImageViewId NULL_IMAGE_VIEW_ID{0};
/// Sampler ID for bugged sampler ids
static constexpr SamplerId NULL_SAMPLER_ID{0};
static constexpr u64 DEFAULT_EXPECTED_MEMORY = 1_GiB;
static constexpr u64 DEFAULT_CRITICAL_MEMORY = 2_GiB;
using Runtime = typename P::Runtime;
using Image = typename P::Image;
using ImageAlloc = typename P::ImageAlloc;
using ImageView = typename P::ImageView;
using Sampler = typename P::Sampler;
using Framebuffer = typename P::Framebuffer;
struct BlitImages {
ImageId dst_id;
ImageId src_id;
PixelFormat dst_format;
PixelFormat src_format;
};
template <typename T>
struct IdentityHash {
[[nodiscard]] size_t operator()(T value) const noexcept {
return static_cast<size_t>(value);
}
};
public:
explicit TextureCache(Runtime&, VideoCore::RasterizerInterface&, Tegra::Engines::Maxwell3D&,
Tegra::Engines::KeplerCompute&, Tegra::MemoryManager&);
/// Notify the cache that a new frame has been queued
void TickFrame();
/// Return a constant reference to the given image view id
[[nodiscard]] const ImageView& GetImageView(ImageViewId id) const noexcept;
/// Return a reference to the given image view id
[[nodiscard]] ImageView& GetImageView(ImageViewId id) noexcept;
/// Mark an image as modified from the GPU
void MarkModification(ImageId id) noexcept;
/// Fill image_view_ids with the graphics images in indices
void FillGraphicsImageViews(std::span<const u32> indices,
std::span<ImageViewId> image_view_ids);
/// Fill image_view_ids with the compute images in indices
void FillComputeImageViews(std::span<const u32> indices, std::span<ImageViewId> image_view_ids);
/// Get the sampler from the graphics descriptor table in the specified index
Sampler* GetGraphicsSampler(u32 index);
/// Get the sampler from the compute descriptor table in the specified index
Sampler* GetComputeSampler(u32 index);
/// Refresh the state for graphics image view and sampler descriptors
void SynchronizeGraphicsDescriptors();
/// Refresh the state for compute image view and sampler descriptors
void SynchronizeComputeDescriptors();
/// Update bound render targets and upload memory if necessary
/// @param is_clear True when the render targets are being used for clears
void UpdateRenderTargets(bool is_clear);
/// Find a framebuffer with the currently bound render targets
/// UpdateRenderTargets should be called before this
Framebuffer* GetFramebuffer();
/// Mark images in a range as modified from the CPU
void WriteMemory(VAddr cpu_addr, size_t size);
/// Download contents of host images to guest memory in a region
void DownloadMemory(VAddr cpu_addr, size_t size);
/// Remove images in a region
void UnmapMemory(VAddr cpu_addr, size_t size);
/// Remove images in a region
void UnmapGPUMemory(GPUVAddr gpu_addr, size_t size);
/// Blit an image with the given parameters
void BlitImage(const Tegra::Engines::Fermi2D::Surface& dst,
const Tegra::Engines::Fermi2D::Surface& src,
const Tegra::Engines::Fermi2D::Config& copy);
/// Invalidate the contents of the color buffer index
/// These contents become unspecified, the cache can assume aggressive optimizations.
void InvalidateColorBuffer(size_t index);
/// Invalidate the contents of the depth buffer
/// These contents become unspecified, the cache can assume aggressive optimizations.
void InvalidateDepthBuffer();
/// Try to find a cached image view in the given CPU address
[[nodiscard]] ImageView* TryFindFramebufferImageView(VAddr cpu_addr);
/// Return true when there are uncommitted images to be downloaded
[[nodiscard]] bool HasUncommittedFlushes() const noexcept;
/// Return true when the caller should wait for async downloads
[[nodiscard]] bool ShouldWaitAsyncFlushes() const noexcept;
/// Commit asynchronous downloads
void CommitAsyncFlushes();
/// Pop asynchronous downloads
void PopAsyncFlushes();
/// Return true when a CPU region is modified from the GPU
[[nodiscard]] bool IsRegionGpuModified(VAddr addr, size_t size);
std::mutex mutex;
private:
/// Iterate over all page indices in a range
template <typename Func>
static void ForEachCPUPage(VAddr addr, size_t size, Func&& func) {
static constexpr bool RETURNS_BOOL = std::is_same_v<std::invoke_result<Func, u64>, bool>;
const u64 page_end = (addr + size - 1) >> PAGE_BITS;
for (u64 page = addr >> PAGE_BITS; page <= page_end; ++page) {
if constexpr (RETURNS_BOOL) {
if (func(page)) {
break;
}
} else {
func(page);
}
}
}
template <typename Func>
static void ForEachGPUPage(GPUVAddr addr, size_t size, Func&& func) {
static constexpr bool RETURNS_BOOL = std::is_same_v<std::invoke_result<Func, u64>, bool>;
const u64 page_end = (addr + size - 1) >> PAGE_BITS;
for (u64 page = addr >> PAGE_BITS; page <= page_end; ++page) {
if constexpr (RETURNS_BOOL) {
if (func(page)) {
break;
}
} else {
func(page);
}
}
}
/// Runs the Garbage Collector.
void RunGarbageCollector();
/// Fills image_view_ids in the image views in indices
void FillImageViews(DescriptorTable<TICEntry>& table,
std::span<ImageViewId> cached_image_view_ids, std::span<const u32> indices,
std::span<ImageViewId> image_view_ids);
/// Find or create an image view in the guest descriptor table
ImageViewId VisitImageView(DescriptorTable<TICEntry>& table,
std::span<ImageViewId> cached_image_view_ids, u32 index);
/// Find or create a framebuffer with the given render target parameters
FramebufferId GetFramebufferId(const RenderTargets& key);
/// Refresh the contents (pixel data) of an image
void RefreshContents(Image& image, ImageId image_id);
/// Upload data from guest to an image
template <typename StagingBuffer>
void UploadImageContents(Image& image, StagingBuffer& staging_buffer);
/// Find or create an image view from a guest descriptor
[[nodiscard]] ImageViewId FindImageView(const TICEntry& config);
/// Create a new image view from a guest descriptor
[[nodiscard]] ImageViewId CreateImageView(const TICEntry& config);
/// Find or create an image from the given parameters
[[nodiscard]] ImageId FindOrInsertImage(const ImageInfo& info, GPUVAddr gpu_addr,
RelaxedOptions options = RelaxedOptions{});
/// Find an image from the given parameters
[[nodiscard]] ImageId FindImage(const ImageInfo& info, GPUVAddr gpu_addr,
RelaxedOptions options);
/// Create an image from the given parameters
[[nodiscard]] ImageId InsertImage(const ImageInfo& info, GPUVAddr gpu_addr,
RelaxedOptions options);
/// Create a new image and join perfectly matching existing images
/// Remove joined images from the cache
[[nodiscard]] ImageId JoinImages(const ImageInfo& info, GPUVAddr gpu_addr, VAddr cpu_addr);
/// Return a blit image pair from the given guest blit parameters
[[nodiscard]] BlitImages GetBlitImages(const Tegra::Engines::Fermi2D::Surface& dst,
const Tegra::Engines::Fermi2D::Surface& src);
/// Find or create a sampler from a guest descriptor sampler
[[nodiscard]] SamplerId FindSampler(const TSCEntry& config);
/// Find or create an image view for the given color buffer index
[[nodiscard]] ImageViewId FindColorBuffer(size_t index, bool is_clear);
/// Find or create an image view for the depth buffer
[[nodiscard]] ImageViewId FindDepthBuffer(bool is_clear);
/// Find or create a view for a render target with the given image parameters
[[nodiscard]] ImageViewId FindRenderTargetView(const ImageInfo& info, GPUVAddr gpu_addr,
bool is_clear);
/// Iterates over all the images in a region calling func
template <typename Func>
void ForEachImageInRegion(VAddr cpu_addr, size_t size, Func&& func);
template <typename Func>
void ForEachImageInRegionGPU(GPUVAddr gpu_addr, size_t size, Func&& func);
template <typename Func>
void ForEachSparseImageInRegion(GPUVAddr gpu_addr, size_t size, Func&& func);
/// Iterates over all the sparse segments of an image calling func
template <typename Func>
void ForEachSparseSegment(ImageBase& image, Func&& func);
/// Find or create an image view in the given image with the passed parameters
[[nodiscard]] ImageViewId FindOrEmplaceImageView(ImageId image_id, const ImageViewInfo& info);
/// Register image in the page table
void RegisterImage(ImageId image);
/// Unregister image from the page table
void UnregisterImage(ImageId image);
/// Track CPU reads and writes for image
void TrackImage(ImageBase& image, ImageId image_id);
/// Stop tracking CPU reads and writes for image
void UntrackImage(ImageBase& image, ImageId image_id);
/// Delete image from the cache
void DeleteImage(ImageId image);
/// Remove image view references from the cache
void RemoveImageViewReferences(std::span<const ImageViewId> removed_views);
/// Remove framebuffers using the given image views from the cache
void RemoveFramebuffers(std::span<const ImageViewId> removed_views);
/// Mark an image as modified from the GPU
void MarkModification(ImageBase& image) noexcept;
/// Synchronize image aliases, copying data if needed
void SynchronizeAliases(ImageId image_id);
/// Prepare an image to be used
void PrepareImage(ImageId image_id, bool is_modification, bool invalidate);
/// Prepare an image view to be used
void PrepareImageView(ImageViewId image_view_id, bool is_modification, bool invalidate);
/// Execute copies from one image to the other, even if they are incompatible
void CopyImage(ImageId dst_id, ImageId src_id, std::span<const ImageCopy> copies);
/// Bind an image view as a render target, downloading resources preemptively if needed
void BindRenderTarget(ImageViewId* old_id, ImageViewId new_id);
/// Create a render target from a given image and image view parameters
[[nodiscard]] std::pair<FramebufferId, ImageViewId> RenderTargetFromImage(
ImageId, const ImageViewInfo& view_info);
/// Returns true if the current clear parameters clear the whole image of a given image view
[[nodiscard]] bool IsFullClear(ImageViewId id);
Runtime& runtime;
VideoCore::RasterizerInterface& rasterizer;
Tegra::Engines::Maxwell3D& maxwell3d;
Tegra::Engines::KeplerCompute& kepler_compute;
Tegra::MemoryManager& gpu_memory;
DescriptorTable<TICEntry> graphics_image_table{gpu_memory};
DescriptorTable<TSCEntry> graphics_sampler_table{gpu_memory};
std::vector<SamplerId> graphics_sampler_ids;
std::vector<ImageViewId> graphics_image_view_ids;
DescriptorTable<TICEntry> compute_image_table{gpu_memory};
DescriptorTable<TSCEntry> compute_sampler_table{gpu_memory};
std::vector<SamplerId> compute_sampler_ids;
std::vector<ImageViewId> compute_image_view_ids;
RenderTargets render_targets;
std::unordered_map<TICEntry, ImageViewId> image_views;
std::unordered_map<TSCEntry, SamplerId> samplers;
std::unordered_map<RenderTargets, FramebufferId> framebuffers;
std::unordered_map<u64, std::vector<ImageMapId>, IdentityHash<u64>> page_table;
std::unordered_map<u64, std::vector<ImageId>, IdentityHash<u64>> gpu_page_table;
std::unordered_map<u64, std::vector<ImageId>, IdentityHash<u64>> sparse_page_table;
std::unordered_map<ImageId, std::vector<ImageViewId>> sparse_views;
VAddr virtual_invalid_space{};
bool has_deleted_images = false;
u64 total_used_memory = 0;
u64 minimum_memory;
u64 expected_memory;
u64 critical_memory;
SlotVector<Image> slot_images;
SlotVector<ImageMapView> slot_map_views;
SlotVector<ImageView> slot_image_views;
SlotVector<ImageAlloc> slot_image_allocs;
SlotVector<Sampler> slot_samplers;
SlotVector<Framebuffer> slot_framebuffers;
// TODO: This data structure is not optimal and it should be reworked
std::vector<ImageId> uncommitted_downloads;
std::queue<std::vector<ImageId>> committed_downloads;
static constexpr size_t TICKS_TO_DESTROY = 6;
DelayedDestructionRing<Image, TICKS_TO_DESTROY> sentenced_images;
DelayedDestructionRing<ImageView, TICKS_TO_DESTROY> sentenced_image_view;
DelayedDestructionRing<Framebuffer, TICKS_TO_DESTROY> sentenced_framebuffers;
std::unordered_map<GPUVAddr, ImageAllocId> image_allocs_table;
u64 modification_tick = 0;
u64 frame_tick = 0;
typename SlotVector<Image>::Iterator deletion_iterator;
};
template <class P>
TextureCache<P>::TextureCache(Runtime& runtime_, VideoCore::RasterizerInterface& rasterizer_,
Tegra::Engines::Maxwell3D& maxwell3d_,
@@ -820,40 +437,6 @@ void TextureCache<P>::BlitImage(const Tegra::Engines::Fermi2D::Surface& dst,
}
}
template <class P>
void TextureCache<P>::InvalidateColorBuffer(size_t index) {
ImageViewId& color_buffer_id = render_targets.color_buffer_ids[index];
color_buffer_id = FindColorBuffer(index, false);
if (!color_buffer_id) {
LOG_ERROR(HW_GPU, "Invalidating invalid color buffer in index={}", index);
return;
}
// When invalidating a color buffer, the old contents are no longer relevant
ImageView& color_buffer = slot_image_views[color_buffer_id];
Image& image = slot_images[color_buffer.image_id];
image.flags &= ~ImageFlagBits::CpuModified;
image.flags &= ~ImageFlagBits::GpuModified;
runtime.InvalidateColorBuffer(color_buffer, index);
}
template <class P>
void TextureCache<P>::InvalidateDepthBuffer() {
ImageViewId& depth_buffer_id = render_targets.depth_buffer_id;
depth_buffer_id = FindDepthBuffer(false);
if (!depth_buffer_id) {
LOG_ERROR(HW_GPU, "Invalidating invalid depth buffer");
return;
}
// When invalidating the depth buffer, the old contents are no longer relevant
ImageBase& image = slot_images[slot_image_views[depth_buffer_id].image_id];
image.flags &= ~ImageFlagBits::CpuModified;
image.flags &= ~ImageFlagBits::GpuModified;
ImageView& depth_buffer = slot_image_views[depth_buffer_id];
runtime.InvalidateDepthBuffer(depth_buffer);
}
template <class P>
typename P::ImageView* TextureCache<P>::TryFindFramebufferImageView(VAddr cpu_addr) {
// TODO: Properly implement this

View File

@@ -0,0 +1,385 @@
// Copyright 2019 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <array>
#include <mutex>
#include <span>
#include <type_traits>
#include <unordered_map>
#include <unordered_set>
#include <vector>
#include "common/common_types.h"
#include "common/literals.h"
#include "video_core/compatible_formats.h"
#include "video_core/delayed_destruction_ring.h"
#include "video_core/engines/fermi_2d.h"
#include "video_core/engines/kepler_compute.h"
#include "video_core/engines/maxwell_3d.h"
#include "video_core/memory_manager.h"
#include "video_core/rasterizer_interface.h"
#include "video_core/surface.h"
#include "video_core/texture_cache/descriptor_table.h"
#include "video_core/texture_cache/image_base.h"
#include "video_core/texture_cache/image_info.h"
#include "video_core/texture_cache/image_view_info.h"
#include "video_core/texture_cache/render_targets.h"
#include "video_core/texture_cache/slot_vector.h"
#include "video_core/texture_cache/types.h"
#include "video_core/texture_cache/util.h"
#include "video_core/textures/texture.h"
namespace VideoCommon {
using Tegra::Texture::SwizzleSource;
using Tegra::Texture::TICEntry;
using Tegra::Texture::TSCEntry;
using VideoCore::Surface::GetFormatType;
using VideoCore::Surface::IsCopyCompatible;
using VideoCore::Surface::PixelFormat;
using VideoCore::Surface::PixelFormatFromDepthFormat;
using VideoCore::Surface::PixelFormatFromRenderTargetFormat;
using namespace Common::Literals;
template <class P>
class TextureCache {
/// Address shift for caching images into a hash table
static constexpr u64 PAGE_BITS = 20;
/// Enables debugging features in the texture cache
static constexpr bool ENABLE_VALIDATION = P::ENABLE_VALIDATION;
/// Implement blits as copies between framebuffers
static constexpr bool FRAMEBUFFER_BLITS = P::FRAMEBUFFER_BLITS;
/// True when some copies have to be emulated
static constexpr bool HAS_EMULATED_COPIES = P::HAS_EMULATED_COPIES;
/// True when the API can provide info about the memory of the device.
static constexpr bool HAS_DEVICE_MEMORY_INFO = P::HAS_DEVICE_MEMORY_INFO;
/// Image view ID for null descriptors
static constexpr ImageViewId NULL_IMAGE_VIEW_ID{0};
/// Sampler ID used as a fallback for invalid sampler descriptors
static constexpr SamplerId NULL_SAMPLER_ID{0};
static constexpr u64 DEFAULT_EXPECTED_MEMORY = 1_GiB;
static constexpr u64 DEFAULT_CRITICAL_MEMORY = 2_GiB;
using Runtime = typename P::Runtime;
using Image = typename P::Image;
using ImageAlloc = typename P::ImageAlloc;
using ImageView = typename P::ImageView;
using Sampler = typename P::Sampler;
using Framebuffer = typename P::Framebuffer;
struct BlitImages {
ImageId dst_id;
ImageId src_id;
PixelFormat dst_format;
PixelFormat src_format;
};
template <typename T>
struct IdentityHash {
[[nodiscard]] size_t operator()(T value) const noexcept {
return static_cast<size_t>(value);
}
};
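// IdentityHash forwards the key as its own hash value; presumably this suffices because the
// keys used with it below (page-table indices) are already small, well-spread integers, so
// re-hashing them would add cost without improving bucket distribution.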
public:
explicit TextureCache(Runtime&, VideoCore::RasterizerInterface&, Tegra::Engines::Maxwell3D&,
Tegra::Engines::KeplerCompute&, Tegra::MemoryManager&);
/// Notify the cache that a new frame has been queued
void TickFrame();
/// Return a constant reference to the image view with the given id
[[nodiscard]] const ImageView& GetImageView(ImageViewId id) const noexcept;
/// Return a reference to the image view with the given id
[[nodiscard]] ImageView& GetImageView(ImageViewId id) noexcept;
/// Mark an image as modified from the GPU
void MarkModification(ImageId id) noexcept;
/// Fill image_view_ids with the graphics images in indices
void FillGraphicsImageViews(std::span<const u32> indices,
std::span<ImageViewId> image_view_ids);
/// Fill image_view_ids with the compute images in indices
void FillComputeImageViews(std::span<const u32> indices, std::span<ImageViewId> image_view_ids);
/// Get the sampler from the graphics descriptor table at the specified index
Sampler* GetGraphicsSampler(u32 index);
/// Get the sampler from the compute descriptor table at the specified index
Sampler* GetComputeSampler(u32 index);
/// Refresh the state for graphics image view and sampler descriptors
void SynchronizeGraphicsDescriptors();
/// Refresh the state for compute image view and sampler descriptors
void SynchronizeComputeDescriptors();
/// Update bound render targets and upload memory if necessary
/// @param is_clear True when the render targets are being used for clears
void UpdateRenderTargets(bool is_clear);
/// Find a framebuffer with the currently bound render targets
/// UpdateRenderTargets should be called before this
Framebuffer* GetFramebuffer();
/// Mark images in a range as modified from the CPU
void WriteMemory(VAddr cpu_addr, size_t size);
/// Download contents of host images to guest memory in a region
void DownloadMemory(VAddr cpu_addr, size_t size);
/// Remove images in a region
void UnmapMemory(VAddr cpu_addr, size_t size);
/// Remove images in a region
void UnmapGPUMemory(GPUVAddr gpu_addr, size_t size);
/// Blit an image with the given parameters
void BlitImage(const Tegra::Engines::Fermi2D::Surface& dst,
const Tegra::Engines::Fermi2D::Surface& src,
const Tegra::Engines::Fermi2D::Config& copy);
/// Try to find a cached image view at the given CPU address
[[nodiscard]] ImageView* TryFindFramebufferImageView(VAddr cpu_addr);
/// Return true when there are uncommitted images to be downloaded
[[nodiscard]] bool HasUncommittedFlushes() const noexcept;
/// Return true when the caller should wait for async downloads
[[nodiscard]] bool ShouldWaitAsyncFlushes() const noexcept;
/// Commit asynchronous downloads
void CommitAsyncFlushes();
/// Pop asynchronous downloads
void PopAsyncFlushes();
/// Return true when a CPU region is modified from the GPU
[[nodiscard]] bool IsRegionGpuModified(VAddr addr, size_t size);
std::mutex mutex;
private:
/// Iterate over all page indices in a range
template <typename Func>
static void ForEachCPUPage(VAddr addr, size_t size, Func&& func) {
static constexpr bool RETURNS_BOOL = std::is_same_v<std::invoke_result_t<Func, u64>, bool>;
const u64 page_end = (addr + size - 1) >> PAGE_BITS;
for (u64 page = addr >> PAGE_BITS; page <= page_end; ++page) {
if constexpr (RETURNS_BOOL) {
if (func(page)) {
break;
}
} else {
func(page);
}
}
}
template <typename Func>
static void ForEachGPUPage(GPUVAddr addr, size_t size, Func&& func) {
static constexpr bool RETURNS_BOOL = std::is_same_v<std::invoke_result_t<Func, u64>, bool>;
const u64 page_end = (addr + size - 1) >> PAGE_BITS;
for (u64 page = addr >> PAGE_BITS; page <= page_end; ++page) {
if constexpr (RETURNS_BOOL) {
if (func(page)) {
break;
}
} else {
func(page);
}
}
}
/// Runs the Garbage Collector.
void RunGarbageCollector();
/// Fill image_view_ids with the image views referenced by indices
void FillImageViews(DescriptorTable<TICEntry>& table,
std::span<ImageViewId> cached_image_view_ids, std::span<const u32> indices,
std::span<ImageViewId> image_view_ids);
/// Find or create an image view in the guest descriptor table
ImageViewId VisitImageView(DescriptorTable<TICEntry>& table,
std::span<ImageViewId> cached_image_view_ids, u32 index);
/// Find or create a framebuffer with the given render target parameters
FramebufferId GetFramebufferId(const RenderTargets& key);
/// Refresh the contents (pixel data) of an image
void RefreshContents(Image& image, ImageId image_id);
/// Upload data from guest to an image
template <typename StagingBuffer>
void UploadImageContents(Image& image, StagingBuffer& staging_buffer);
/// Find or create an image view from a guest descriptor
[[nodiscard]] ImageViewId FindImageView(const TICEntry& config);
/// Create a new image view from a guest descriptor
[[nodiscard]] ImageViewId CreateImageView(const TICEntry& config);
/// Find or create an image from the given parameters
[[nodiscard]] ImageId FindOrInsertImage(const ImageInfo& info, GPUVAddr gpu_addr,
RelaxedOptions options = RelaxedOptions{});
/// Find an image from the given parameters
[[nodiscard]] ImageId FindImage(const ImageInfo& info, GPUVAddr gpu_addr,
RelaxedOptions options);
/// Create an image from the given parameters
[[nodiscard]] ImageId InsertImage(const ImageInfo& info, GPUVAddr gpu_addr,
RelaxedOptions options);
/// Create a new image and join perfectly matching existing images
/// Remove joined images from the cache
[[nodiscard]] ImageId JoinImages(const ImageInfo& info, GPUVAddr gpu_addr, VAddr cpu_addr);
/// Return a blit image pair from the given guest blit parameters
[[nodiscard]] BlitImages GetBlitImages(const Tegra::Engines::Fermi2D::Surface& dst,
const Tegra::Engines::Fermi2D::Surface& src);
/// Find or create a sampler from a guest sampler descriptor
[[nodiscard]] SamplerId FindSampler(const TSCEntry& config);
/// Find or create an image view for the given color buffer index
[[nodiscard]] ImageViewId FindColorBuffer(size_t index, bool is_clear);
/// Find or create an image view for the depth buffer
[[nodiscard]] ImageViewId FindDepthBuffer(bool is_clear);
/// Find or create a view for a render target with the given image parameters
[[nodiscard]] ImageViewId FindRenderTargetView(const ImageInfo& info, GPUVAddr gpu_addr,
bool is_clear);
/// Iterates over all the images in a region calling func
template <typename Func>
void ForEachImageInRegion(VAddr cpu_addr, size_t size, Func&& func);
template <typename Func>
void ForEachImageInRegionGPU(GPUVAddr gpu_addr, size_t size, Func&& func);
template <typename Func>
void ForEachSparseImageInRegion(GPUVAddr gpu_addr, size_t size, Func&& func);
/// Iterates over all the sparse segments of an image calling func
template <typename Func>
void ForEachSparseSegment(ImageBase& image, Func&& func);
/// Find or create an image view in the given image with the passed parameters
[[nodiscard]] ImageViewId FindOrEmplaceImageView(ImageId image_id, const ImageViewInfo& info);
/// Register image in the page table
void RegisterImage(ImageId image);
/// Unregister image from the page table
void UnregisterImage(ImageId image);
/// Track CPU reads and writes for image
void TrackImage(ImageBase& image, ImageId image_id);
/// Stop tracking CPU reads and writes for image
void UntrackImage(ImageBase& image, ImageId image_id);
/// Delete image from the cache
void DeleteImage(ImageId image);
/// Remove image view references from the cache
void RemoveImageViewReferences(std::span<const ImageViewId> removed_views);
/// Remove framebuffers using the given image views from the cache
void RemoveFramebuffers(std::span<const ImageViewId> removed_views);
/// Mark an image as modified from the GPU
void MarkModification(ImageBase& image) noexcept;
/// Synchronize image aliases, copying data if needed
void SynchronizeAliases(ImageId image_id);
/// Prepare an image to be used
void PrepareImage(ImageId image_id, bool is_modification, bool invalidate);
/// Prepare an image view to be used
void PrepareImageView(ImageViewId image_view_id, bool is_modification, bool invalidate);
/// Execute copies from one image to the other, even if they are incompatible
void CopyImage(ImageId dst_id, ImageId src_id, std::span<const ImageCopy> copies);
/// Bind an image view as a render target, downloading resources preemptively if needed
void BindRenderTarget(ImageViewId* old_id, ImageViewId new_id);
/// Create a render target from a given image and image view parameters
[[nodiscard]] std::pair<FramebufferId, ImageViewId> RenderTargetFromImage(
ImageId, const ImageViewInfo& view_info);
/// Returns true if the current clear parameters clear the whole image of a given image view
[[nodiscard]] bool IsFullClear(ImageViewId id);
Runtime& runtime;
VideoCore::RasterizerInterface& rasterizer;
Tegra::Engines::Maxwell3D& maxwell3d;
Tegra::Engines::KeplerCompute& kepler_compute;
Tegra::MemoryManager& gpu_memory;
DescriptorTable<TICEntry> graphics_image_table{gpu_memory};
DescriptorTable<TSCEntry> graphics_sampler_table{gpu_memory};
std::vector<SamplerId> graphics_sampler_ids;
std::vector<ImageViewId> graphics_image_view_ids;
DescriptorTable<TICEntry> compute_image_table{gpu_memory};
DescriptorTable<TSCEntry> compute_sampler_table{gpu_memory};
std::vector<SamplerId> compute_sampler_ids;
std::vector<ImageViewId> compute_image_view_ids;
RenderTargets render_targets;
std::unordered_map<TICEntry, ImageViewId> image_views;
std::unordered_map<TSCEntry, SamplerId> samplers;
std::unordered_map<RenderTargets, FramebufferId> framebuffers;
std::unordered_map<u64, std::vector<ImageMapId>, IdentityHash<u64>> page_table;
std::unordered_map<u64, std::vector<ImageId>, IdentityHash<u64>> gpu_page_table;
std::unordered_map<u64, std::vector<ImageId>, IdentityHash<u64>> sparse_page_table;
std::unordered_map<ImageId, std::vector<ImageViewId>> sparse_views;
VAddr virtual_invalid_space{};
bool has_deleted_images = false;
u64 total_used_memory = 0;
u64 minimum_memory;
u64 expected_memory;
u64 critical_memory;
SlotVector<Image> slot_images;
SlotVector<ImageMapView> slot_map_views;
SlotVector<ImageView> slot_image_views;
SlotVector<ImageAlloc> slot_image_allocs;
SlotVector<Sampler> slot_samplers;
SlotVector<Framebuffer> slot_framebuffers;
// TODO: This data structure is not optimal and it should be reworked
std::vector<ImageId> uncommitted_downloads;
std::queue<std::vector<ImageId>> committed_downloads;
static constexpr size_t TICKS_TO_DESTROY = 6;
DelayedDestructionRing<Image, TICKS_TO_DESTROY> sentenced_images;
DelayedDestructionRing<ImageView, TICKS_TO_DESTROY> sentenced_image_view;
DelayedDestructionRing<Framebuffer, TICKS_TO_DESTROY> sentenced_framebuffers;
std::unordered_map<GPUVAddr, ImageAllocId> image_allocs_table;
u64 modification_tick = 0;
u64 frame_tick = 0;
typename SlotVector<Image>::Iterator deletion_iterator;
};
} // namespace VideoCommon
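// A minimal sketch (assumed, illustrative names only) of the policy type P a backend is
// expected to supply, inferred from the aliases and constants the cache reads from P above:
//
//   struct ExampleTextureCacheParams {
//       static constexpr bool ENABLE_VALIDATION = true;
//       static constexpr bool FRAMEBUFFER_BLITS = false;
//       static constexpr bool HAS_EMULATED_COPIES = false;
//       static constexpr bool HAS_DEVICE_MEMORY_INFO = true;
//       using Runtime = ExampleRuntime;
//       using Image = ExampleImage;
//       using ImageAlloc = ExampleImageAlloc;
//       using ImageView = ExampleImageView;
//       using Sampler = ExampleSampler;
//       using Framebuffer = ExampleFramebuffer;
//   };
//   using ExampleTextureCache = VideoCommon::TextureCache<ExampleTextureCacheParams>;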

View File

@@ -84,34 +84,107 @@ template <bool TO_LINEAR>
void Swizzle(std::span<u8> output, std::span<const u8> input, u32 bytes_per_pixel, u32 width,
u32 height, u32 depth, u32 block_height, u32 block_depth, u32 stride_alignment) {
switch (bytes_per_pixel) {
case 1:
return SwizzleImpl<TO_LINEAR, 1>(output, input, width, height, depth, block_height,
#define BPP_CASE(x) \
case x: \
return SwizzleImpl<TO_LINEAR, x>(output, input, width, height, depth, block_height, \
block_depth, stride_alignment);
case 2:
return SwizzleImpl<TO_LINEAR, 2>(output, input, width, height, depth, block_height,
block_depth, stride_alignment);
case 3:
return SwizzleImpl<TO_LINEAR, 3>(output, input, width, height, depth, block_height,
block_depth, stride_alignment);
case 4:
return SwizzleImpl<TO_LINEAR, 4>(output, input, width, height, depth, block_height,
block_depth, stride_alignment);
case 6:
return SwizzleImpl<TO_LINEAR, 6>(output, input, width, height, depth, block_height,
block_depth, stride_alignment);
case 8:
return SwizzleImpl<TO_LINEAR, 8>(output, input, width, height, depth, block_height,
block_depth, stride_alignment);
case 12:
return SwizzleImpl<TO_LINEAR, 12>(output, input, width, height, depth, block_height,
block_depth, stride_alignment);
case 16:
return SwizzleImpl<TO_LINEAR, 16>(output, input, width, height, depth, block_height,
block_depth, stride_alignment);
BPP_CASE(1)
BPP_CASE(2)
BPP_CASE(3)
BPP_CASE(4)
BPP_CASE(6)
BPP_CASE(8)
BPP_CASE(12)
BPP_CASE(16)
#undef BPP_CASE
default:
UNREACHABLE_MSG("Invalid bytes_per_pixel={}", bytes_per_pixel);
}
}
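// For reference, BPP_CASE(4) in the switch above expands to the same code the explicit
// case 4 previously spelled out:
//   case 4:
//       return SwizzleImpl<TO_LINEAR, 4>(output, input, width, height, depth, block_height,
//                                        block_depth, stride_alignment);
// The macro only removes the per-case repetition; bytes_per_pixel is still promoted to a
// compile-time template argument, so the copies inside the implementation see a constant size.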
template <u32 BYTES_PER_PIXEL>
void SwizzleSubrect(u32 subrect_width, u32 subrect_height, u32 source_pitch, u32 swizzled_width,
u8* swizzled_data, const u8* unswizzled_data, u32 block_height_bit,
u32 offset_x, u32 offset_y) {
const u32 block_height = 1U << block_height_bit;
const u32 image_width_in_gobs =
(swizzled_width * BYTES_PER_PIXEL + (GOB_SIZE_X - 1)) / GOB_SIZE_X;
for (u32 line = 0; line < subrect_height; ++line) {
const u32 dst_y = line + offset_y;
const u32 gob_address_y =
(dst_y / (GOB_SIZE_Y * block_height)) * GOB_SIZE * block_height * image_width_in_gobs +
((dst_y % (GOB_SIZE_Y * block_height)) / GOB_SIZE_Y) * GOB_SIZE;
const auto& table = SWIZZLE_TABLE[dst_y % GOB_SIZE_Y];
for (u32 x = 0; x < subrect_width; ++x) {
const u32 dst_x = x + offset_x;
const u32 gob_address =
gob_address_y + (dst_x * BYTES_PER_PIXEL / GOB_SIZE_X) * GOB_SIZE * block_height;
const u32 swizzled_offset = gob_address + table[(dst_x * BYTES_PER_PIXEL) % GOB_SIZE_X];
const u32 unswizzled_offset = line * source_pitch + x * BYTES_PER_PIXEL;
const u8* const source_line = unswizzled_data + unswizzled_offset;
u8* const dest_addr = swizzled_data + swizzled_offset;
std::memcpy(dest_addr, source_line, BYTES_PER_PIXEL);
}
}
}
template <u32 BYTES_PER_PIXEL>
void UnswizzleSubrect(u32 line_length_in, u32 line_count, u32 pitch, u32 width, u32 block_height,
u32 origin_x, u32 origin_y, u8* output, const u8* input) {
const u32 stride = width * BYTES_PER_PIXEL;
const u32 gobs_in_x = (stride + GOB_SIZE_X - 1) / GOB_SIZE_X;
const u32 block_size = gobs_in_x << (GOB_SIZE_SHIFT + block_height);
const u32 block_height_mask = (1U << block_height) - 1;
const u32 x_shift = GOB_SIZE_SHIFT + block_height;
for (u32 line = 0; line < line_count; ++line) {
const u32 src_y = line + origin_y;
const auto& table = SWIZZLE_TABLE[src_y % GOB_SIZE_Y];
const u32 block_y = src_y >> GOB_SIZE_Y_SHIFT;
const u32 src_offset_y = (block_y >> block_height) * block_size +
((block_y & block_height_mask) << GOB_SIZE_SHIFT);
for (u32 column = 0; column < line_length_in; ++column) {
const u32 src_x = (column + origin_x) * BYTES_PER_PIXEL;
const u32 src_offset_x = (src_x >> GOB_SIZE_X_SHIFT) << x_shift;
const u32 swizzled_offset = src_offset_y + src_offset_x + table[src_x % GOB_SIZE_X];
const u32 unswizzled_offset = line * pitch + column * BYTES_PER_PIXEL;
std::memcpy(output + unswizzled_offset, input + swizzled_offset, BYTES_PER_PIXEL);
}
}
}
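// Worked example for the source addressing above, assuming the usual GOB dimensions of
// 64 bytes x 8 rows (GOB_SIZE_X = 64, GOB_SIZE_Y = 8, GOB_SIZE = 512) and block_height = 1,
// i.e. two GOBs stacked per block:
//   src_y = 10  ->  block_y = 10 >> 3 = 1
//   src_offset_y = (1 >> 1) * block_size + (1 & 1) * 512 = 512
// so the line starts one full GOB into its block, and SWIZZLE_TABLE[src_y % 8][src_x % 64]
// then supplies each byte's position inside its GOB.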
template <u32 BYTES_PER_PIXEL>
void SwizzleSliceToVoxel(u32 line_length_in, u32 line_count, u32 pitch, u32 width, u32 height,
u32 block_height, u32 block_depth, u32 origin_x, u32 origin_y, u8* output,
const u8* input) {
UNIMPLEMENTED_IF(origin_x > 0);
UNIMPLEMENTED_IF(origin_y > 0);
const u32 stride = width * BYTES_PER_PIXEL;
const u32 gobs_in_x = (stride + GOB_SIZE_X - 1) / GOB_SIZE_X;
const u32 block_size = gobs_in_x << (GOB_SIZE_SHIFT + block_height + block_depth);
const u32 block_height_mask = (1U << block_height) - 1;
const u32 x_shift = static_cast<u32>(GOB_SIZE_SHIFT) + block_height + block_depth;
for (u32 line = 0; line < line_count; ++line) {
const auto& table = SWIZZLE_TABLE[line % GOB_SIZE_Y];
const u32 block_y = line / GOB_SIZE_Y;
const u32 dst_offset_y =
(block_y >> block_height) * block_size + (block_y & block_height_mask) * GOB_SIZE;
for (u32 x = 0; x < line_length_in; ++x) {
const u32 dst_offset =
((x / GOB_SIZE_X) << x_shift) + dst_offset_y + table[x % GOB_SIZE_X];
const u32 src_offset = x * BYTES_PER_PIXEL + line * pitch;
std::memcpy(output + dst_offset, input + src_offset, BYTES_PER_PIXEL);
}
}
}
} // Anonymous namespace
void UnswizzleTexture(std::span<u8> output, std::span<const u8> input, u32 bytes_per_pixel,
@@ -131,81 +204,67 @@ void SwizzleTexture(std::span<u8> output, std::span<const u8> input, u32 bytes_p
void SwizzleSubrect(u32 subrect_width, u32 subrect_height, u32 source_pitch, u32 swizzled_width,
u32 bytes_per_pixel, u8* swizzled_data, const u8* unswizzled_data,
u32 block_height_bit, u32 offset_x, u32 offset_y) {
const u32 block_height = 1U << block_height_bit;
const u32 image_width_in_gobs =
(swizzled_width * bytes_per_pixel + (GOB_SIZE_X - 1)) / GOB_SIZE_X;
for (u32 line = 0; line < subrect_height; ++line) {
const u32 dst_y = line + offset_y;
const u32 gob_address_y =
(dst_y / (GOB_SIZE_Y * block_height)) * GOB_SIZE * block_height * image_width_in_gobs +
((dst_y % (GOB_SIZE_Y * block_height)) / GOB_SIZE_Y) * GOB_SIZE;
const auto& table = SWIZZLE_TABLE[dst_y % GOB_SIZE_Y];
for (u32 x = 0; x < subrect_width; ++x) {
const u32 dst_x = x + offset_x;
const u32 gob_address =
gob_address_y + (dst_x * bytes_per_pixel / GOB_SIZE_X) * GOB_SIZE * block_height;
const u32 swizzled_offset = gob_address + table[(dst_x * bytes_per_pixel) % GOB_SIZE_X];
const u32 unswizzled_offset = line * source_pitch + x * bytes_per_pixel;
const u8* const source_line = unswizzled_data + unswizzled_offset;
u8* const dest_addr = swizzled_data + swizzled_offset;
std::memcpy(dest_addr, source_line, bytes_per_pixel);
}
switch (bytes_per_pixel) {
#define BPP_CASE(x) \
case x: \
return SwizzleSubrect<x>(subrect_width, subrect_height, source_pitch, swizzled_width, \
swizzled_data, unswizzled_data, block_height_bit, offset_x, \
offset_y);
BPP_CASE(1)
BPP_CASE(2)
BPP_CASE(3)
BPP_CASE(4)
BPP_CASE(6)
BPP_CASE(8)
BPP_CASE(12)
BPP_CASE(16)
#undef BPP_CASE
default:
UNREACHABLE_MSG("Invalid bytes_per_pixel={}", bytes_per_pixel);
}
}
void UnswizzleSubrect(u32 line_length_in, u32 line_count, u32 pitch, u32 width, u32 bytes_per_pixel,
u32 block_height, u32 origin_x, u32 origin_y, u8* output, const u8* input) {
const u32 stride = width * bytes_per_pixel;
const u32 gobs_in_x = (stride + GOB_SIZE_X - 1) / GOB_SIZE_X;
const u32 block_size = gobs_in_x << (GOB_SIZE_SHIFT + block_height);
const u32 block_height_mask = (1U << block_height) - 1;
const u32 x_shift = GOB_SIZE_SHIFT + block_height;
for (u32 line = 0; line < line_count; ++line) {
const u32 src_y = line + origin_y;
const auto& table = SWIZZLE_TABLE[src_y % GOB_SIZE_Y];
const u32 block_y = src_y >> GOB_SIZE_Y_SHIFT;
const u32 src_offset_y = (block_y >> block_height) * block_size +
((block_y & block_height_mask) << GOB_SIZE_SHIFT);
for (u32 column = 0; column < line_length_in; ++column) {
const u32 src_x = (column + origin_x) * bytes_per_pixel;
const u32 src_offset_x = (src_x >> GOB_SIZE_X_SHIFT) << x_shift;
const u32 swizzled_offset = src_offset_y + src_offset_x + table[src_x % GOB_SIZE_X];
const u32 unswizzled_offset = line * pitch + column * bytes_per_pixel;
std::memcpy(output + unswizzled_offset, input + swizzled_offset, bytes_per_pixel);
}
switch (bytes_per_pixel) {
#define BPP_CASE(x) \
case x: \
return UnswizzleSubrect<x>(line_length_in, line_count, pitch, width, block_height, \
origin_x, origin_y, output, input);
BPP_CASE(1)
BPP_CASE(2)
BPP_CASE(3)
BPP_CASE(4)
BPP_CASE(6)
BPP_CASE(8)
BPP_CASE(12)
BPP_CASE(16)
#undef BPP_CASE
default:
UNREACHABLE_MSG("Invalid bytes_per_pixel={}", bytes_per_pixel);
}
}
void SwizzleSliceToVoxel(u32 line_length_in, u32 line_count, u32 pitch, u32 width, u32 height,
u32 bytes_per_pixel, u32 block_height, u32 block_depth, u32 origin_x,
u32 origin_y, u8* output, const u8* input) {
UNIMPLEMENTED_IF(origin_x > 0);
UNIMPLEMENTED_IF(origin_y > 0);
const u32 stride = width * bytes_per_pixel;
const u32 gobs_in_x = (stride + GOB_SIZE_X - 1) / GOB_SIZE_X;
const u32 block_size = gobs_in_x << (GOB_SIZE_SHIFT + block_height + block_depth);
const u32 block_height_mask = (1U << block_height) - 1;
const u32 x_shift = static_cast<u32>(GOB_SIZE_SHIFT) + block_height + block_depth;
for (u32 line = 0; line < line_count; ++line) {
const auto& table = SWIZZLE_TABLE[line % GOB_SIZE_Y];
const u32 block_y = line / GOB_SIZE_Y;
const u32 dst_offset_y =
(block_y >> block_height) * block_size + (block_y & block_height_mask) * GOB_SIZE;
for (u32 x = 0; x < line_length_in; ++x) {
const u32 dst_offset =
((x / GOB_SIZE_X) << x_shift) + dst_offset_y + table[x % GOB_SIZE_X];
const u32 src_offset = x * bytes_per_pixel + line * pitch;
std::memcpy(output + dst_offset, input + src_offset, bytes_per_pixel);
}
switch (bytes_per_pixel) {
#define BPP_CASE(x) \
case x: \
return SwizzleSliceToVoxel<x>(line_length_in, line_count, pitch, width, height, \
block_height, block_depth, origin_x, origin_y, output, \
input);
BPP_CASE(1)
BPP_CASE(2)
BPP_CASE(3)
BPP_CASE(4)
BPP_CASE(6)
BPP_CASE(8)
BPP_CASE(12)
BPP_CASE(16)
#undef BPP_CASE
default:
UNREACHABLE_MSG("Invalid bytes_per_pixel={}", bytes_per_pixel);
}
}
@@ -228,7 +287,7 @@ void SwizzleKepler(const u32 width, const u32 height, const u32 dst_x, const u32
u8* dest_addr = swizzle_data + swizzled_offset;
count++;
std::memcpy(dest_addr, source_line, 1);
*dest_addr = *source_line;
}
}
}
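// The one-byte std::memcpy and the direct assignment shown in this hunk are equivalent; the
// assignment simply expresses the single-byte copy more directly.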

View File

@@ -102,9 +102,9 @@ add_executable(yuzu
configuration/configure_profile_manager.cpp
configuration/configure_profile_manager.h
configuration/configure_profile_manager.ui
configuration/configure_service.cpp
configuration/configure_service.h
configuration/configure_service.ui
configuration/configure_network.cpp
configuration/configure_network.h
configuration/configure_network.ui
configuration/configure_system.cpp
configuration/configure_system.h
configuration/configure_system.ui

View File

@@ -692,6 +692,7 @@ void Config::ReadServiceValues() {
qt_config->beginGroup(QStringLiteral("Services"));
ReadBasicSetting(Settings::values.bcat_backend);
ReadBasicSetting(Settings::values.bcat_boxcat_local);
ReadBasicSetting(Settings::values.network_interface);
qt_config->endGroup();
}
@@ -1144,7 +1145,7 @@ void Config::SaveValues() {
SaveDataStorageValues();
SaveDebuggingValues();
SaveDisabledAddOnValues();
SaveServiceValues();
SaveNetworkValues();
SaveUIValues();
SaveWebServiceValues();
SaveMiscellaneousValues();
@@ -1238,11 +1239,12 @@ void Config::SaveDebuggingValues() {
qt_config->endGroup();
}
void Config::SaveServiceValues() {
void Config::SaveNetworkValues() {
qt_config->beginGroup(QStringLiteral("Services"));
WriteBasicSetting(Settings::values.bcat_backend);
WriteBasicSetting(Settings::values.bcat_boxcat_local);
WriteBasicSetting(Settings::values.network_interface);
qt_config->endGroup();
}
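// Both the read and the write path above keep network_interface in the existing "Services"
// group of the qt-config file, alongside the BCAT settings.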

View File

@@ -88,7 +88,7 @@ private:
void SaveCoreValues();
void SaveDataStorageValues();
void SaveDebuggingValues();
void SaveServiceValues();
void SaveNetworkValues();
void SaveDisabledAddOnValues();
void SaveMiscellaneousValues();
void SavePathValues();

View File

@@ -147,12 +147,12 @@
<string>Web</string>
</attribute>
</widget>
<widget class="ConfigureService" name="serviceTab">
<widget class="ConfigureNetwork" name="networkTab">
<property name="accessibleName">
<string>Services</string>
<string>Network</string>
</property>
<attribute name="title">
<string>Services</string>
<string>Network</string>
</attribute>
</widget>
</widget>
@@ -242,9 +242,9 @@
<container>1</container>
</customwidget>
<customwidget>
<class>ConfigureService</class>
<class>ConfigureNetwork</class>
<extends>QWidget</extends>
<header>configuration/configure_service.h</header>
<header>configuration/configure_network.h</header>
<container>1</container>
</customwidget>
<customwidget>

View File

@@ -67,7 +67,7 @@ void ConfigureDialog::ApplyConfiguration() {
ui->audioTab->ApplyConfiguration();
ui->debugTab->ApplyConfiguration();
ui->webTab->ApplyConfiguration();
ui->serviceTab->ApplyConfiguration();
ui->networkTab->ApplyConfiguration();
Core::System::GetInstance().ApplySettings();
Settings::LogSettings();
}
@@ -103,7 +103,7 @@ Q_DECLARE_METATYPE(QList<QWidget*>);
void ConfigureDialog::PopulateSelectionList() {
const std::array<std::pair<QString, QList<QWidget*>>, 6> items{
{{tr("General"), {ui->generalTab, ui->hotkeysTab, ui->uiTab, ui->webTab, ui->debugTab}},
{tr("System"), {ui->systemTab, ui->profileManagerTab, ui->serviceTab, ui->filesystemTab}},
{tr("System"), {ui->systemTab, ui->profileManagerTab, ui->networkTab, ui->filesystemTab}},
{tr("CPU"), {ui->cpuTab}},
{tr("Graphics"), {ui->graphicsTab, ui->graphicsAdvancedTab}},
{tr("Audio"), {ui->audioTab}},

View File

@@ -24,6 +24,36 @@
<layout class="QHBoxLayout" name="GeneralHorizontalLayout">
<item>
<layout class="QVBoxLayout" name="GeneralVerticalLayout">
<item>
<layout class="QHBoxLayout" name="horizontalLayout_2">
<item>
<widget class="QLabel" name="fps_cap_label">
<property name="text">
<string>Framerate Cap</string>
</property>
<property name="toolTip">
<string>Requires the use of the FPS Limiter Toggle hotkey to take effect.</string>
</property>
</widget>
</item>
<item>
<widget class="QSpinBox" name="fps_cap">
<property name="suffix">
<string>x</string>
</property>
<property name="minimum">
<number>1</number>
</property>
<property name="maximum">
<number>1000</number>
</property>
<property name="value">
<number>500</number>
</property>
</widget>
</item>
</layout>
</item>
<item>
<layout class="QHBoxLayout" name="horizontalLayout_2">
<item>
@@ -51,36 +81,6 @@
</item>
</layout>
</item>
<item>
<layout class="QHBoxLayout" name="horizontalLayout_2">
<item>
<widget class="QLabel" name="fps_cap_label">
<property name="text">
<string>Framerate Cap</string>
</property>
<property name="toolTip">
<string>Requires the use of the FPS Limiter Toggle hotkey to take effect.</string>
</property>
</widget>
</item>
<item>
<widget class="QSpinBox" name="fps_cap">
<property name="suffix">
<string>x</string>
</property>
<property name="minimum">
<number>1</number>
</property>
<property name="maximum">
<number>1000</number>
</property>
<property name="value">
<number>500</number>
</property>
</widget>
</item>
</layout>
</item>
<item>
<widget class="QCheckBox" name="use_multi_core">
<property name="text">

View File

@@ -5,9 +5,11 @@
#include <QGraphicsItem>
#include <QtConcurrent/QtConcurrent>
#include "common/settings.h"
#include "core/core.h"
#include "core/hle/service/bcat/backend/boxcat.h"
#include "ui_configure_service.h"
#include "yuzu/configuration/configure_service.h"
#include "core/network/network_interface.h"
#include "ui_configure_network.h"
#include "yuzu/configuration/configure_network.h"
#ifdef YUZU_ENABLE_BOXCAT
namespace {
@@ -35,8 +37,8 @@ QString FormatEventStatusString(const Service::BCAT::EventStatus& status) {
} // Anonymous namespace
#endif
ConfigureService::ConfigureService(QWidget* parent)
: QWidget(parent), ui(std::make_unique<Ui::ConfigureService>()) {
ConfigureNetwork::ConfigureNetwork(QWidget* parent)
: QWidget(parent), ui(std::make_unique<Ui::ConfigureNetwork>()) {
ui->setupUi(this);
ui->bcat_source->addItem(QStringLiteral("None"));
@@ -47,29 +49,42 @@ ConfigureService::ConfigureService(QWidget* parent)
ui->bcat_source->addItem(QStringLiteral("Boxcat"), QStringLiteral("boxcat"));
#endif
ui->network_interface->addItem(tr("None"));
for (const auto& iface : Network::GetAvailableNetworkInterfaces()) {
ui->network_interface->addItem(QString::fromStdString(iface.name));
}
connect(ui->bcat_source, QOverload<int>::of(&QComboBox::currentIndexChanged), this,
&ConfigureService::OnBCATImplChanged);
&ConfigureNetwork::OnBCATImplChanged);
this->SetConfiguration();
}
ConfigureService::~ConfigureService() = default;
ConfigureNetwork::~ConfigureNetwork() = default;
void ConfigureService::ApplyConfiguration() {
void ConfigureNetwork::ApplyConfiguration() {
Settings::values.bcat_backend = ui->bcat_source->currentText().toLower().toStdString();
Settings::values.network_interface = ui->network_interface->currentText().toStdString();
}
void ConfigureService::RetranslateUi() {
void ConfigureNetwork::RetranslateUi() {
ui->retranslateUi(this);
}
void ConfigureService::SetConfiguration() {
void ConfigureNetwork::SetConfiguration() {
const bool runtime_lock = !Core::System::GetInstance().IsPoweredOn();
const int index =
ui->bcat_source->findData(QString::fromStdString(Settings::values.bcat_backend.GetValue()));
ui->bcat_source->setCurrentIndex(index == -1 ? 0 : index);
const std::string& network_interface = Settings::values.network_interface.GetValue();
ui->network_interface->setCurrentText(QString::fromStdString(network_interface));
ui->network_interface->setEnabled(runtime_lock);
}
std::pair<QString, QString> ConfigureService::BCATDownloadEvents() {
std::pair<QString, QString> ConfigureNetwork::BCATDownloadEvents() {
#ifdef YUZU_ENABLE_BOXCAT
std::optional<std::string> global;
std::map<std::string, Service::BCAT::EventStatus> map;
@@ -114,7 +129,7 @@ std::pair<QString, QString> ConfigureService::BCATDownloadEvents() {
#endif
}
void ConfigureService::OnBCATImplChanged() {
void ConfigureNetwork::OnBCATImplChanged() {
#ifdef YUZU_ENABLE_BOXCAT
const auto boxcat = ui->bcat_source->currentText() == QStringLiteral("Boxcat");
ui->bcat_empty_header->setHidden(!boxcat);
@@ -133,7 +148,7 @@ void ConfigureService::OnBCATImplChanged() {
#endif
}
void ConfigureService::OnUpdateBCATEmptyLabel(std::pair<QString, QString> string) {
void ConfigureNetwork::OnUpdateBCATEmptyLabel(std::pair<QString, QString> string) {
#ifdef YUZU_ENABLE_BOXCAT
const auto boxcat = ui->bcat_source->currentText() == QStringLiteral("Boxcat");
if (boxcat) {

View File

@@ -9,15 +9,15 @@
#include <QWidget>
namespace Ui {
class ConfigureService;
class ConfigureNetwork;
}
class ConfigureService : public QWidget {
class ConfigureNetwork : public QWidget {
Q_OBJECT
public:
explicit ConfigureService(QWidget* parent = nullptr);
~ConfigureService() override;
explicit ConfigureNetwork(QWidget* parent = nullptr);
~ConfigureNetwork() override;
void ApplyConfiguration();
void RetranslateUi();
@@ -29,6 +29,6 @@ private:
void OnBCATImplChanged();
void OnUpdateBCATEmptyLabel(std::pair<QString, QString> string);
std::unique_ptr<Ui::ConfigureService> ui;
std::unique_ptr<Ui::ConfigureNetwork> ui;
QFutureWatcher<std::pair<QString, QString>> watcher{this};
};

View File

@@ -1,7 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<ui version="4.0">
<class>ConfigureService</class>
<widget class="QWidget" name="ConfigureService">
<class>ConfigureNetwork</class>
<widget class="QWidget" name="ConfigureNetwork">
<property name="geometry">
<rect>
<x>0</x>
@@ -16,22 +16,38 @@
<layout class="QVBoxLayout" name="verticalLayout">
<item>
<layout class="QVBoxLayout" name="verticalLayout_3">
<item>
<widget class="QGroupBox" name="groupBox_2">
<property name="title">
<string>General</string>
</property>
<layout class="QGridLayout" name="gridLayout_2">
<item row="1" column="1">
<widget class="QComboBox" name="network_interface"/>
</item>
<item row="1" column="0">
<widget class="QLabel" name="label_4">
<property name="text">
<string>Network Interface</string>
</property>
</widget>
</item>
</layout>
</widget>
</item>
<item>
<widget class="QGroupBox" name="groupBox">
<property name="title">
<string>BCAT</string>
</property>
<layout class="QGridLayout" name="gridLayout">
<item row="1" column="1" colspan="2">
<widget class="QLabel" name="label_2">
<property name="maximumSize">
<size>
<width>260</width>
<height>16777215</height>
</size>
</property>
<item row="3" column="0">
<widget class="QLabel" name="bcat_empty_header">
<property name="text">
<string>BCAT is Nintendo's way of sending data to games to engage its community and unlock additional content.</string>
<string/>
</property>
<property name="alignment">
<set>Qt::AlignLeading|Qt::AlignLeft|Qt::AlignTop</set>
</property>
<property name="wordWrap">
<bool>true</bool>
@@ -51,11 +67,8 @@
</property>
</widget>
</item>
<item row="3" column="1" colspan="2">
<widget class="QLabel" name="bcat_empty_label">
<property name="enabled">
<bool>true</bool>
</property>
<item row="1" column="1" colspan="2">
<widget class="QLabel" name="label_2">
<property name="maximumSize">
<size>
<width>260</width>
@@ -63,10 +76,7 @@
</size>
</property>
<property name="text">
<string/>
</property>
<property name="alignment">
<set>Qt::AlignLeading|Qt::AlignLeft|Qt::AlignTop</set>
<string>BCAT is Nintendo's way of sending data to games to engage its community and unlock additional content.</string>
</property>
<property name="wordWrap">
<bool>true</bool>
@@ -86,8 +96,17 @@
<item row="0" column="1" colspan="2">
<widget class="QComboBox" name="bcat_source"/>
</item>
<item row="3" column="0">
<widget class="QLabel" name="bcat_empty_header">
<item row="3" column="1" colspan="2">
<widget class="QLabel" name="bcat_empty_label">
<property name="enabled">
<bool>true</bool>
</property>
<property name="maximumSize">
<size>
<width>260</width>
<height>16777215</height>
</size>
</property>
<property name="text">
<string/>
</property>

View File

@@ -2814,8 +2814,6 @@ void GMainWindow::OnToggleFilterBar() {
}
void GMainWindow::OnCaptureScreenshot() {
OnPauseGame();
const u64 title_id = Core::System::GetInstance().CurrentProcess()->GetTitleID();
const auto screenshot_path =
QString::fromStdString(Common::FS::GetYuzuPathString(Common::FS::YuzuPath::ScreenshotsDir));
@@ -2827,23 +2825,22 @@ void GMainWindow::OnCaptureScreenshot() {
.arg(date);
if (!Common::FS::CreateDir(screenshot_path.toStdString())) {
OnStartGame();
return;
}
#ifdef _WIN32
if (UISettings::values.enable_screenshot_save_as) {
OnPauseGame();
filename = QFileDialog::getSaveFileName(this, tr("Capture Screenshot"), filename,
tr("PNG Image (*.png)"));
OnStartGame();
if (filename.isEmpty()) {
OnStartGame();
return;
}
}
#endif
render_window->CaptureScreenshot(UISettings::values.screenshot_resolution_factor.GetValue(),
filename);
OnStartGame();
}
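// With this change the game appears to be paused only while the "Capture Screenshot" save
// dialog is open (the Windows save-as path), rather than for the whole capture.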
// TODO: Written 2020-10-01: Remove per-game config migration code when it is irrelevant