Compare commits

...

66 Commits

Author SHA1 Message Date
Lioncash
44556dc21a hid/gesture: Factor out last gesture retrieval into its own function
Deduplicates a commonly repeated expression.
2021-05-18 03:59:44 -04:00
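As a rough illustration of the deduplication described above (hypothetical names, not the project's actual gesture code), the repeated "last gesture" expression becomes a single helper:

#include <array>
#include <cstddef>

struct GestureState { /* fields omitted for the sketch */ };

class GestureSketch {
public:
    // The commonly repeated lookup, factored out once.
    GestureState& GetLastGesture() {
        return gesture_history[last_gesture_index];
    }

private:
    std::array<GestureState, 17> gesture_history{}; // size chosen arbitrarily for the sketch
    std::size_t last_gesture_index{};
};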
Lioncash
a9d8e24e47 hid/gesture: Ensure all ID arrays are initialized
Makes for deterministic initial state.
2021-05-18 03:39:21 -04:00
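For context, value-initialization is what provides that deterministic start state; a minimal sketch with assumed member names:

#include <array>
#include <cstdint>

// Empty braces value-initialize every element, so the initial state no
// longer depends on whatever happened to be in memory beforehand.
std::array<std::int32_t, 16> finger_ids{};
std::array<float, 16> finger_x{};
std::array<float, 16> finger_y{};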
Lioncash
74f30c0223 hid/gesture: Make Point a template
We can now use this in a generic context and reuse it for the finger
position.

2021-05-18 03:39:18 -04:00
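A minimal sketch of what a templated Point might look like (illustrative only, not the actual diff):

template <typename T>
struct Point {
    T x{};
    T y{};
};

// The same type now works for integer gesture coordinates and for
// floating-point finger positions without duplicating the struct.
Point<int> gesture_pos{120, 240};
Point<float> finger_pos{0.5f, 0.25f};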
Lioncash
20699e90fa hid/gesture: Replace x,y members of GestureState with a Point
Simplifies assignments.
2021-05-18 03:32:42 -04:00
Lioncash
2f1ef3910b hid/gesture: Add default comparators to Point
Simplifies some comparisons.
2021-05-18 03:32:42 -04:00
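Assuming C++20 defaulted comparisons (an assumption about the toolchain, not stated in the diff), the "default comparators" idea reads roughly as:

struct PointSketch {
    int x{};
    int y{};

    // The compiler generates member-wise equality, so call sites can
    // simply write `a == b` instead of comparing x and y by hand.
    constexpr bool operator==(const PointSketch&) const = default;
};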
Lioncash
60831eabd9 hid/gesture: Rename Points to Point
This only represents a single point
2021-05-18 03:32:38 -04:00
Mat M
b462618ed7 Merge pull request #6328 from Morph1984/enforce-c4715
CMakeLists: Enforce C4715 on MSVC
2021-05-17 13:20:58 -04:00
bunnei
e8269fe3bc Merge pull request #6327 from Morph1984/duplicate_labels
configure_debug: Fix duplicate labels
2021-05-17 06:18:10 -07:00
Morph
d001687ca6 CMakeLists: Enforce C4715 on MSVC
This is similar to -Werror=return-type
2021-05-17 03:48:58 -04:00
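For reference, C4715 ("'function': not all control paths return a value") catches the same mistake as GCC/Clang's return-type warning; a small example that would now fail to build:

// Rejected under MSVC /we4715 or GCC/Clang -Werror=return-type.
int Classify(int value) {
    if (value > 0) {
        return 1;
    }
    if (value < 0) {
        return -1;
    }
    // Missing return for value == 0: control falls off the end.
}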
Morph
cd6dcef5aa configure_debug: Fix duplicate labels
Duplicate labels were unintentionally introduced due to copy-paste. This silences the compilation warning produced by the presence of these duplicates.
2021-05-16 23:32:51 -04:00
bunnei
0a74d8490a Merge pull request #6326 from Morph1984/fix-version
yuzu/main: Fix version info in logging and about dialog
2021-05-16 20:09:54 -07:00
Morph
af69b48390 yuzu/main: Fix version info in logging and about dialog 2021-05-16 22:17:17 -04:00
bunnei
440eb840ea Merge pull request #6319 from Morph1984/no-install-base
main: Prevent installing base titles into NAND
2021-05-16 16:33:33 -07:00
Ameer J
bfe8816f7c Merge pull request #6324 from lat9nq/appimage-freeze
ci: linux: Freeze AppImage binaries
2021-05-16 14:43:02 -04:00
lat9nq
9ec26a805a ci: linux: Freeze AppImage binaries
A regression was introduced on May 13 by linuxdeploy that causes file
open dialogs to crash yuzu in the AppImage (likely this commit
1e28ee38fa174279defe70cdaadf2a552c80258c from
linuxdeploy/linuxdeploy-desktopfile). Instead of downloading the latest
version from each of the repos we use to build the AppImage, just
download the ones hosted at yuzu-emu/ext-linux-bin. These are the same
binaries we have been using, verified to work, and they will not update
out from under us.

This can eventually be moved into the container itself to remove the
need to download them at build time.
2021-05-16 05:07:49 -04:00
bunnei
d5131805ce Merge pull request #6284 from ameerj/shantae-fix
nvflinger: Create layers when they are queried but not found
2021-05-16 01:45:14 -07:00
bunnei
ad6e20cfde Merge pull request #6296 from lioncash/shadow-error
core: Make variable shadowing a compile-time error
2021-05-16 01:35:46 -07:00
bunnei
e8d2de1f99 Merge pull request #6307 from Morph1984/fix-response-push-size
nifm, ssl: Fix incorrect response sizes
2021-05-16 01:32:04 -07:00
Morph
a170aa16b6 main: Prevent installing base titles into NAND
Many users have been installing their base titles into NAND instead of adding them to the games list. This change prevents installing any base titles into NAND and warns the user when they attempt to do so.
2021-05-16 04:13:57 -04:00
Morph
9edfd88a8a Merge pull request #6293 from v1993/master
On Linux, build SDL2 from externals with HIDAPI support
2021-05-16 04:05:42 -04:00
Lioncash
9a07ed53eb core: Make variable shadowing a compile-time error
Now that we have most of core free of shadowing, we can enable the
warning as an error to catch anything that remains and eliminate this
class of logic bug entirely.
2021-05-16 03:43:16 -04:00
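The class of logic bug being targeted is a local declaration silently hiding an outer variable or member; a small illustrative example:

// Rejected under GCC/Clang -Werror=shadow, or MSVC /we4458
// ("declaration of 'identifier' hides class member").
struct Counter {
    int count = 0;

    void Add(int amount) {
        int count = this->count + amount; // shadows the member...
        (void)count;                      // ...so the member is never updated.
    }
};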
bunnei
06c410ee88 Merge pull request #6316 from ameerj/title-fix
main: Add running title's version to window name on EA/mainline
2021-05-15 22:40:35 -07:00
bunnei
5a2b15bf75 Merge pull request #6299 from bunnei/ipc-improvements
Various improvements to IPC and session management
2021-05-15 22:30:21 -07:00
bunnei
a1138028a8 Merge pull request #6289 from ameerj/oob-blit
texture_cache: Handle out of bound texture blits
2021-05-15 21:32:37 -07:00
Morph
faaea00069 nifm, ssl: Fix incorrect response sizes 2021-05-16 00:20:48 -04:00
Morph
6c78c2ae38 Merge pull request #6244 from german77/sdlmotion
input_common: Implement SDL motion
2021-05-15 23:20:18 -04:00
ameerj
a3e68dce56 main: Add title's version to window name on EA/mainline
Fixes the missing title version number on EA/mainline builds, which override the title bar string.
2021-05-15 16:55:30 -04:00
german77
f20f4587e6 input_common: Implement SDL motion 2021-05-15 08:56:58 -05:00
Ameer J
904584e4ba Merge pull request #6300 from Morph1984/mbedtls
externals: Update mbedtls to 8c88150ca
2021-05-13 23:11:32 -04:00
Morph
0949e38263 Merge pull request #6306 from lat9nq/ffmpeg-untagged
externals: Checkout 79e8d17024 for FFmpeg
2021-05-13 04:59:48 -04:00
lat9nq
0ecb6c6647 externals: Checkout 79e8d17024 for FFmpeg
6b6b9e593d does not exist on FFmpeg master, and tag n4.3.1 requires
manually fetching all of FFmpeg's tags. `git` initially reports that the
commit does not exist, which can be confusing. Instead, check out the
commit on their master branch immediately preceding n4.3.1.
2021-05-13 04:53:59 -04:00
bunnei
e12ee020e7 Merge pull request #6301 from Morph1984/ssl-ImportClientPki
ssl: Stub Import(Client/Server)Pki
2021-05-12 22:11:19 -07:00
Morph
c8707628f6 Merge pull request #6298 from Kewlan/toggled-show-add-on-refresh
configure_ui: Call RequestGameListUpdate when toggling "Show Add-Ons Column"
2021-05-12 21:06:04 -04:00
Morph
271f2e2d78 ssl: Stub Import(Client/Server)Pki
- Used in JUMP FORCE Deluxe Edition
2021-05-12 21:04:13 -04:00
Morph
5a042bdaa1 Merge pull request #6267 from german77/gestureRewrite
hid: Improve hardware accuracy of gestures
2021-05-12 09:17:23 -04:00
bunnei
eee302b9b9 common: tree: Avoid a nullptr dereference. 2021-05-11 15:40:20 -07:00
bunnei
12d569e483 hle: kernel: hle_ipc: Fix outgoing IPC response size calculation. 2021-05-11 12:27:43 -07:00
bunnei
fc086f93b2 WORKAROUND: temp. disable session resource limits while we work out issues 2021-05-11 10:51:39 -07:00
bunnei
f2c26443f8 WORKAROUND: Do not use slab heap while we track down issues with resource management. 2021-05-11 10:27:18 -07:00
bunnei
b9f543b29f audren 2021-05-11 10:24:53 -07:00
Morph
02547439b1 externals: Update mbedtls to 8c88150ca 2021-05-11 00:43:04 -04:00
bunnei
343d92a092 core: hle: ipc_helpers: Fix cast on raw_data_size calculation. 2021-05-10 20:34:38 -07:00
bunnei
2c1e119c4a hle: service: sm: Add TIPC support.
- Fixes our error checking of names as well.
2021-05-10 20:34:38 -07:00
bunnei
913971417e hle: kernel: hle_ipc: Improve IPC code and add initial support for TIPC.
- Fixes our move handles implementation to actually move objects.
- Simplifies the traditional IPC path.
2021-05-10 20:34:38 -07:00
bunnei
49c4c329f6 hle: service: sm: GetService: Reserve session resource when we create a KSession. 2021-05-10 20:34:38 -07:00
bunnei
21671d05a3 hle: service: Add support for dispatching TIPC requests. 2021-05-10 20:34:38 -07:00
bunnei
da25a59866 hle: service: Implement IPC::CommandType::Close.
- This was not actually closing sessions before.
2021-05-10 20:34:38 -07:00
bunnei
41928dfdda hle: service: sm: Use RegisterNamedService to register the service. 2021-05-10 20:34:38 -07:00
bunnei
934b2d8842 hle: service: sm: Improve Initialize implementation. 2021-05-10 20:34:38 -07:00
bunnei
f54ea749a4 hle: kernel: svc: Update ConnectToNamedPort to use new CreateNamedServicePort interface. 2021-05-10 20:34:38 -07:00
bunnei
c6de9657be hle: kernel: Implement named service ports using service interface factory.
- This allows us to create a new interface each time ConnectToNamedPort is called, removing the assumption that these are static.
2021-05-10 20:34:38 -07:00
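Conceptually, the kernel now stores a factory callable per service name and invokes it on every ConnectToNamedPort instead of handing out a single static port. A hedged sketch of the idea with stand-in types (names are assumptions, not the exact yuzu API):

#include <functional>
#include <string>
#include <unordered_map>

class System {};       // stand-in
class KClientPort {};  // stand-in

using ServiceInterfaceFactory = std::function<KClientPort&(System&)>;

class KernelSketch {
public:
    void RegisterNamedService(std::string name, ServiceInterfaceFactory&& factory) {
        factories.emplace(std::move(name), std::move(factory));
    }

    // svcConnectToNamedPort path: build a fresh interface per connection.
    KClientPort* CreateNamedServicePort(System& system, const std::string& name) {
        const auto it = factories.find(name);
        return it != factories.end() ? &it->second(system) : nullptr;
    }

private:
    std::unordered_map<std::string, ServiceInterfaceFactory> factories;
};

Each HLE service would then register a lambda that constructs and returns its own port, which is what removes the static-port assumption mentioned in the commit.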
bunnei
44c763f9c6 hle: kernel: KSession: Improve implementation of CloneCurrentObject. 2021-05-10 20:33:53 -07:00
bunnei
cfed6936f3 hle: service: sm: Increase point buffer size. 2021-05-10 15:43:42 -07:00
bunnei
9f44a44f2f hle: ipc_helpers: Reserve session resource when we create a KSession. 2021-05-10 15:42:46 -07:00
bunnei
75f23ad494 hle: kernel: KClientPort: Cleanup comment format. 2021-05-10 15:41:46 -07:00
bunnei
7a06037c5f hle: ipc: Add declarations for TIPC. 2021-05-10 15:05:10 -07:00
bunnei
ed25191ee6 hle: kernel: Further cleanup and add TIPC helpers. 2021-05-10 15:05:10 -07:00
bunnei
d08bd3e062 hle: ipc_helpers: Update IPC response generation for TIPC. 2021-05-10 15:05:10 -07:00
Kewlan
1b4331397b configure_ui: Call RequestGameListUpdate when toggling "Show Add-Ons Column" 2021-05-10 18:49:30 +02:00
bunnei
ec50a9b5b9 Merge pull request #6291 from lioncash/kern-shadow
kernel: Eliminate variable shadowing
2021-05-09 20:15:00 -07:00
Morph
bb7d4ec3d3 Merge pull request #6294 from german77/kernelCleanup
kernel: Delete unused files
2021-05-09 12:22:44 -04:00
german77
0c1bb46f0a kernel: Delete unused files 2021-05-09 11:15:31 -05:00
Lioncash
2f62bae9e3 kernel: Eliminate variable shadowing
Now that the large kernel refactor is merged, we can eliminate the
remaining variable shadowing cases.
2021-05-08 12:33:26 -04:00
ameerj
3671fd0a97 texture_cache: Handle out of bound texture blits
Some games interleave texture blits using regions that are out of bounds. This addresses those blits to avoid out-of-bounds reads from the source texture.
2021-05-07 22:14:21 -04:00
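The usual fix for this class of problem is to clamp the requested blit rectangle to the source image's extent before copying; a simplified sketch with assumed names:

#include <algorithm>

struct Extent2D {
    int width{};
    int height{};
};

struct BlitRegion {
    int x0{};
    int y0{};
    int x1{};
    int y1{};
};

// Clamp a requested region so the copy never reads outside the source image.
BlitRegion ClampBlitRegion(BlitRegion region, Extent2D extent) {
    region.x0 = std::clamp(region.x0, 0, extent.width);
    region.x1 = std::clamp(region.x1, 0, extent.width);
    region.y0 = std::clamp(region.y0, 0, extent.height);
    region.y1 = std::clamp(region.y1, 0, extent.height);
    return region;
}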
ameerj
da62e92784 nvflinger: Create layers when they are queried but not found
Fixes Shantae softlock on boot.
2021-05-06 11:20:52 -04:00
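The pattern is lookup-or-create: rather than treating a missing layer as an error, the layer is created on demand when queried. A hedged sketch with stand-in types (not the actual nvflinger code):

#include <cstdint>
#include <memory>
#include <unordered_map>

struct Layer {
    std::uint64_t id{};
};

class DisplaySketch {
public:
    Layer* FindLayer(std::uint64_t layer_id) {
        const auto it = layers.find(layer_id);
        return it != layers.end() ? it->second.get() : nullptr;
    }

    // Create-on-miss: games that query a layer before creating it no
    // longer hit a dead end, which is what caused the reported softlock.
    Layer* FindOrCreateLayer(std::uint64_t layer_id) {
        if (Layer* layer = FindLayer(layer_id)) {
            return layer;
        }
        const auto [it, inserted] =
            layers.emplace(layer_id, std::make_unique<Layer>(Layer{layer_id}));
        return it->second.get();
    }

private:
    std::unordered_map<std::uint64_t, std::unique_ptr<Layer>> layers;
};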
german77
8c30ed6d09 hid: Improve hardware accuracy of gestures 2021-05-05 10:13:09 -05:00
173 changed files with 1554 additions and 977 deletions

View File

@@ -30,10 +30,10 @@ make install DESTDIR=AppDir
rm -vf AppDir/usr/bin/yuzu-cmd AppDir/usr/bin/yuzu-tester
# Download tools needed to build an AppImage
wget -nc https://github.com/linuxdeploy/linuxdeploy/releases/download/continuous/linuxdeploy-x86_64.AppImage
wget -nc https://github.com/linuxdeploy/linuxdeploy-plugin-qt/releases/download/continuous/linuxdeploy-plugin-qt-x86_64.AppImage
wget -nc https://github.com/darealshinji/AppImageKit-checkrt/releases/download/continuous/AppRun-patched-x86_64
wget -nc https://github.com/darealshinji/AppImageKit-checkrt/releases/download/continuous/exec-x86_64.so
wget -nc https://github.com/yuzu-emu/ext-linux-bin/raw/main/appimage/linuxdeploy-x86_64.AppImage
wget -nc https://github.com/yuzu-emu/ext-linux-bin/raw/main/appimage/linuxdeploy-plugin-qt-x86_64.AppImage
wget -nc https://github.com/yuzu-emu/ext-linux-bin/raw/main/appimage/AppRun-patched-x86_64
wget -nc https://github.com/yuzu-emu/ext-linux-bin/raw/main/appimage/exec-x86_64.so
# Set executable bit
chmod 755 \
AppRun-patched-x86_64 \

View File

@@ -21,7 +21,7 @@ cp build/bin/yuzu "$DIR_NAME"
# Build an AppImage
cd build
wget -nc https://github.com/AppImage/AppImageKit/releases/download/continuous/appimagetool-x86_64.AppImage
wget -nc https://github.com/yuzu-emu/ext-linux-bin/raw/main/appimage/appimagetool-x86_64.AppImage
chmod 755 appimagetool-x86_64.AppImage
if [ "${RELEASE_NAME}" = "mainline" ]; then

View File

@@ -54,6 +54,7 @@ if (MSVC)
/we4547 # 'operator' : operator before comma has no effect; expected operator with side-effect
/we4549 # 'operator1': operator before comma has no effect; did you intend 'operator2'?
/we4555 # Expression has no effect; expected expression with side-effect
/we4715 # 'function': not all control paths return a value
/we4834 # Discarding return value of function with 'nodiscard' attribute
/we5038 # data member 'member1' will be initialized after data member 'member2'
)

View File

@@ -322,7 +322,7 @@ void RB_INSERT_COLOR(RBHead<Node>* head, Node* elm) {
template <typename Node>
void RB_REMOVE_COLOR(RBHead<Node>* head, Node* parent, Node* elm) {
Node* tmp;
while ((elm == nullptr || RB_IS_BLACK(elm)) && elm != head->Root()) {
while ((elm == nullptr || RB_IS_BLACK(elm)) && elm != head->Root() && parent != nullptr) {
if (RB_LEFT(parent) == elm) {
tmp = RB_RIGHT(parent);
if (RB_IS_RED(tmp)) {

View File

@@ -651,20 +651,17 @@ endif()
if (MSVC)
target_compile_options(core PRIVATE
# 'expression' : signed/unsigned mismatch
/we4018
# 'argument' : conversion from 'type1' to 'type2', possible loss of data (floating-point)
/we4244
# 'conversion' : conversion from 'type1' to 'type2', signed/unsigned mismatch
/we4245
# 'operator': conversion from 'type1:field_bits' to 'type2:field_bits', possible loss of data
/we4254
# 'var' : conversion from 'size_t' to 'type', possible loss of data
/we4267
# 'context' : truncation from 'type1' to 'type2'
/we4305
# 'function' : not all control paths return a value
/we4715
/we4018 # 'expression' : signed/unsigned mismatch
/we4244 # 'argument' : conversion from 'type1' to 'type2', possible loss of data (floating-point)
/we4245 # 'conversion' : conversion from 'type1' to 'type2', signed/unsigned mismatch
/we4254 # 'operator': conversion from 'type1:field_bits' to 'type2:field_bits', possible loss of data
/we4267 # 'var' : conversion from 'size_t' to 'type', possible loss of data
/we4305 # 'context' : truncation from 'type1' to 'type2'
/we4456 # Declaration of 'identifier' hides previous local declaration
/we4457 # Declaration of 'identifier' hides function parameter
/we4458 # Declaration of 'identifier' hides class member
/we4459 # Declaration of 'identifier' hides global declaration
/we4715 # 'function' : not all control paths return a value
)
else()
target_compile_options(core PRIVATE
@@ -672,6 +669,7 @@ else()
-Werror=ignored-qualifiers
-Werror=implicit-fallthrough
-Werror=sign-compare
-Werror=shadow
$<$<CXX_COMPILER_ID:GNU>:-Werror=class-memaccess>
$<$<CXX_COMPILER_ID:GNU>:-Werror=unused-but-set-parameter>

View File

@@ -24,7 +24,7 @@ namespace Core {
class DynarmicCallbacks32 : public Dynarmic::A32::UserCallbacks {
public:
explicit DynarmicCallbacks32(ARM_Dynarmic_32& parent) : parent(parent) {}
explicit DynarmicCallbacks32(ARM_Dynarmic_32& parent_) : parent{parent_} {}
u8 MemoryRead8(u32 vaddr) override {
return parent.system.Memory().Read8(vaddr);

View File

@@ -27,7 +27,7 @@ using Vector = Dynarmic::A64::Vector;
class DynarmicCallbacks64 : public Dynarmic::A64::UserCallbacks {
public:
explicit DynarmicCallbacks64(ARM_Dynarmic_64& parent) : parent(parent) {}
explicit DynarmicCallbacks64(ARM_Dynarmic_64& parent_) : parent{parent_} {}
u8 MemoryRead8(u64 vaddr) override {
return parent.system.Memory().Read8(vaddr);

View File

@@ -18,7 +18,7 @@ class DynarmicCP15 final : public Dynarmic::A32::Coprocessor {
public:
using CoprocReg = Dynarmic::A32::CoprocReg;
explicit DynarmicCP15(ARM_Dynarmic_32& parent) : parent(parent) {}
explicit DynarmicCP15(ARM_Dynarmic_32& parent_) : parent{parent_} {}
std::optional<Callback> CompileInternalOperation(bool two, unsigned opc1, CoprocReg CRd,
CoprocReg CRn, CoprocReg CRm,

View File

@@ -9,8 +9,8 @@
namespace Core {
DynarmicExclusiveMonitor::DynarmicExclusiveMonitor(Memory::Memory& memory, std::size_t core_count)
: monitor(core_count), memory{memory} {}
DynarmicExclusiveMonitor::DynarmicExclusiveMonitor(Memory::Memory& memory_, std::size_t core_count_)
: monitor{core_count_}, memory{memory_} {}
DynarmicExclusiveMonitor::~DynarmicExclusiveMonitor() = default;

View File

@@ -22,7 +22,7 @@ namespace Core {
class DynarmicExclusiveMonitor final : public ExclusiveMonitor {
public:
explicit DynarmicExclusiveMonitor(Memory::Memory& memory, std::size_t core_count);
explicit DynarmicExclusiveMonitor(Memory::Memory& memory_, std::size_t core_count_);
~DynarmicExclusiveMonitor() override;
u8 ExclusiveRead8(std::size_t core_index, VAddr addr) override;

View File

@@ -18,7 +18,7 @@
namespace Core {
CpuManager::CpuManager(System& system) : system{system} {}
CpuManager::CpuManager(System& system_) : system{system_} {}
CpuManager::~CpuManager() = default;
void CpuManager::ThreadStart(CpuManager& cpu_manager, std::size_t core) {

View File

@@ -25,7 +25,7 @@ class System;
class CpuManager {
public:
explicit CpuManager(System& system);
explicit CpuManager(System& system_);
CpuManager(const CpuManager&) = delete;
CpuManager(CpuManager&&) = delete;

View File

@@ -10,8 +10,8 @@
namespace Core::Crypto {
CTREncryptionLayer::CTREncryptionLayer(FileSys::VirtualFile base_, Key128 key_,
std::size_t base_offset)
: EncryptionLayer(std::move(base_)), base_offset(base_offset), cipher(key_, Mode::CTR) {}
std::size_t base_offset_)
: EncryptionLayer(std::move(base_)), base_offset(base_offset_), cipher(key_, Mode::CTR) {}
std::size_t CTREncryptionLayer::Read(u8* data, std::size_t length, std::size_t offset) const {
if (length == 0)

View File

@@ -17,7 +17,7 @@ class CTREncryptionLayer : public EncryptionLayer {
public:
using IVData = std::array<u8, 16>;
CTREncryptionLayer(FileSys::VirtualFile base, Key128 key, std::size_t base_offset);
CTREncryptionLayer(FileSys::VirtualFile base_, Key128 key_, std::size_t base_offset_);
std::size_t Read(u8* data, std::size_t length, std::size_t offset) const override;

View File

@@ -458,7 +458,7 @@ static std::array<u8, size> operator^(const std::array<u8, size>& lhs,
const std::array<u8, size>& rhs) {
std::array<u8, size> out;
std::transform(lhs.begin(), lhs.end(), rhs.begin(), out.begin(),
[](u8 lhs, u8 rhs) { return u8(lhs ^ rhs); });
[](u8 lhs_elem, u8 rhs_elem) { return u8(lhs_elem ^ rhs_elem); });
return out;
}

View File

@@ -39,10 +39,10 @@ CNMT::CNMT(VirtualFile file) {
}
}
CNMT::CNMT(CNMTHeader header, OptionalHeader opt_header, std::vector<ContentRecord> content_records,
std::vector<MetaRecord> meta_records)
: header(std::move(header)), opt_header(std::move(opt_header)),
content_records(std::move(content_records)), meta_records(std::move(meta_records)) {}
CNMT::CNMT(CNMTHeader header_, OptionalHeader opt_header_,
std::vector<ContentRecord> content_records_, std::vector<MetaRecord> meta_records_)
: header(std::move(header_)), opt_header(std::move(opt_header_)),
content_records(std::move(content_records_)), meta_records(std::move(meta_records_)) {}
CNMT::~CNMT() = default;

View File

@@ -87,8 +87,8 @@ static_assert(sizeof(CNMTHeader) == 0x20, "CNMTHeader has incorrect size.");
class CNMT {
public:
explicit CNMT(VirtualFile file);
CNMT(CNMTHeader header, OptionalHeader opt_header, std::vector<ContentRecord> content_records,
std::vector<MetaRecord> meta_records);
CNMT(CNMTHeader header_, OptionalHeader opt_header_,
std::vector<ContentRecord> content_records_, std::vector<MetaRecord> meta_records_);
~CNMT();
u64 GetTitleID() const;

View File

@@ -12,6 +12,7 @@
#include "common/logging/log.h"
#include "core/crypto/key_manager.h"
#include "core/file_sys/card_image.h"
#include "core/file_sys/common_funcs.h"
#include "core/file_sys/content_archive.h"
#include "core/file_sys/nca_metadata.h"
#include "core/file_sys/registered_cache.h"
@@ -592,6 +593,12 @@ InstallResult RegisteredCache::InstallEntry(const NSP& nsp, bool overwrite_if_ex
const CNMT cnmt(cnmt_file);
const auto title_id = cnmt.GetTitleID();
const auto version = cnmt.GetTitleVersion();
if (title_id == GetBaseTitleID(title_id) && version == 0) {
return InstallResult::ErrorBaseInstall;
}
const auto result = RemoveExistingEntry(title_id);
// Install Metadata File

View File

@@ -38,6 +38,7 @@ enum class InstallResult {
ErrorAlreadyExists,
ErrorCopyFailed,
ErrorMetaFailed,
ErrorBaseInstall,
};
struct ContentProviderEntry {

View File

@@ -20,8 +20,8 @@
namespace FileSys {
NSP::NSP(VirtualFile file_, std::size_t program_index)
: file(std::move(file_)), program_index(program_index), status{Loader::ResultStatus::Success},
NSP::NSP(VirtualFile file_, std::size_t program_index_)
: file(std::move(file_)), program_index(program_index_), status{Loader::ResultStatus::Success},
pfs(std::make_shared<PartitionFilesystem>(file)), keys{Core::Crypto::KeyManager::Instance()} {
if (pfs->GetStatus() != Loader::ResultStatus::Success) {
status = pfs->GetStatus();

View File

@@ -27,7 +27,7 @@ enum class ContentRecordType : u8;
class NSP : public ReadOnlyVfsDirectory {
public:
explicit NSP(VirtualFile file, std::size_t program_index = 0);
explicit NSP(VirtualFile file_, std::size_t program_index_ = 0);
~NSP() override;
Loader::ResultStatus GetStatus() const;

View File

@@ -23,8 +23,8 @@ static bool VerifyConcatenationMapContinuity(const std::multimap<u64, VirtualFil
return map.begin()->first == 0;
}
ConcatenatedVfsFile::ConcatenatedVfsFile(std::vector<VirtualFile> files_, std::string name)
: name(std::move(name)) {
ConcatenatedVfsFile::ConcatenatedVfsFile(std::vector<VirtualFile> files_, std::string name_)
: name(std::move(name_)) {
std::size_t next_offset = 0;
for (const auto& file : files_) {
files.emplace(next_offset, file);
@@ -32,8 +32,8 @@ ConcatenatedVfsFile::ConcatenatedVfsFile(std::vector<VirtualFile> files_, std::s
}
}
ConcatenatedVfsFile::ConcatenatedVfsFile(std::multimap<u64, VirtualFile> files_, std::string name)
: files(std::move(files_)), name(std::move(name)) {
ConcatenatedVfsFile::ConcatenatedVfsFile(std::multimap<u64, VirtualFile> files_, std::string name_)
: files(std::move(files_)), name(std::move(name_)) {
ASSERT(VerifyConcatenationMapContinuity(files));
}

View File

@@ -14,8 +14,8 @@ namespace FileSys {
// Class that wraps multiple vfs files and concatenates them, making reads seamless. Currently
// read-only.
class ConcatenatedVfsFile : public VfsFile {
ConcatenatedVfsFile(std::vector<VirtualFile> files, std::string name);
ConcatenatedVfsFile(std::multimap<u64, VirtualFile> files, std::string name);
explicit ConcatenatedVfsFile(std::vector<VirtualFile> files, std::string name_);
explicit ConcatenatedVfsFile(std::multimap<u64, VirtualFile> files, std::string name_);
public:
~ConcatenatedVfsFile() override;

View File

@@ -8,8 +8,8 @@
namespace FileSys {
LayeredVfsDirectory::LayeredVfsDirectory(std::vector<VirtualDir> dirs, std::string name)
: dirs(std::move(dirs)), name(std::move(name)) {}
LayeredVfsDirectory::LayeredVfsDirectory(std::vector<VirtualDir> dirs_, std::string name_)
: dirs(std::move(dirs_)), name(std::move(name_)) {}
LayeredVfsDirectory::~LayeredVfsDirectory() = default;

View File

@@ -13,7 +13,7 @@ namespace FileSys {
// one and falling back to the one after. The highest priority directory (overwrites all others)
// should be element 0 in the dirs vector.
class LayeredVfsDirectory : public VfsDirectory {
LayeredVfsDirectory(std::vector<VirtualDir> dirs, std::string name);
explicit LayeredVfsDirectory(std::vector<VirtualDir> dirs_, std::string name_);
public:
~LayeredVfsDirectory() override;

View File

@@ -3,7 +3,16 @@
// Refer to the license.txt file included.
#include <string>
#ifdef __GNUC__
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wshadow"
#endif
#include <zip.h>
#ifdef __GNUC__
#pragma GCC diagnostic pop
#endif
#include "common/logging/backend.h"
#include "core/file_sys/vfs.h"
#include "core/file_sys/vfs_libzip.h"

View File

@@ -14,9 +14,9 @@ namespace FileSys {
class StaticVfsFile : public VfsFile {
public:
explicit StaticVfsFile(u8 value, std::size_t size = 0, std::string name = "",
VirtualDir parent = nullptr)
: value{value}, size{size}, name{std::move(name)}, parent{std::move(parent)} {}
explicit StaticVfsFile(u8 value_, std::size_t size_ = 0, std::string name_ = "",
VirtualDir parent_ = nullptr)
: value{value_}, size{size_}, name{std::move(name_)}, parent{std::move(parent_)} {}
std::string GetName() const override {
return name;

View File

@@ -7,8 +7,8 @@
#include "core/file_sys/vfs_vector.h"
namespace FileSys {
VectorVfsFile::VectorVfsFile(std::vector<u8> initial_data, std::string name, VirtualDir parent)
: data(std::move(initial_data)), parent(std::move(parent)), name(std::move(name)) {}
VectorVfsFile::VectorVfsFile(std::vector<u8> initial_data, std::string name_, VirtualDir parent_)
: data(std::move(initial_data)), parent(std::move(parent_)), name(std::move(name_)) {}
VectorVfsFile::~VectorVfsFile() = default;

View File

@@ -75,8 +75,8 @@ std::shared_ptr<ArrayVfsFile<Size>> MakeArrayFile(const std::array<u8, Size>& da
// An implementation of VfsFile that is backed by a vector optionally supplied upon construction
class VectorVfsFile : public VfsFile {
public:
explicit VectorVfsFile(std::vector<u8> initial_data = {}, std::string name = "",
VirtualDir parent = nullptr);
explicit VectorVfsFile(std::vector<u8> initial_data = {}, std::string name_ = "",
VirtualDir parent_ = nullptr);
~VectorVfsFile() override;
std::string GetName() const override;

View File

@@ -26,7 +26,7 @@ public:
private:
class Device : public Input::TouchDevice {
public:
explicit Device(std::weak_ptr<TouchState>&& touch_state) : touch_state(touch_state) {}
explicit Device(std::weak_ptr<TouchState>&& touch_state_) : touch_state(touch_state_) {}
Input::TouchStatus GetStatus() const override {
if (auto state = touch_state.lock()) {
std::lock_guard guard{state->mutex};

View File

@@ -32,7 +32,8 @@ enum class CommandType : u32 {
Control = 5,
RequestWithContext = 6,
ControlWithContext = 7,
Unspecified,
TIPC_Close = 15,
TIPC_CommandRegion = 16, // Start of TIPC commands, this is an offset.
};
struct CommandHeader {
@@ -57,6 +58,20 @@ struct CommandHeader {
BitField<10, 4, BufferDescriptorCFlag> buf_c_descriptor_flags;
BitField<31, 1, u32> enable_handle_descriptor;
};
bool IsTipc() const {
return type.Value() >= CommandType::TIPC_CommandRegion;
}
bool IsCloseCommand() const {
switch (type.Value()) {
case CommandType::Close:
case CommandType::TIPC_Close:
return true;
default:
return false;
}
}
};
static_assert(sizeof(CommandHeader) == 8, "CommandHeader size is incorrect");

View File

@@ -15,6 +15,8 @@
#include "core/hle/ipc.h"
#include "core/hle/kernel/hle_ipc.h"
#include "core/hle/kernel/k_client_port.h"
#include "core/hle/kernel/k_process.h"
#include "core/hle/kernel/k_resource_limit.h"
#include "core/hle/kernel/k_session.h"
#include "core/hle/result.h"
@@ -26,19 +28,19 @@ class RequestHelperBase {
protected:
Kernel::HLERequestContext* context = nullptr;
u32* cmdbuf;
ptrdiff_t index = 0;
u32 index = 0;
public:
explicit RequestHelperBase(u32* command_buffer) : cmdbuf(command_buffer) {}
explicit RequestHelperBase(Kernel::HLERequestContext& context)
: context(&context), cmdbuf(context.CommandBuffer()) {}
explicit RequestHelperBase(Kernel::HLERequestContext& ctx)
: context(&ctx), cmdbuf(ctx.CommandBuffer()) {}
void Skip(u32 size_in_words, bool set_to_null) {
if (set_to_null) {
memset(cmdbuf + index, 0, size_in_words * sizeof(u32));
}
index += static_cast<ptrdiff_t>(size_in_words);
index += size_in_words;
}
/**
@@ -51,11 +53,11 @@ public:
}
u32 GetCurrentOffset() const {
return static_cast<u32>(index);
return index;
}
void SetCurrentOffset(u32 offset) {
index = static_cast<ptrdiff_t>(offset);
index = offset;
}
};
@@ -69,64 +71,79 @@ public:
AlwaysMoveHandles = 1,
};
explicit ResponseBuilder(Kernel::HLERequestContext& context, u32 normal_params_size,
u32 num_handles_to_copy = 0, u32 num_objects_to_move = 0,
explicit ResponseBuilder(Kernel::HLERequestContext& ctx, u32 normal_params_size_,
u32 num_handles_to_copy_ = 0, u32 num_objects_to_move_ = 0,
Flags flags = Flags::None)
: RequestHelperBase(context), normal_params_size(normal_params_size),
num_handles_to_copy(num_handles_to_copy),
num_objects_to_move(num_objects_to_move), kernel{context.kernel} {
: RequestHelperBase(ctx), normal_params_size(normal_params_size_),
num_handles_to_copy(num_handles_to_copy_),
num_objects_to_move(num_objects_to_move_), kernel{ctx.kernel} {
memset(cmdbuf, 0, sizeof(u32) * IPC::COMMAND_BUFFER_LENGTH);
context.ClearIncomingObjects();
ctx.ClearIncomingObjects();
IPC::CommandHeader header{};
// The entire size of the raw data section in u32 units, including the 16 bytes of mandatory
// padding.
u64 raw_data_size = sizeof(IPC::DataPayloadHeader) / 4 + 4 + normal_params_size;
u32 raw_data_size = ctx.IsTipc()
? normal_params_size - 1
: sizeof(IPC::DataPayloadHeader) / 4 + 4 + normal_params_size;
u32 num_handles_to_move{};
u32 num_domain_objects{};
const bool always_move_handles{
(static_cast<u32>(flags) & static_cast<u32>(Flags::AlwaysMoveHandles)) != 0};
if (!context.Session()->IsDomain() || always_move_handles) {
if (!ctx.Session()->IsDomain() || always_move_handles) {
num_handles_to_move = num_objects_to_move;
} else {
num_domain_objects = num_objects_to_move;
}
if (context.Session()->IsDomain()) {
raw_data_size += sizeof(DomainMessageHeader) / 4 + num_domain_objects;
if (ctx.Session()->IsDomain()) {
raw_data_size += static_cast<u32>(sizeof(DomainMessageHeader) / 4 + num_domain_objects);
}
if (ctx.IsTipc()) {
header.type.Assign(ctx.GetCommandType());
}
ctx.data_size = static_cast<u32>(raw_data_size);
header.data_size.Assign(static_cast<u32>(raw_data_size));
if (num_handles_to_copy || num_handles_to_move) {
if (num_handles_to_copy != 0 || num_handles_to_move != 0) {
header.enable_handle_descriptor.Assign(1);
}
PushRaw(header);
if (header.enable_handle_descriptor) {
IPC::HandleDescriptorHeader handle_descriptor_header{};
handle_descriptor_header.num_handles_to_copy.Assign(num_handles_to_copy);
handle_descriptor_header.num_handles_to_copy.Assign(num_handles_to_copy_);
handle_descriptor_header.num_handles_to_move.Assign(num_handles_to_move);
PushRaw(handle_descriptor_header);
ctx.handles_offset = index;
Skip(num_handles_to_copy + num_handles_to_move, true);
}
AlignWithPadding();
if (!ctx.IsTipc()) {
AlignWithPadding();
if (context.Session()->IsDomain() && context.HasDomainMessageHeader()) {
IPC::DomainMessageHeader domain_header{};
domain_header.num_objects = num_domain_objects;
PushRaw(domain_header);
if (ctx.Session()->IsDomain() && ctx.HasDomainMessageHeader()) {
IPC::DomainMessageHeader domain_header{};
domain_header.num_objects = num_domain_objects;
PushRaw(domain_header);
}
IPC::DataPayloadHeader data_payload_header{};
data_payload_header.magic = Common::MakeMagic('S', 'F', 'C', 'O');
PushRaw(data_payload_header);
}
IPC::DataPayloadHeader data_payload_header{};
data_payload_header.magic = Common::MakeMagic('S', 'F', 'C', 'O');
PushRaw(data_payload_header);
data_payload_index = index;
datapayload_index = index;
ctx.data_payload_offset = index;
ctx.domain_offset = index + raw_data_size / 4;
}
template <class T>
@@ -134,6 +151,9 @@ public:
if (context->Session()->IsDomain()) {
context->AddDomainObject(std::move(iface));
} else {
// kernel.CurrentProcess()->GetResourceLimit()->Reserve(
// Kernel::LimitableResource::Sessions, 1);
auto* session = Kernel::KSession::Create(kernel);
session->Initialize(nullptr, iface->GetServiceName());
@@ -152,7 +172,7 @@ public:
const std::size_t num_move_objects = context->NumMoveObjects();
ASSERT_MSG(!num_domain_objects || !num_move_objects,
"cannot move normal handles and domain objects");
ASSERT_MSG((index - datapayload_index) == normal_params_size,
ASSERT_MSG((index - data_payload_index) == normal_params_size,
"normal_params_size value is incorrect");
ASSERT_MSG((num_domain_objects + num_move_objects) == num_objects_to_move,
"num_objects_to_move value is incorrect");
@@ -229,14 +249,14 @@ private:
u32 normal_params_size{};
u32 num_handles_to_copy{};
u32 num_objects_to_move{}; ///< Domain objects or move handles, context dependent
std::ptrdiff_t datapayload_index{};
u32 data_payload_index{};
Kernel::KernelCore& kernel;
};
/// Push ///
inline void ResponseBuilder::PushImpl(s32 value) {
cmdbuf[index++] = static_cast<u32>(value);
cmdbuf[index++] = value;
}
inline void ResponseBuilder::PushImpl(u32 value) {
@@ -341,9 +361,9 @@ class RequestParser : public RequestHelperBase {
public:
explicit RequestParser(u32* command_buffer) : RequestHelperBase(command_buffer) {}
explicit RequestParser(Kernel::HLERequestContext& context) : RequestHelperBase(context) {
ASSERT_MSG(context.GetDataPayloadOffset(), "context is incomplete");
Skip(context.GetDataPayloadOffset(), false);
explicit RequestParser(Kernel::HLERequestContext& ctx) : RequestHelperBase(ctx) {
ASSERT_MSG(ctx.GetDataPayloadOffset(), "context is incomplete");
Skip(ctx.GetDataPayloadOffset(), false);
// Skip the u64 command id, it's already stored in the context
static constexpr u32 CommandIdSize = 2;
Skip(CommandIdSize, false);

View File

@@ -12,8 +12,8 @@
namespace Kernel {
GlobalSchedulerContext::GlobalSchedulerContext(KernelCore& kernel)
: kernel{kernel}, scheduler_lock{kernel} {}
GlobalSchedulerContext::GlobalSchedulerContext(KernelCore& kernel_)
: kernel{kernel_}, scheduler_lock{kernel_} {}
GlobalSchedulerContext::~GlobalSchedulerContext() = default;

View File

@@ -34,7 +34,7 @@ class GlobalSchedulerContext final {
public:
using LockType = KAbstractSchedulerLock<KScheduler>;
explicit GlobalSchedulerContext(KernelCore& kernel);
explicit GlobalSchedulerContext(KernelCore& kernel_);
~GlobalSchedulerContext();
/// Adds a new thread to the scheduler

View File

@@ -55,7 +55,7 @@ void HLERequestContext::ParseCommandBuffer(const KHandleTable& handle_table, u32
IPC::RequestParser rp(src_cmdbuf);
command_header = rp.PopRaw<IPC::CommandHeader>();
if (command_header->type == IPC::CommandType::Close) {
if (command_header->IsCloseCommand()) {
// Close does not populate the rest of the IPC header
return;
}
@@ -99,39 +99,43 @@ void HLERequestContext::ParseCommandBuffer(const KHandleTable& handle_table, u32
buffer_w_desciptors.push_back(rp.PopRaw<IPC::BufferDescriptorABW>());
}
buffer_c_offset = rp.GetCurrentOffset() + command_header->data_size;
const auto buffer_c_offset = rp.GetCurrentOffset() + command_header->data_size;
// Padding to align to 16 bytes
rp.AlignWithPadding();
if (!command_header->IsTipc()) {
// Padding to align to 16 bytes
rp.AlignWithPadding();
if (Session()->IsDomain() && ((command_header->type == IPC::CommandType::Request ||
command_header->type == IPC::CommandType::RequestWithContext) ||
!incoming)) {
// If this is an incoming message, only CommandType "Request" has a domain header
// All outgoing domain messages have the domain header, if only incoming has it
if (incoming || domain_message_header) {
domain_message_header = rp.PopRaw<IPC::DomainMessageHeader>();
} else {
if (Session()->IsDomain()) {
LOG_WARNING(IPC, "Domain request has no DomainMessageHeader!");
if (Session()->IsDomain() &&
((command_header->type == IPC::CommandType::Request ||
command_header->type == IPC::CommandType::RequestWithContext) ||
!incoming)) {
// If this is an incoming message, only CommandType "Request" has a domain header
// All outgoing domain messages have the domain header, if only incoming has it
if (incoming || domain_message_header) {
domain_message_header = rp.PopRaw<IPC::DomainMessageHeader>();
} else {
if (Session()->IsDomain()) {
LOG_WARNING(IPC, "Domain request has no DomainMessageHeader!");
}
}
}
}
data_payload_header = rp.PopRaw<IPC::DataPayloadHeader>();
data_payload_header = rp.PopRaw<IPC::DataPayloadHeader>();
data_payload_offset = rp.GetCurrentOffset();
data_payload_offset = rp.GetCurrentOffset();
if (domain_message_header && domain_message_header->command ==
IPC::DomainMessageHeader::CommandType::CloseVirtualHandle) {
// CloseVirtualHandle command does not have SFC* or any data
return;
}
if (domain_message_header &&
domain_message_header->command ==
IPC::DomainMessageHeader::CommandType::CloseVirtualHandle) {
// CloseVirtualHandle command does not have SFC* or any data
return;
}
if (incoming) {
ASSERT(data_payload_header->magic == Common::MakeMagic('S', 'F', 'C', 'I'));
} else {
ASSERT(data_payload_header->magic == Common::MakeMagic('S', 'F', 'C', 'O'));
if (incoming) {
ASSERT(data_payload_header->magic == Common::MakeMagic('S', 'F', 'C', 'I'));
} else {
ASSERT(data_payload_header->magic == Common::MakeMagic('S', 'F', 'C', 'O'));
}
}
rp.SetCurrentOffset(buffer_c_offset);
@@ -166,84 +170,67 @@ void HLERequestContext::ParseCommandBuffer(const KHandleTable& handle_table, u32
ResultCode HLERequestContext::PopulateFromIncomingCommandBuffer(const KHandleTable& handle_table,
u32_le* src_cmdbuf) {
ParseCommandBuffer(handle_table, src_cmdbuf, true);
if (command_header->type == IPC::CommandType::Close) {
if (command_header->IsCloseCommand()) {
// Close does not populate the rest of the IPC header
return RESULT_SUCCESS;
}
// The data_size already includes the payload header, the padding and the domain header.
std::size_t size = data_payload_offset + command_header->data_size -
sizeof(IPC::DataPayloadHeader) / sizeof(u32) - 4;
if (domain_message_header)
size -= sizeof(IPC::DomainMessageHeader) / sizeof(u32);
std::copy_n(src_cmdbuf, size, cmd_buf.begin());
std::copy_n(src_cmdbuf, IPC::COMMAND_BUFFER_LENGTH, cmd_buf.begin());
return RESULT_SUCCESS;
}
ResultCode HLERequestContext::WriteToOutgoingCommandBuffer(KThread& thread) {
auto& owner_process = *thread.GetOwnerProcess();
ResultCode HLERequestContext::WriteToOutgoingCommandBuffer(KThread& requesting_thread) {
auto current_offset = handles_offset;
auto& owner_process = *requesting_thread.GetOwnerProcess();
auto& handle_table = owner_process.GetHandleTable();
std::array<u32, IPC::COMMAND_BUFFER_LENGTH> dst_cmdbuf;
memory.ReadBlock(owner_process, thread.GetTLSAddress(), dst_cmdbuf.data(),
dst_cmdbuf.size() * sizeof(u32));
// The header was already built in the internal command buffer. Attempt to parse it to verify
// the integrity and then copy it over to the target command buffer.
ParseCommandBuffer(handle_table, cmd_buf.data(), false);
// The data_size already includes the payload header, the padding and the domain header.
std::size_t size = data_payload_offset + command_header->data_size -
sizeof(IPC::DataPayloadHeader) / sizeof(u32) - 4;
if (domain_message_header)
size -= sizeof(IPC::DomainMessageHeader) / sizeof(u32);
std::size_t size{};
std::copy_n(cmd_buf.begin(), size, dst_cmdbuf.data());
if (command_header->enable_handle_descriptor) {
ASSERT_MSG(!move_objects.empty() || !copy_objects.empty(),
"Handle descriptor bit set but no handles to translate");
// We write the translated handles at a specific offset in the command buffer, this space
// was already reserved when writing the header.
std::size_t current_offset =
(sizeof(IPC::CommandHeader) + sizeof(IPC::HandleDescriptorHeader)) / sizeof(u32);
ASSERT_MSG(!handle_descriptor_header->send_current_pid, "Sending PID is not implemented");
ASSERT(copy_objects.size() == handle_descriptor_header->num_handles_to_copy);
ASSERT(move_objects.size() == handle_descriptor_header->num_handles_to_move);
// We don't make a distinction between copy and move handles when translating since HLE
// services don't deal with handles directly. However, the guest applications might check
// for specific values in each of these descriptors.
for (auto& object : copy_objects) {
ASSERT(object != nullptr);
R_TRY(handle_table.Add(&dst_cmdbuf[current_offset++], object));
}
for (auto& object : move_objects) {
ASSERT(object != nullptr);
R_TRY(handle_table.Add(&dst_cmdbuf[current_offset++], object));
if (IsTipc()) {
size = cmd_buf.size();
} else {
size = data_payload_offset + data_size - sizeof(IPC::DataPayloadHeader) / sizeof(u32) - 4;
if (Session()->IsDomain()) {
size -= sizeof(IPC::DomainMessageHeader) / sizeof(u32);
}
}
// TODO(Subv): Translate the X/A/B/W buffers.
for (auto& object : copy_objects) {
Handle handle{};
if (object) {
R_TRY(handle_table.Add(&handle, object));
}
cmd_buf[current_offset++] = handle;
}
for (auto& object : move_objects) {
Handle handle{};
if (object) {
R_TRY(handle_table.Add(&handle, object));
if (Session()->IsDomain() && domain_message_header) {
ASSERT(domain_message_header->num_objects == domain_objects.size());
// Write the domain objects to the command buffer, these go after the raw untranslated data.
// TODO(Subv): This completely ignores C buffers.
std::size_t domain_offset = size - domain_message_header->num_objects;
// Close our reference to the object, as it is being moved to the caller.
object->Close();
}
cmd_buf[current_offset++] = handle;
}
// Write the domain objects to the command buffer, these go after the raw untranslated data.
// TODO(Subv): This completely ignores C buffers.
if (Session()->IsDomain()) {
current_offset = domain_offset - static_cast<u32>(domain_objects.size());
for (const auto& object : domain_objects) {
server_session->AppendDomainRequestHandler(object);
dst_cmdbuf[domain_offset++] =
cmd_buf[current_offset++] =
static_cast<u32_le>(server_session->NumDomainRequestHandlers());
}
}
// Copy the translated command buffer back into the thread's command buffer area.
memory.WriteBlock(owner_process, thread.GetTLSAddress(), dst_cmdbuf.data(),
dst_cmdbuf.size() * sizeof(u32));
memory.WriteBlock(owner_process, requesting_thread.GetTLSAddress(), cmd_buf.data(),
size * sizeof(u32));
return RESULT_SUCCESS;
}

View File

@@ -66,7 +66,8 @@ public:
* this request (ServerSession, Originator thread, Translated command buffer, etc).
* @returns ResultCode the result code of the translate operation.
*/
virtual ResultCode HandleSyncRequest(Kernel::HLERequestContext& context) = 0;
virtual ResultCode HandleSyncRequest(Kernel::KServerSession& session,
Kernel::HLERequestContext& context) = 0;
/**
* Signals that a client has just connected to this HLE handler and keeps the
@@ -126,17 +127,30 @@ public:
u32_le* src_cmdbuf);
/// Writes data from this context back to the requesting process/thread.
ResultCode WriteToOutgoingCommandBuffer(KThread& thread);
ResultCode WriteToOutgoingCommandBuffer(KThread& requesting_thread);
u32_le GetHipcCommand() const {
return command;
}
u32_le GetTipcCommand() const {
return static_cast<u32_le>(command_header->type.Value()) -
static_cast<u32_le>(IPC::CommandType::TIPC_CommandRegion);
}
u32_le GetCommand() const {
return command;
return command_header->IsTipc() ? GetTipcCommand() : GetHipcCommand();
}
bool IsTipc() const {
return command_header->IsTipc();
}
IPC::CommandType GetCommandType() const {
return command_header->type;
}
unsigned GetDataPayloadOffset() const {
u32 GetDataPayloadOffset() const {
return data_payload_offset;
}
@@ -291,8 +305,10 @@ private:
std::vector<IPC::BufferDescriptorABW> buffer_w_desciptors;
std::vector<IPC::BufferDescriptorC> buffer_c_desciptors;
unsigned data_payload_offset{};
unsigned buffer_c_offset{};
u32 data_payload_offset{};
u32 handles_offset{};
u32 domain_offset{};
u32 data_size{};
u32_le command{};
std::vector<std::shared_ptr<SessionRequestHandler>> domain_request_handlers;

View File

@@ -177,7 +177,7 @@ class KAutoObjectWithListContainer;
class KAutoObjectWithList : public KAutoObject {
public:
explicit KAutoObjectWithList(KernelCore& kernel_) : KAutoObject(kernel_), kernel(kernel_) {}
explicit KAutoObjectWithList(KernelCore& kernel_) : KAutoObject(kernel_) {}
static int Compare(const KAutoObjectWithList& lhs, const KAutoObjectWithList& rhs) {
const u64 lid = lhs.GetId();
@@ -204,11 +204,7 @@ public:
private:
friend class KAutoObjectWithListContainer;
private:
Common::IntrusiveRedBlackTreeNode list_node;
protected:
KernelCore& kernel;
};
template <typename T>

View File

@@ -13,7 +13,7 @@
namespace Kernel {
KClientPort::KClientPort(KernelCore& kernel) : KSynchronizationObject{kernel} {}
KClientPort::KClientPort(KernelCore& kernel_) : KSynchronizationObject{kernel_} {}
KClientPort::~KClientPort() = default;
void KClientPort::Initialize(KPort* parent_, s32 max_sessions_, std::string&& name_) {
@@ -58,9 +58,9 @@ bool KClientPort::IsSignaled() const {
ResultCode KClientPort::CreateSession(KClientSession** out) {
// Reserve a new session from the resource limit.
KScopedResourceReservation session_reservation(kernel.CurrentProcess()->GetResourceLimit(),
LimitableResource::Sessions);
R_UNLESS(session_reservation.Succeeded(), ResultLimitReached);
// KScopedResourceReservation session_reservation(kernel.CurrentProcess()->GetResourceLimit(),
// LimitableResource::Sessions);
// R_UNLESS(session_reservation.Succeeded(), ResultLimitReached);
// Update the session counts.
{
@@ -91,7 +91,7 @@ ResultCode KClientPort::CreateSession(KClientSession** out) {
// Create a new session.
KSession* session = KSession::Create(kernel);
if (session == nullptr) {
/* Decrement the session count. */
// Decrement the session count.
const auto prev = num_sessions--;
if (prev == max_sessions) {
this->NotifyAvailable();
@@ -104,7 +104,7 @@ ResultCode KClientPort::CreateSession(KClientSession** out) {
session->Initialize(this, parent->GetName());
// Commit the session reservation.
session_reservation.Commit();
// session_reservation.Commit();
// Register the session.
KSession::Register(kernel, session);

View File

@@ -21,7 +21,7 @@ class KClientPort final : public KSynchronizationObject {
KERNEL_AUTOOBJECT_TRAITS(KClientPort, KSynchronizationObject);
public:
explicit KClientPort(KernelCore& kernel);
explicit KClientPort(KernelCore& kernel_);
virtual ~KClientPort() override;
void Initialize(KPort* parent_, s32 max_sessions_, std::string&& name_);

View File

@@ -12,7 +12,8 @@
namespace Kernel {
KClientSession::KClientSession(KernelCore& kernel) : KAutoObjectWithSlabHeapAndContainer{kernel} {}
KClientSession::KClientSession(KernelCore& kernel_)
: KAutoObjectWithSlabHeapAndContainer{kernel_} {}
KClientSession::~KClientSession() = default;
void KClientSession::Destroy() {

View File

@@ -33,7 +33,7 @@ class KClientSession final
KERNEL_AUTOOBJECT_TRAITS(KClientSession, KAutoObject);
public:
explicit KClientSession(KernelCore& kernel);
explicit KClientSession(KernelCore& kernel_);
virtual ~KClientSession();
void Initialize(KSession* parent_, std::string&& name_) {

View File

@@ -254,8 +254,7 @@ void KConditionVariable::Signal(u64 cv_key, s32 count) {
}
// Close threads in the list.
for (auto it = thread_list.begin(); it != thread_list.end();
it = thread_list.erase(kernel, it)) {
for (auto it = thread_list.begin(); it != thread_list.end(); it = thread_list.erase(it)) {
(*it).Close();
}
}

View File

@@ -8,8 +8,9 @@
namespace Kernel {
KEvent::KEvent(KernelCore& kernel)
: KAutoObjectWithSlabHeapAndContainer{kernel}, readable_event{kernel}, writable_event{kernel} {}
KEvent::KEvent(KernelCore& kernel_)
: KAutoObjectWithSlabHeapAndContainer{kernel_}, readable_event{kernel_}, writable_event{
kernel_} {}
KEvent::~KEvent() = default;

View File

@@ -19,7 +19,7 @@ class KEvent final : public KAutoObjectWithSlabHeapAndContainer<KEvent, KAutoObj
KERNEL_AUTOOBJECT_TRAITS(KEvent, KAutoObject);
public:
explicit KEvent(KernelCore& kernel);
explicit KEvent(KernelCore& kernel_);
virtual ~KEvent();
void Initialize(std::string&& name);

View File

@@ -18,7 +18,8 @@ class KernelCore;
class KLightConditionVariable {
public:
explicit KLightConditionVariable(KernelCore& kernel) : thread_queue(kernel), kernel(kernel) {}
explicit KLightConditionVariable(KernelCore& kernel_)
: thread_queue(kernel_), kernel(kernel_) {}
void Wait(KLightLock* lock, s64 timeout = -1) {
WaitImpl(lock, timeout);

View File

@@ -124,7 +124,7 @@ public:
~KLinkedList() {
// Erase all elements.
for (auto it = this->begin(); it != this->end(); it = this->erase(kernel, it)) {
for (auto it = begin(); it != end(); it = erase(it)) {
}
// Ensure we succeeded.
@@ -201,10 +201,10 @@ public:
}
iterator insert(const_iterator pos, reference ref) {
KLinkedListNode* node = KLinkedListNode::Allocate(kernel);
ASSERT(node != nullptr);
node->Initialize(std::addressof(ref));
return iterator(BaseList::insert(pos.m_base_it, *node));
KLinkedListNode* new_node = KLinkedListNode::Allocate(kernel);
ASSERT(new_node != nullptr);
new_node->Initialize(std::addressof(ref));
return iterator(BaseList::insert(pos.m_base_it, *new_node));
}
void push_back(reference ref) {
@@ -223,7 +223,7 @@ public:
this->erase(this->begin());
}
iterator erase(KernelCore& kernel, const iterator pos) {
iterator erase(const iterator pos) {
KLinkedListNode* freed_node = std::addressof(*pos.m_base_it);
iterator ret = iterator(BaseList::erase(pos.m_base_it));
KLinkedListNode::Free(kernel, freed_node);

View File

@@ -7,8 +7,8 @@
namespace Kernel {
KMemoryBlockManager::KMemoryBlockManager(VAddr start_addr, VAddr end_addr)
: start_addr{start_addr}, end_addr{end_addr} {
KMemoryBlockManager::KMemoryBlockManager(VAddr start_addr_, VAddr end_addr_)
: start_addr{start_addr_}, end_addr{end_addr_} {
const u64 num_pages{(end_addr - start_addr) / PageSize};
memory_block_tree.emplace_back(start_addr, num_pages, KMemoryState::Free,
KMemoryPermission::None, KMemoryAttribute::None);
@@ -17,8 +17,8 @@ KMemoryBlockManager::KMemoryBlockManager(VAddr start_addr, VAddr end_addr)
KMemoryBlockManager::iterator KMemoryBlockManager::FindIterator(VAddr addr) {
auto node{memory_block_tree.begin()};
while (node != end()) {
const VAddr end_addr{node->GetNumPages() * PageSize + node->GetAddress()};
if (node->GetAddress() <= addr && end_addr - 1 >= addr) {
const VAddr node_end_addr{node->GetNumPages() * PageSize + node->GetAddress()};
if (node->GetAddress() <= addr && node_end_addr - 1 >= addr) {
return node;
}
node = std::next(node);
@@ -67,7 +67,7 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
KMemoryPermission prev_perm, KMemoryAttribute prev_attribute,
KMemoryState state, KMemoryPermission perm,
KMemoryAttribute attribute) {
const VAddr end_addr{addr + num_pages * PageSize};
const VAddr update_end_addr{addr + num_pages * PageSize};
iterator node{memory_block_tree.begin()};
prev_attribute |= KMemoryAttribute::IpcAndDeviceMapped;
@@ -78,7 +78,7 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
const VAddr cur_addr{block->GetAddress()};
const VAddr cur_end_addr{block->GetNumPages() * PageSize + cur_addr};
if (addr < cur_end_addr && cur_addr < end_addr) {
if (addr < cur_end_addr && cur_addr < update_end_addr) {
if (!block->HasProperties(prev_state, prev_perm, prev_attribute)) {
node = next_node;
continue;
@@ -89,8 +89,8 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
memory_block_tree.insert(node, block->Split(addr));
}
if (end_addr < cur_end_addr) {
new_node = memory_block_tree.insert(node, block->Split(end_addr));
if (update_end_addr < cur_end_addr) {
new_node = memory_block_tree.insert(node, block->Split(update_end_addr));
}
new_node->Update(state, perm, attribute);
@@ -98,7 +98,7 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
MergeAdjacent(new_node, next_node);
}
if (cur_end_addr - 1 >= end_addr - 1) {
if (cur_end_addr - 1 >= update_end_addr - 1) {
break;
}
@@ -108,7 +108,7 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState state,
KMemoryPermission perm, KMemoryAttribute attribute) {
const VAddr end_addr{addr + num_pages * PageSize};
const VAddr update_end_addr{addr + num_pages * PageSize};
iterator node{memory_block_tree.begin()};
while (node != memory_block_tree.end()) {
@@ -117,15 +117,15 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
const VAddr cur_addr{block->GetAddress()};
const VAddr cur_end_addr{block->GetNumPages() * PageSize + cur_addr};
if (addr < cur_end_addr && cur_addr < end_addr) {
if (addr < cur_end_addr && cur_addr < update_end_addr) {
iterator new_node{node};
if (addr > cur_addr) {
memory_block_tree.insert(node, block->Split(addr));
}
if (end_addr < cur_end_addr) {
new_node = memory_block_tree.insert(node, block->Split(end_addr));
if (update_end_addr < cur_end_addr) {
new_node = memory_block_tree.insert(node, block->Split(update_end_addr));
}
new_node->Update(state, perm, attribute);
@@ -133,7 +133,7 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
MergeAdjacent(new_node, next_node);
}
if (cur_end_addr - 1 >= end_addr - 1) {
if (cur_end_addr - 1 >= update_end_addr - 1) {
break;
}
@@ -143,7 +143,7 @@ void KMemoryBlockManager::Update(VAddr addr, std::size_t num_pages, KMemoryState
void KMemoryBlockManager::UpdateLock(VAddr addr, std::size_t num_pages, LockFunc&& lock_func,
KMemoryPermission perm) {
const VAddr end_addr{addr + num_pages * PageSize};
const VAddr update_end_addr{addr + num_pages * PageSize};
iterator node{memory_block_tree.begin()};
while (node != memory_block_tree.end()) {
@@ -152,15 +152,15 @@ void KMemoryBlockManager::UpdateLock(VAddr addr, std::size_t num_pages, LockFunc
const VAddr cur_addr{block->GetAddress()};
const VAddr cur_end_addr{block->GetNumPages() * PageSize + cur_addr};
if (addr < cur_end_addr && cur_addr < end_addr) {
if (addr < cur_end_addr && cur_addr < update_end_addr) {
iterator new_node{node};
if (addr > cur_addr) {
memory_block_tree.insert(node, block->Split(addr));
}
if (end_addr < cur_end_addr) {
new_node = memory_block_tree.insert(node, block->Split(end_addr));
if (update_end_addr < cur_end_addr) {
new_node = memory_block_tree.insert(node, block->Split(update_end_addr));
}
lock_func(new_node, perm);
@@ -168,7 +168,7 @@ void KMemoryBlockManager::UpdateLock(VAddr addr, std::size_t num_pages, LockFunc
MergeAdjacent(new_node, next_node);
}
if (cur_end_addr - 1 >= end_addr - 1) {
if (cur_end_addr - 1 >= update_end_addr - 1) {
break;
}

View File

@@ -19,7 +19,7 @@ public:
using const_iterator = MemoryBlockTree::const_iterator;
public:
KMemoryBlockManager(VAddr start_addr, VAddr end_addr);
KMemoryBlockManager(VAddr start_addr_, VAddr end_addr_);
iterator end() {
return memory_block_tree.end();

View File

@@ -82,9 +82,9 @@ public:
type_id = type;
}
constexpr bool Contains(u64 address) const {
constexpr bool Contains(u64 addr) const {
ASSERT(this->GetEndAddress() != 0);
return this->GetAddress() <= address && address <= this->GetLastAddress();
return this->GetAddress() <= addr && addr <= this->GetLastAddress();
}
constexpr bool IsDerivedFrom(u32 type) const {

View File

@@ -17,7 +17,7 @@ class KPageLinkedList final {
public:
class Node final {
public:
constexpr Node(u64 addr, std::size_t num_pages) : addr{addr}, num_pages{num_pages} {}
constexpr Node(u64 addr_, std::size_t num_pages_) : addr{addr_}, num_pages{num_pages_} {}
constexpr u64 GetAddress() const {
return addr;

View File

@@ -58,7 +58,7 @@ constexpr std::size_t GetSizeInRange(const KMemoryInfo& info, VAddr start, VAddr
} // namespace
KPageTable::KPageTable(Core::System& system) : system{system} {}
KPageTable::KPageTable(Core::System& system_) : system{system_} {}
ResultCode KPageTable::InitializeForProcess(FileSys::ProgramAddressSpaceType as_type,
bool enable_aslr, VAddr code_addr,
@@ -906,8 +906,8 @@ ResultCode KPageTable::LockForDeviceAddressSpace(VAddr addr, std::size_t size) {
block_manager->UpdateLock(
addr, size / PageSize,
[](KMemoryBlockManager::iterator block, KMemoryPermission perm) {
block->ShareToDevice(perm);
[](KMemoryBlockManager::iterator block, KMemoryPermission permission) {
block->ShareToDevice(permission);
},
perm);
@@ -929,8 +929,8 @@ ResultCode KPageTable::UnlockForDeviceAddressSpace(VAddr addr, std::size_t size)
block_manager->UpdateLock(
addr, size / PageSize,
[](KMemoryBlockManager::iterator block, KMemoryPermission perm) {
block->UnshareToDevice(perm);
[](KMemoryBlockManager::iterator block, KMemoryPermission permission) {
block->UnshareToDevice(permission);
},
perm);

View File

@@ -24,7 +24,7 @@ class KMemoryBlockManager;
class KPageTable final : NonCopyable {
public:
explicit KPageTable(Core::System& system);
explicit KPageTable(Core::System& system_);
ResultCode InitializeForProcess(FileSys::ProgramAddressSpaceType as_type, bool enable_aslr,
VAddr code_addr, std::size_t code_size,

View File

@@ -9,8 +9,8 @@
namespace Kernel {
KPort::KPort(KernelCore& kernel)
: KAutoObjectWithSlabHeapAndContainer{kernel}, server{kernel}, client{kernel} {}
KPort::KPort(KernelCore& kernel_)
: KAutoObjectWithSlabHeapAndContainer{kernel_}, server{kernel_}, client{kernel_} {}
KPort::~KPort() = default;

View File

@@ -21,7 +21,7 @@ class KPort final : public KAutoObjectWithSlabHeapAndContainer<KPort, KAutoObjec
KERNEL_AUTOOBJECT_TRAITS(KPort, KAutoObject);
public:
explicit KPort(KernelCore& kernel);
explicit KPort(KernelCore& kernel_);
virtual ~KPort();
static void PostDestroy([[maybe_unused]] uintptr_t arg) {}

View File

@@ -118,11 +118,11 @@ private:
std::bitset<num_slot_entries> is_slot_used;
};
ResultCode KProcess::Initialize(KProcess* process, Core::System& system, std::string name,
ResultCode KProcess::Initialize(KProcess* process, Core::System& system, std::string process_name,
ProcessType type) {
auto& kernel = system.Kernel();
process->name = std::move(name);
process->name = std::move(process_name);
process->resource_limit = kernel.GetSystemResourceLimit();
process->status = ProcessStatus::Created;
@@ -373,8 +373,8 @@ void KProcess::Run(s32 main_thread_priority, u64 stack_size) {
void KProcess::PrepareForTermination() {
ChangeStatus(ProcessStatus::Exiting);
const auto stop_threads = [this](const std::vector<KThread*>& thread_list) {
for (auto& thread : thread_list) {
const auto stop_threads = [this](const std::vector<KThread*>& in_thread_list) {
for (auto& thread : in_thread_list) {
if (thread->GetOwnerProcess() != this)
continue;
@@ -491,10 +491,10 @@ bool KProcess::IsSignaled() const {
return is_signaled;
}
KProcess::KProcess(KernelCore& kernel)
: KAutoObjectWithSlabHeapAndContainer{kernel},
page_table{std::make_unique<KPageTable>(kernel.System())}, handle_table{kernel},
address_arbiter{kernel.System()}, condition_var{kernel.System()}, state_lock{kernel} {}
KProcess::KProcess(KernelCore& kernel_)
: KAutoObjectWithSlabHeapAndContainer{kernel_},
page_table{std::make_unique<KPageTable>(kernel_.System())}, handle_table{kernel_},
address_arbiter{kernel_.System()}, condition_var{kernel_.System()}, state_lock{kernel_} {}
KProcess::~KProcess() = default;

View File

@@ -67,7 +67,7 @@ class KProcess final
KERNEL_AUTOOBJECT_TRAITS(KProcess, KSynchronizationObject);
public:
explicit KProcess(KernelCore& kernel);
explicit KProcess(KernelCore& kernel_);
~KProcess() override;
enum : u64 {
@@ -90,7 +90,7 @@ public:
static constexpr std::size_t RANDOM_ENTROPY_SIZE = 4;
static ResultCode Initialize(KProcess* process, Core::System& system, std::string name,
static ResultCode Initialize(KProcess* process, Core::System& system, std::string process_name,
ProcessType type);
/// Gets a reference to the process' page table.

View File

@@ -12,7 +12,7 @@
namespace Kernel {
KReadableEvent::KReadableEvent(KernelCore& kernel) : KSynchronizationObject{kernel} {}
KReadableEvent::KReadableEvent(KernelCore& kernel_) : KSynchronizationObject{kernel_} {}
KReadableEvent::~KReadableEvent() = default;

View File

@@ -18,7 +18,7 @@ class KReadableEvent : public KSynchronizationObject {
KERNEL_AUTOOBJECT_TRAITS(KReadableEvent, KSynchronizationObject);
public:
explicit KReadableEvent(KernelCore& kernel);
explicit KReadableEvent(KernelCore& kernel_);
~KReadableEvent() override;
void Initialize(KEvent* parent_, std::string&& name_) {

View File

@@ -10,8 +10,8 @@
namespace Kernel {
constexpr s64 DefaultTimeout = 10000000000; // 10 seconds
KResourceLimit::KResourceLimit(KernelCore& kernel)
: KAutoObjectWithSlabHeapAndContainer{kernel}, lock{kernel}, cond_var{kernel} {}
KResourceLimit::KResourceLimit(KernelCore& kernel_)
: KAutoObjectWithSlabHeapAndContainer{kernel_}, lock{kernel_}, cond_var{kernel_} {}
KResourceLimit::~KResourceLimit() = default;
void KResourceLimit::Initialize(const Core::Timing::CoreTiming* core_timing_) {

View File

@@ -36,7 +36,7 @@ class KResourceLimit final
KERNEL_AUTOOBJECT_TRAITS(KResourceLimit, KAutoObject);
public:
explicit KResourceLimit(KernelCore& kernel);
explicit KResourceLimit(KernelCore& kernel_);
virtual ~KResourceLimit();
void Initialize(const Core::Timing::CoreTiming* core_timing_);

View File

@@ -259,7 +259,7 @@ void KScheduler::OnThreadAffinityMaskChanged(KernelCore& kernel, KThread* thread
}
}
void KScheduler::RotateScheduledQueue(s32 core_id, s32 priority) {
void KScheduler::RotateScheduledQueue(s32 cpu_core_id, s32 priority) {
ASSERT(system.GlobalSchedulerContext().IsLocked());
// Get a reference to the priority queue.
@@ -267,7 +267,7 @@ void KScheduler::RotateScheduledQueue(s32 core_id, s32 priority) {
auto& priority_queue = GetPriorityQueue(kernel);
// Rotate the front of the queue to the end.
KThread* top_thread = priority_queue.GetScheduledFront(core_id, priority);
KThread* top_thread = priority_queue.GetScheduledFront(cpu_core_id, priority);
KThread* next_thread = nullptr;
if (top_thread != nullptr) {
next_thread = priority_queue.MoveToScheduledBack(top_thread);
@@ -279,7 +279,7 @@ void KScheduler::RotateScheduledQueue(s32 core_id, s32 priority) {
// While we have a suggested thread, try to migrate it!
{
KThread* suggested = priority_queue.GetSuggestedFront(core_id, priority);
KThread* suggested = priority_queue.GetSuggestedFront(cpu_core_id, priority);
while (suggested != nullptr) {
// Check if the suggested thread is the top thread on its core.
const s32 suggested_core = suggested->GetActiveCore();
@@ -300,7 +300,7 @@ void KScheduler::RotateScheduledQueue(s32 core_id, s32 priority) {
// to the front of the queue.
if (top_on_suggested_core == nullptr ||
top_on_suggested_core->GetPriority() >= HighestCoreMigrationAllowedPriority) {
suggested->SetActiveCore(core_id);
suggested->SetActiveCore(cpu_core_id);
priority_queue.ChangeCore(suggested_core, suggested, true);
IncrementScheduledCount(suggested);
break;
@@ -308,22 +308,22 @@ void KScheduler::RotateScheduledQueue(s32 core_id, s32 priority) {
}
// Get the next suggestion.
suggested = priority_queue.GetSamePriorityNext(core_id, suggested);
suggested = priority_queue.GetSamePriorityNext(cpu_core_id, suggested);
}
}
// Now that we might have migrated a thread with the same priority, check if we can do better.
{
KThread* best_thread = priority_queue.GetScheduledFront(core_id);
KThread* best_thread = priority_queue.GetScheduledFront(cpu_core_id);
if (best_thread == GetCurrentThread()) {
best_thread = priority_queue.GetScheduledNext(core_id, best_thread);
best_thread = priority_queue.GetScheduledNext(cpu_core_id, best_thread);
}
// If the best thread we can choose has a priority the same or worse than ours, try to
// migrate a higher priority thread.
if (best_thread != nullptr && best_thread->GetPriority() >= priority) {
KThread* suggested = priority_queue.GetSuggestedFront(core_id);
KThread* suggested = priority_queue.GetSuggestedFront(cpu_core_id);
while (suggested != nullptr) {
// If the suggestion's priority is the same as ours, don't bother.
if (suggested->GetPriority() >= best_thread->GetPriority()) {
@@ -342,7 +342,7 @@ void KScheduler::RotateScheduledQueue(s32 core_id, s32 priority) {
if (top_on_suggested_core == nullptr ||
top_on_suggested_core->GetPriority() >=
HighestCoreMigrationAllowedPriority) {
suggested->SetActiveCore(core_id);
suggested->SetActiveCore(cpu_core_id);
priority_queue.ChangeCore(suggested_core, suggested, true);
IncrementScheduledCount(suggested);
break;
@@ -350,7 +350,7 @@ void KScheduler::RotateScheduledQueue(s32 core_id, s32 priority) {
}
// Get the next suggestion.
suggested = priority_queue.GetSuggestedNext(core_id, suggested);
suggested = priority_queue.GetSuggestedNext(cpu_core_id, suggested);
}
}
}
@@ -607,7 +607,7 @@ void KScheduler::YieldToAnyThread(KernelCore& kernel) {
}
}
KScheduler::KScheduler(Core::System& system, s32 core_id) : system(system), core_id(core_id) {
KScheduler::KScheduler(Core::System& system_, s32 core_id_) : system{system_}, core_id{core_id_} {
switch_fiber = std::make_shared<Common::Fiber>(OnSwitch, this);
state.needs_scheduling.store(true);
state.interrupt_task_thread_runnable = false;

View File

@@ -30,7 +30,7 @@ class KThread;
class KScheduler final {
public:
explicit KScheduler(Core::System& system, s32 core_id);
explicit KScheduler(Core::System& system_, s32 core_id_);
~KScheduler();
/// Reschedules to the next available thread (call after current thread is suspended)
@@ -141,7 +141,7 @@ private:
[[nodiscard]] static KSchedulerPriorityQueue& GetPriorityQueue(KernelCore& kernel);
void RotateScheduledQueue(s32 core_id, s32 priority);
void RotateScheduledQueue(s32 cpu_core_id, s32 priority);
void Schedule() {
ASSERT(GetCurrentThread()->GetDisableDispatchCount() == 1);

View File

@@ -17,8 +17,8 @@ namespace Kernel {
class [[nodiscard]] KScopedSchedulerLockAndSleep {
public:
explicit KScopedSchedulerLockAndSleep(KernelCore & kernel, KThread * t, s64 timeout)
: kernel(kernel), thread(t), timeout_tick(timeout) {
explicit KScopedSchedulerLockAndSleep(KernelCore & kernel_, KThread * t, s64 timeout)
: kernel(kernel_), thread(t), timeout_tick(timeout) {
// Lock the scheduler.
kernel.GlobalSchedulerContext().scheduler_lock.Lock();
}

View File

@@ -14,7 +14,7 @@
namespace Kernel {
KServerPort::KServerPort(KernelCore& kernel) : KSynchronizationObject{kernel} {}
KServerPort::KServerPort(KernelCore& kernel_) : KSynchronizationObject{kernel_} {}
KServerPort::~KServerPort() = default;
void KServerPort::Initialize(KPort* parent_, std::string&& name_) {

View File

@@ -29,7 +29,7 @@ private:
using SessionList = boost::intrusive::list<KServerSession>;
public:
explicit KServerPort(KernelCore& kernel);
explicit KServerPort(KernelCore& kernel_);
virtual ~KServerPort() override;
using HLEHandler = std::shared_ptr<SessionRequestHandler>;

View File

@@ -23,7 +23,7 @@
namespace Kernel {
KServerSession::KServerSession(KernelCore& kernel) : KSynchronizationObject{kernel} {}
KServerSession::KServerSession(KernelCore& kernel_) : KSynchronizationObject{kernel_} {}
KServerSession::~KServerSession() {
kernel.ReleaseServiceThread(service_thread);
@@ -95,7 +95,7 @@ ResultCode KServerSession::HandleDomainSyncRequest(Kernel::HLERequestContext& co
UNREACHABLE();
return RESULT_SUCCESS; // Ignore error if asserts are off
}
return domain_request_handlers[object_id - 1]->HandleSyncRequest(context);
return domain_request_handlers[object_id - 1]->HandleSyncRequest(*this, context);
case IPC::DomainMessageHeader::CommandType::CloseVirtualHandle: {
LOG_DEBUG(IPC, "CloseVirtualHandle, object_id=0x{:08X}", object_id);
@@ -135,7 +135,7 @@ ResultCode KServerSession::CompleteSyncRequest(HLERequestContext& context) {
// If there is no domain header, the regular session handler is used
} else if (hle_handler != nullptr) {
// If this ServerSession has an associated HLE handler, forward the request to it.
result = hle_handler->HandleSyncRequest(context);
result = hle_handler->HandleSyncRequest(*this, context);
}
if (convert_to_domain) {

View File

@@ -40,7 +40,7 @@ class KServerSession final : public KSynchronizationObject,
friend class ServiceThread;
public:
explicit KServerSession(KernelCore& kernel);
explicit KServerSession(KernelCore& kernel_);
virtual ~KServerSession() override;
virtual void Destroy() override;

View File

@@ -11,8 +11,8 @@
namespace Kernel {
KSession::KSession(KernelCore& kernel)
: KAutoObjectWithSlabHeapAndContainer{kernel}, server{kernel}, client{kernel} {}
KSession::KSession(KernelCore& kernel_)
: KAutoObjectWithSlabHeapAndContainer{kernel_}, server{kernel_}, client{kernel_} {}
KSession::~KSession() = default;
void KSession::Initialize(KClientPort* port_, const std::string& name_) {
@@ -78,7 +78,7 @@ void KSession::OnClientClosed() {
void KSession::PostDestroy(uintptr_t arg) {
// Release the session count resource the owner process holds.
KProcess* owner = reinterpret_cast<KProcess*>(arg);
owner->GetResourceLimit()->Release(LimitableResource::Sessions, 1);
// owner->GetResourceLimit()->Release(LimitableResource::Sessions, 1);
owner->Close();
}

View File

@@ -17,7 +17,7 @@ class KSession final : public KAutoObjectWithSlabHeapAndContainer<KSession, KAut
KERNEL_AUTOOBJECT_TRAITS(KSession, KAutoObject);
public:
explicit KSession(KernelCore& kernel);
explicit KSession(KernelCore& kernel_);
virtual ~KSession() override;
void Initialize(KClientPort* port_, const std::string& name_);

View File

@@ -12,14 +12,14 @@
namespace Kernel {
KSharedMemory::KSharedMemory(KernelCore& kernel) : KAutoObjectWithSlabHeapAndContainer{kernel} {}
KSharedMemory::KSharedMemory(KernelCore& kernel_) : KAutoObjectWithSlabHeapAndContainer{kernel_} {}
KSharedMemory::~KSharedMemory() {
kernel.GetSystemResourceLimit()->Release(LimitableResource::PhysicalMemory, size);
}
ResultCode KSharedMemory::Initialize(KernelCore& kernel_, Core::DeviceMemory& device_memory_,
KProcess* owner_process_, KPageLinkedList&& page_list_,
ResultCode KSharedMemory::Initialize(Core::DeviceMemory& device_memory_, KProcess* owner_process_,
KPageLinkedList&& page_list_,
Svc::MemoryPermission owner_permission_,
Svc::MemoryPermission user_permission_,
PAddr physical_address_, std::size_t size_,
@@ -32,7 +32,7 @@ ResultCode KSharedMemory::Initialize(KernelCore& kernel_, Core::DeviceMemory& de
user_permission = user_permission_;
physical_address = physical_address_;
size = size_;
name = name_;
name = std::move(name_);
// Get the resource limit.
KResourceLimit* reslimit = kernel.GetSystemResourceLimit();
@@ -67,9 +67,9 @@ void KSharedMemory::Finalize() {
KAutoObjectWithSlabHeapAndContainer<KSharedMemory, KAutoObjectWithList>::Finalize();
}
ResultCode KSharedMemory::Map(KProcess& target_process, VAddr address, std::size_t size,
ResultCode KSharedMemory::Map(KProcess& target_process, VAddr address, std::size_t map_size,
Svc::MemoryPermission permissions) {
const u64 page_count{(size + PageSize - 1) / PageSize};
const u64 page_count{(map_size + PageSize - 1) / PageSize};
if (page_list.GetNumPages() != page_count) {
UNIMPLEMENTED_MSG("Page count does not match");
@@ -86,8 +86,8 @@ ResultCode KSharedMemory::Map(KProcess& target_process, VAddr address, std::size
ConvertToKMemoryPermission(permissions));
}
ResultCode KSharedMemory::Unmap(KProcess& target_process, VAddr address, std::size_t size) {
const u64 page_count{(size + PageSize - 1) / PageSize};
ResultCode KSharedMemory::Unmap(KProcess& target_process, VAddr address, std::size_t unmap_size) {
const u64 page_count{(unmap_size + PageSize - 1) / PageSize};
if (page_list.GetNumPages() != page_count) {
UNIMPLEMENTED_MSG("Page count does not match");

View File

@@ -24,12 +24,11 @@ class KSharedMemory final
KERNEL_AUTOOBJECT_TRAITS(KSharedMemory, KAutoObject);
public:
explicit KSharedMemory(KernelCore& kernel);
explicit KSharedMemory(KernelCore& kernel_);
~KSharedMemory() override;
ResultCode Initialize(KernelCore& kernel_, Core::DeviceMemory& device_memory_,
KProcess* owner_process_, KPageLinkedList&& page_list_,
Svc::MemoryPermission owner_permission_,
ResultCode Initialize(Core::DeviceMemory& device_memory_, KProcess* owner_process_,
KPageLinkedList&& page_list_, Svc::MemoryPermission owner_permission_,
Svc::MemoryPermission user_permission_, PAddr physical_address_,
std::size_t size_, std::string name_);
@@ -37,19 +36,19 @@ public:
* Maps a shared memory block to an address in the target process' address space
* @param target_process Process on which to map the memory block
* @param address Address in system memory to map shared memory block to
* @param size Size of the shared memory block to map
* @param map_size Size of the shared memory block to map
* @param permissions Memory block map permissions (specified by SVC field)
*/
ResultCode Map(KProcess& target_process, VAddr address, std::size_t size,
ResultCode Map(KProcess& target_process, VAddr address, std::size_t map_size,
Svc::MemoryPermission permissions);
/**
* Unmaps a shared memory block from an address in the target process' address space
* @param target_process Process on which to unmap the memory block
* @param address Address in system memory to unmap shared memory block
* @param size Size of the shared memory block to unmap
* @param unmap_size Size of the shared memory block to unmap
*/
ResultCode Unmap(KProcess& target_process, VAddr address, std::size_t size);
ResultCode Unmap(KProcess& target_process, VAddr address, std::size_t unmap_size);
/**
* Gets a pointer to the shared memory block

View File

@@ -18,18 +18,18 @@ void KSynchronizationObject::Finalize() {
KAutoObject::Finalize();
}
ResultCode KSynchronizationObject::Wait(KernelCore& kernel, s32* out_index,
ResultCode KSynchronizationObject::Wait(KernelCore& kernel_ctx, s32* out_index,
KSynchronizationObject** objects, const s32 num_objects,
s64 timeout) {
// Allocate space on stack for thread nodes.
std::vector<ThreadListNode> thread_nodes(num_objects);
// Prepare for wait.
KThread* thread = kernel.CurrentScheduler()->GetCurrentThread();
KThread* thread = kernel_ctx.CurrentScheduler()->GetCurrentThread();
{
// Setup the scheduling lock and sleep.
KScopedSchedulerLockAndSleep slp{kernel, thread, timeout};
KScopedSchedulerLockAndSleep slp{kernel_ctx, thread, timeout};
// Check if any of the objects are already signaled.
for (auto i = 0; i < num_objects; ++i) {
@@ -94,13 +94,13 @@ ResultCode KSynchronizationObject::Wait(KernelCore& kernel, s32* out_index,
thread->SetWaitObjectsForDebugging({});
// Cancel the timer as needed.
kernel.TimeManager().UnscheduleTimeEvent(thread);
kernel_ctx.TimeManager().UnscheduleTimeEvent(thread);
// Get the wait result.
ResultCode wait_result{RESULT_SUCCESS};
s32 sync_index = -1;
{
KScopedSchedulerLock lock(kernel);
KScopedSchedulerLock lock(kernel_ctx);
KSynchronizationObject* synced_obj;
wait_result = thread->GetWaitResult(std::addressof(synced_obj));
@@ -135,7 +135,8 @@ ResultCode KSynchronizationObject::Wait(KernelCore& kernel, s32* out_index,
return wait_result;
}
KSynchronizationObject::KSynchronizationObject(KernelCore& kernel) : KAutoObjectWithList{kernel} {}
KSynchronizationObject::KSynchronizationObject(KernelCore& kernel_)
: KAutoObjectWithList{kernel_} {}
KSynchronizationObject::~KSynchronizationObject() = default;

View File

@@ -60,8 +60,8 @@ static void ResetThreadContext64(Core::ARM_Interface::ThreadContext64& context,
namespace Kernel {
KThread::KThread(KernelCore& kernel)
: KAutoObjectWithSlabHeapAndContainer{kernel}, activity_pause_lock{kernel} {}
KThread::KThread(KernelCore& kernel_)
: KAutoObjectWithSlabHeapAndContainer{kernel_}, activity_pause_lock{kernel_} {}
KThread::~KThread() = default;
ResultCode KThread::Initialize(KThreadFunction func, uintptr_t arg, VAddr user_stack_top, s32 prio,
@@ -479,7 +479,7 @@ ResultCode KThread::GetPhysicalCoreMask(s32* out_ideal_core, u64* out_affinity_m
return RESULT_SUCCESS;
}
ResultCode KThread::SetCoreMask(s32 core_id, u64 v_affinity_mask) {
ResultCode KThread::SetCoreMask(s32 cpu_core_id, u64 v_affinity_mask) {
ASSERT(parent != nullptr);
ASSERT(v_affinity_mask != 0);
KScopedLightLock lk{activity_pause_lock};
@@ -491,18 +491,18 @@ ResultCode KThread::SetCoreMask(s32 core_id, u64 v_affinity_mask) {
ASSERT(num_core_migration_disables >= 0);
// If the core id is no-update magic, preserve the ideal core id.
if (core_id == Svc::IdealCoreNoUpdate) {
core_id = virtual_ideal_core_id;
R_UNLESS(((1ULL << core_id) & v_affinity_mask) != 0, ResultInvalidCombination);
if (cpu_core_id == Svc::IdealCoreNoUpdate) {
cpu_core_id = virtual_ideal_core_id;
R_UNLESS(((1ULL << cpu_core_id) & v_affinity_mask) != 0, ResultInvalidCombination);
}
// Set the virtual core/affinity mask.
virtual_ideal_core_id = core_id;
virtual_ideal_core_id = cpu_core_id;
virtual_affinity_mask = v_affinity_mask;
// Translate the virtual core to a physical core.
if (core_id >= 0) {
core_id = Core::Hardware::VirtualToPhysicalCoreMap[core_id];
if (cpu_core_id >= 0) {
cpu_core_id = Core::Hardware::VirtualToPhysicalCoreMap[cpu_core_id];
}
// Translate the virtual affinity mask to a physical one.
@@ -517,7 +517,7 @@ ResultCode KThread::SetCoreMask(s32 core_id, u64 v_affinity_mask) {
const KAffinityMask old_mask = physical_affinity_mask;
// Set our new ideals.
physical_ideal_core_id = core_id;
physical_ideal_core_id = cpu_core_id;
physical_affinity_mask.SetAffinityMask(p_affinity_mask);
if (physical_affinity_mask.GetAffinityMask() != old_mask.GetAffinityMask()) {
@@ -535,7 +535,7 @@ ResultCode KThread::SetCoreMask(s32 core_id, u64 v_affinity_mask) {
}
} else {
// Otherwise, we edit the original affinity for restoration later.
original_physical_ideal_core_id = core_id;
original_physical_ideal_core_id = cpu_core_id;
original_physical_affinity_mask.SetAffinityMask(p_affinity_mask);
}
}
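
The SetCoreMask hunk above renames core_id to cpu_core_id but keeps the same resolution steps: a no-update request preserves the current virtual ideal core, and non-negative cores are then translated to physical numbering. A hedged sketch of those two steps; the IdealCoreNoUpdate value and the map contents are assumptions for illustration:

#include <array>
#include <cstddef>
#include <cstdint>

constexpr std::int32_t IdealCoreNoUpdate = -3;                               // assumed sentinel value
constexpr std::array<std::int32_t, 4> VirtualToPhysicalCoreMap{0, 1, 2, 3};  // illustrative mapping

std::int32_t ResolveIdealCore(std::int32_t requested, std::int32_t current_virtual_ideal) {
    // Keep the previously configured virtual ideal core on a no-update request.
    std::int32_t core = (requested == IdealCoreNoUpdate) ? current_virtual_ideal : requested;
    // Translate non-negative virtual cores to physical numbering.
    if (core >= 0) {
        core = VirtualToPhysicalCoreMap[static_cast<std::size_t>(core)];
    }
    return core;
}
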
@@ -851,8 +851,8 @@ void KThread::RemoveWaiterImpl(KThread* thread) {
thread->SetLockOwner(nullptr);
}
void KThread::RestorePriority(KernelCore& kernel, KThread* thread) {
ASSERT(kernel.GlobalSchedulerContext().IsLocked());
void KThread::RestorePriority(KernelCore& kernel_ctx, KThread* thread) {
ASSERT(kernel_ctx.GlobalSchedulerContext().IsLocked());
while (true) {
// We want to inherit priority where possible.
@@ -868,7 +868,7 @@ void KThread::RestorePriority(KernelCore& kernel, KThread* thread) {
// Ensure we don't violate condition variable red black tree invariants.
if (auto* cv_tree = thread->GetConditionVariableTree(); cv_tree != nullptr) {
BeforeUpdatePriority(kernel, cv_tree, thread);
BeforeUpdatePriority(kernel_ctx, cv_tree, thread);
}
// Change the priority.
@@ -877,11 +877,11 @@ void KThread::RestorePriority(KernelCore& kernel, KThread* thread) {
// Restore the condition variable, if relevant.
if (auto* cv_tree = thread->GetConditionVariableTree(); cv_tree != nullptr) {
AfterUpdatePriority(kernel, cv_tree, thread);
AfterUpdatePriority(kernel_ctx, cv_tree, thread);
}
// Update the scheduler.
KScheduler::OnThreadPriorityChanged(kernel, thread, old_priority);
KScheduler::OnThreadPriorityChanged(kernel_ctx, thread, old_priority);
// Keep the lock owner up to date.
KThread* lock_owner = thread->GetLockOwner();

View File

@@ -111,7 +111,7 @@ public:
static constexpr s32 DefaultThreadPriority = 44;
static constexpr s32 IdleThreadPriority = Svc::LowestThreadPriority + 1;
explicit KThread(KernelCore& kernel);
explicit KThread(KernelCore& kernel_);
~KThread() override;
public:
@@ -318,7 +318,7 @@ public:
[[nodiscard]] ResultCode GetPhysicalCoreMask(s32* out_ideal_core, u64* out_affinity_mask);
[[nodiscard]] ResultCode SetCoreMask(s32 core_id, u64 v_affinity_mask);
[[nodiscard]] ResultCode SetCoreMask(s32 cpu_core_id, u64 v_affinity_mask);
[[nodiscard]] ResultCode SetActivity(Svc::ThreadActivity activity);
@@ -649,7 +649,7 @@ private:
std::function<void(void*)>&& init_func,
void* init_func_parameter);
static void RestorePriority(KernelCore& kernel, KThread* thread);
static void RestorePriority(KernelCore& kernel_ctx, KThread* thread);
// For core KThread implementation
ThreadContext32 thread_context_32{};

View File

@@ -10,7 +10,7 @@ namespace Kernel {
class KThreadQueue {
public:
explicit KThreadQueue(KernelCore& kernel) : kernel{kernel} {}
explicit KThreadQueue(KernelCore& kernel_) : kernel{kernel_} {}
bool IsEmpty() const {
return wait_list.empty();

View File

@@ -9,8 +9,8 @@
namespace Kernel {
KTransferMemory::KTransferMemory(KernelCore& kernel)
: KAutoObjectWithSlabHeapAndContainer{kernel} {}
KTransferMemory::KTransferMemory(KernelCore& kernel_)
: KAutoObjectWithSlabHeapAndContainer{kernel_} {}
KTransferMemory::~KTransferMemory() = default;

View File

@@ -26,7 +26,7 @@ class KTransferMemory final
KERNEL_AUTOOBJECT_TRAITS(KTransferMemory, KAutoObject);
public:
explicit KTransferMemory(KernelCore& kernel);
explicit KTransferMemory(KernelCore& kernel_);
virtual ~KTransferMemory() override;
ResultCode Initialize(VAddr address_, std::size_t size_, Svc::MemoryPermission owner_perm_);

View File

@@ -8,7 +8,8 @@
namespace Kernel {
KWritableEvent::KWritableEvent(KernelCore& kernel) : KAutoObjectWithSlabHeapAndContainer{kernel} {}
KWritableEvent::KWritableEvent(KernelCore& kernel_)
: KAutoObjectWithSlabHeapAndContainer{kernel_} {}
KWritableEvent::~KWritableEvent() = default;

View File

@@ -18,7 +18,7 @@ class KWritableEvent final
KERNEL_AUTOOBJECT_TRAITS(KWritableEvent, KAutoObject);
public:
explicit KWritableEvent(KernelCore& kernel);
explicit KWritableEvent(KernelCore& kernel_);
~KWritableEvent() override;
virtual void Destroy() override;

View File

@@ -44,6 +44,7 @@
#include "core/hle/kernel/time_manager.h"
#include "core/hle/lock.h"
#include "core/hle/result.h"
#include "core/hle/service/sm/sm.h"
#include "core/memory.h"
MICROPROFILE_DEFINE(Kernel_SVC, "Kernel", "SVC", MP_RGB(70, 200, 70));
@@ -51,11 +52,11 @@ MICROPROFILE_DEFINE(Kernel_SVC, "Kernel", "SVC", MP_RGB(70, 200, 70));
namespace Kernel {
struct KernelCore::Impl {
explicit Impl(Core::System& system, KernelCore& kernel)
: time_manager{system}, object_list_container{kernel}, system{system} {}
explicit Impl(Core::System& system_, KernelCore& kernel_)
: time_manager{system_}, object_list_container{kernel_}, system{system_} {}
void SetMulticore(bool is_multicore) {
this->is_multicore = is_multicore;
void SetMulticore(bool is_multi) {
is_multicore = is_multi;
}
void Initialize(KernelCore& kernel) {
@@ -599,19 +600,19 @@ struct KernelCore::Impl {
irs_shared_mem = KSharedMemory::Create(system.Kernel());
time_shared_mem = KSharedMemory::Create(system.Kernel());
hid_shared_mem->Initialize(system.Kernel(), system.DeviceMemory(), nullptr,
hid_shared_mem->Initialize(system.DeviceMemory(), nullptr,
{hid_phys_addr, hid_size / PageSize},
Svc::MemoryPermission::None, Svc::MemoryPermission::Read,
hid_phys_addr, hid_size, "HID:SharedMemory");
font_shared_mem->Initialize(system.Kernel(), system.DeviceMemory(), nullptr,
font_shared_mem->Initialize(system.DeviceMemory(), nullptr,
{font_phys_addr, font_size / PageSize},
Svc::MemoryPermission::None, Svc::MemoryPermission::Read,
font_phys_addr, font_size, "Font:SharedMemory");
irs_shared_mem->Initialize(system.Kernel(), system.DeviceMemory(), nullptr,
irs_shared_mem->Initialize(system.DeviceMemory(), nullptr,
{irs_phys_addr, irs_size / PageSize},
Svc::MemoryPermission::None, Svc::MemoryPermission::Read,
irs_phys_addr, irs_size, "IRS:SharedMemory");
time_shared_mem->Initialize(system.Kernel(), system.DeviceMemory(), nullptr,
time_shared_mem->Initialize(system.DeviceMemory(), nullptr,
{time_phys_addr, time_size / PageSize},
Svc::MemoryPermission::None, Svc::MemoryPermission::Read,
time_phys_addr, time_size, "Time:SharedMemory");
@@ -656,6 +657,7 @@ struct KernelCore::Impl {
/// Map of named ports managed by the kernel, which can be retrieved using
/// the ConnectToPort SVC.
std::unordered_map<std::string, ServiceInterfaceFactory> service_interface_factory;
NamedPortTable named_ports;
std::unique_ptr<Core::ExclusiveMonitor> exclusive_monitor;
@@ -844,18 +846,17 @@ void KernelCore::PrepareReschedule(std::size_t id) {
// TODO: Reimplement, this
}
void KernelCore::AddNamedPort(std::string name, KClientPort* port) {
port->Open();
impl->named_ports.emplace(std::move(name), port);
void KernelCore::RegisterNamedService(std::string name, ServiceInterfaceFactory&& factory) {
impl->service_interface_factory.emplace(std::move(name), factory);
}
KernelCore::NamedPortTable::iterator KernelCore::FindNamedPort(const std::string& name) {
return impl->named_ports.find(name);
}
KernelCore::NamedPortTable::const_iterator KernelCore::FindNamedPort(
const std::string& name) const {
return impl->named_ports.find(name);
KClientPort* KernelCore::CreateNamedServicePort(std::string name) {
auto search = impl->service_interface_factory.find(name);
if (search == impl->service_interface_factory.end()) {
UNIMPLEMENTED();
return {};
}
return &search->second(impl->system.ServiceManager(), impl->system);
}
bool KernelCore::IsValidNamedPort(NamedPortTable::const_iterator port) const {
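
The kernel.cpp hunk above replaces the named-port table with a map of ServiceInterfaceFactory callbacks: RegisterNamedService stores a factory under a name, and CreateNamedServicePort invokes it on demand to build the client port. A hedged usage sketch follows, assuming the relevant yuzu kernel and service headers are included; the service name, the factory body, and MakeExampleClientPort are illustrative assumptions, and only the two KernelCore calls come from the diff.

// Hypothetical helper, declaration only: builds an HLE service and returns its client port.
Kernel::KClientPort& MakeExampleClientPort(Service::SM::ServiceManager& sm, Core::System& system);

// Registration, typically done once while HLE services are installed.
void RegisterExampleService(Kernel::KernelCore& kernel) {
    kernel.RegisterNamedService(
        "example:svc", [](Service::SM::ServiceManager& sm,
                          Core::System& system) -> Kernel::KClientPort& {
            return MakeExampleClientPort(sm, system);
        });
}

// Resolution, as ConnectToNamedPort now does in the svc.cpp hunk further below:
//   Kernel::KClientPort* port = kernel.CreateNamedServicePort("example:svc");
//   if (!port) { /* unknown name -> ResultNotFound */ }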

View File

@@ -27,6 +27,10 @@ class CoreTiming;
struct EventType;
} // namespace Core::Timing
namespace Service::SM {
class ServiceManager;
}
namespace Kernel {
class KClientPort;
@@ -51,6 +55,9 @@ class ServiceThread;
class Synchronization;
class TimeManager;
using ServiceInterfaceFactory =
std::function<KClientPort&(Service::SM::ServiceManager&, Core::System&)>;
namespace Init {
struct KSlabResourceCounts;
}
@@ -172,14 +179,11 @@ public:
void InvalidateCpuInstructionCacheRange(VAddr addr, std::size_t size);
/// Adds a port to the named port table
void AddNamedPort(std::string name, KClientPort* port);
/// Registers a named HLE service, passing a factory used to open a port to that service.
void RegisterNamedService(std::string name, ServiceInterfaceFactory&& factory);
/// Finds a port within the named port table with the given name.
NamedPortTable::iterator FindNamedPort(const std::string& name);
/// Finds a port within the named port table with the given name.
NamedPortTable::const_iterator FindNamedPort(const std::string& name) const;
/// Opens a port to a service previously registered with RegisterNamedService.
KClientPort* CreateNamedServicePort(std::string name);
/// Determines whether or not the given port is a valid named port.
bool IsValidNamedPort(NamedPortTable::const_iterator port) const;

View File

@@ -13,10 +13,10 @@
namespace Kernel {
PhysicalCore::PhysicalCore(std::size_t core_index, Core::System& system,
Kernel::KScheduler& scheduler, Core::CPUInterrupts& interrupts)
: core_index{core_index}, system{system}, scheduler{scheduler},
interrupts{interrupts}, guard{std::make_unique<Common::SpinLock>()} {}
PhysicalCore::PhysicalCore(std::size_t core_index_, Core::System& system_, KScheduler& scheduler_,
Core::CPUInterrupts& interrupts_)
: core_index{core_index_}, system{system_}, scheduler{scheduler_},
interrupts{interrupts_}, guard{std::make_unique<Common::SpinLock>()} {}
PhysicalCore::~PhysicalCore() = default;

View File

@@ -28,8 +28,8 @@ namespace Kernel {
class PhysicalCore {
public:
PhysicalCore(std::size_t core_index, Core::System& system, Kernel::KScheduler& scheduler,
Core::CPUInterrupts& interrupts);
PhysicalCore(std::size_t core_index_, Core::System& system_, KScheduler& scheduler_,
Core::CPUInterrupts& interrupts_);
~PhysicalCore();
PhysicalCore(const PhysicalCore&) = delete;

View File

@@ -67,11 +67,11 @@ class KAutoObjectWithSlabHeapAndContainer : public Base {
private:
static Derived* Allocate(KernelCore& kernel) {
return kernel.SlabHeap<Derived>().AllocateWithKernel(kernel);
return new Derived(kernel);
}
static void Free(KernelCore& kernel, Derived* obj) {
kernel.SlabHeap<Derived>().Free(obj);
delete obj;
}
public:

View File

@@ -284,12 +284,11 @@ static ResultCode ConnectToNamedPort(Core::System& system, Handle* out, VAddr po
auto& handle_table = kernel.CurrentProcess()->GetHandleTable();
// Find the client port.
const auto it = kernel.FindNamedPort(port_name);
if (!kernel.IsValidNamedPort(it)) {
LOG_WARNING(Kernel_SVC, "tried to connect to unknown port: {}", port_name);
auto port = kernel.CreateNamedServicePort(port_name);
if (!port) {
LOG_ERROR(Kernel_SVC, "tried to connect to unknown port: {}", port_name);
return ResultNotFound;
}
auto port = it->second;
// Reserve a handle for the port.
// NOTE: Nintendo really does write directly to the output handle here.
@@ -820,10 +819,10 @@ static ResultCode GetInfo(Core::System& system, u64* result, u64 info_id, Handle
return RESULT_SUCCESS;
}
Handle handle{};
R_TRY(handle_table.Add(&handle, resource_limit));
Handle resource_handle{};
R_TRY(handle_table.Add(&resource_handle, resource_limit));
*result = handle;
*result = resource_handle;
return RESULT_SUCCESS;
}

View File

@@ -1,55 +0,0 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/hle/kernel/k_page_table.h"
#include "core/hle/kernel/k_resource_limit.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/transfer_memory.h"
#include "core/hle/result.h"
#include "core/memory.h"
namespace Kernel {
TransferMemory::TransferMemory(KernelCore& kernel, Core::Memory::Memory& memory)
: Object{kernel}, memory{memory} {}
TransferMemory::~TransferMemory() {
// Release memory region when transfer memory is destroyed
Reset();
owner_process->GetResourceLimit()->Release(LimitableResource::TransferMemory, 1);
}
std::shared_ptr<TransferMemory> TransferMemory::Create(KernelCore& kernel,
Core::Memory::Memory& memory,
VAddr base_address, std::size_t size,
KMemoryPermission permissions) {
std::shared_ptr<TransferMemory> transfer_memory{
std::make_shared<TransferMemory>(kernel, memory)};
transfer_memory->base_address = base_address;
transfer_memory->size = size;
transfer_memory->owner_permissions = permissions;
transfer_memory->owner_process = kernel.CurrentProcess();
return transfer_memory;
}
u8* TransferMemory::GetPointer() {
return memory.GetPointer(base_address);
}
const u8* TransferMemory::GetPointer() const {
return memory.GetPointer(base_address);
}
ResultCode TransferMemory::Reserve() {
return owner_process->PageTable().ReserveTransferMemory(base_address, size, owner_permissions);
}
ResultCode TransferMemory::Reset() {
return owner_process->PageTable().ResetTransferMemory(base_address, size);
}
} // namespace Kernel

View File

@@ -1,96 +0,0 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <memory>
#include "core/hle/kernel/k_memory_block.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/physical_memory.h"
union ResultCode;
namespace Core::Memory {
class Memory;
}
namespace Kernel {
class KernelCore;
class Process;
/// Defines the interface for transfer memory objects.
///
/// Transfer memory is typically used for the purpose of
/// transferring memory between separate process instances,
/// thus the name.
///
class TransferMemory final : public Object {
public:
explicit TransferMemory(KernelCore& kernel, Core::Memory::Memory& memory);
~TransferMemory() override;
static constexpr HandleType HANDLE_TYPE = HandleType::TransferMemory;
static std::shared_ptr<TransferMemory> Create(KernelCore& kernel, Core::Memory::Memory& memory,
VAddr base_address, std::size_t size,
KMemoryPermission permissions);
TransferMemory(const TransferMemory&) = delete;
TransferMemory& operator=(const TransferMemory&) = delete;
TransferMemory(TransferMemory&&) = delete;
TransferMemory& operator=(TransferMemory&&) = delete;
std::string GetTypeName() const override {
return "TransferMemory";
}
std::string GetName() const override {
return GetTypeName();
}
HandleType GetHandleType() const override {
return HANDLE_TYPE;
}
/// Gets a pointer to the backing block of this instance.
u8* GetPointer();
/// Gets a pointer to the backing block of this instance.
const u8* GetPointer() const;
/// Gets the size of the memory backing this instance in bytes.
constexpr std::size_t GetSize() const {
return size;
}
/// Reserves the region to be used for the transfer memory, called after the transfer memory is
/// created.
ResultCode Reserve();
/// Resets the region previously used for the transfer memory, called after the transfer memory
/// is closed.
ResultCode Reset();
void Finalize() override {}
private:
/// The base address for the memory managed by this instance.
VAddr base_address{};
/// Size of the memory, in bytes, that this instance manages.
std::size_t size{};
/// The memory permissions that are applied to this instance.
KMemoryPermission owner_permissions{};
/// The process that this transfer memory instance was created under.
Process* owner_process{};
Core::Memory::Memory& memory;
};
} // namespace Kernel

View File

@@ -56,7 +56,7 @@ enum class ErrorModule : u32 {
PCIe = 120,
Friends = 121,
BCAT = 122,
SSL = 123,
SSLSrv = 123,
Account = 124,
News = 125,
Mii = 126,

View File

@@ -833,7 +833,7 @@ IStorageImpl::~IStorageImpl() = default;
class StorageDataImpl final : public IStorageImpl {
public:
explicit StorageDataImpl(std::vector<u8>&& buffer) : buffer{std::move(buffer)} {}
explicit StorageDataImpl(std::vector<u8>&& buffer_) : buffer{std::move(buffer_)} {}
std::vector<u8>& GetData() override {
return buffer;
@@ -1513,9 +1513,9 @@ void IApplicationFunctions::GetDisplayVersion(Kernel::HLERequestContext& ctx) {
const FileSys::PatchManager pm{title_id, system.GetFileSystemController(),
system.GetContentProvider()};
auto res = pm.GetControlMetadata();
if (res.first != nullptr) {
return res;
auto metadata = pm.GetControlMetadata();
if (metadata.first != nullptr) {
return metadata;
}
const FileSys::PatchManager pm_update{FileSys::GetUpdateTitleID(title_id),
@@ -1550,9 +1550,9 @@ void IApplicationFunctions::GetDesiredLanguage(Kernel::HLERequestContext& ctx) {
const FileSys::PatchManager pm{title_id, system.GetFileSystemController(),
system.GetContentProvider()};
auto res = pm.GetControlMetadata();
if (res.first != nullptr) {
return res;
auto metadata = pm.GetControlMetadata();
if (metadata.first != nullptr) {
return metadata;
}
const FileSys::PatchManager pm_update{FileSys::GetUpdateTitleID(title_id),

View File

@@ -15,11 +15,11 @@ namespace Service::APM {
constexpr auto DEFAULT_PERFORMANCE_CONFIGURATION = PerformanceConfiguration::Config7;
Controller::Controller(Core::Timing::CoreTiming& core_timing)
: core_timing{core_timing}, configs{
{PerformanceMode::Handheld, DEFAULT_PERFORMANCE_CONFIGURATION},
{PerformanceMode::Docked, DEFAULT_PERFORMANCE_CONFIGURATION},
} {}
Controller::Controller(Core::Timing::CoreTiming& core_timing_)
: core_timing{core_timing_}, configs{
{PerformanceMode::Handheld, DEFAULT_PERFORMANCE_CONFIGURATION},
{PerformanceMode::Docked, DEFAULT_PERFORMANCE_CONFIGURATION},
} {}
Controller::~Controller() = default;

View File

@@ -50,7 +50,7 @@ enum class PerformanceMode : u8 {
// system during times of high load -- this simply maps to different PerformanceConfigs to use.
class Controller {
public:
explicit Controller(Core::Timing::CoreTiming& core_timing);
explicit Controller(Core::Timing::CoreTiming& core_timing_);
~Controller();
void SetPerformanceConfiguration(PerformanceMode mode, PerformanceConfiguration config);

View File

@@ -169,10 +169,9 @@ private:
class IAudioDevice final : public ServiceFramework<IAudioDevice> {
public:
explicit IAudioDevice(Core::System& system_, u32_le revision_num)
: ServiceFramework{system_, "IAudioDevice"}, revision{revision_num},
buffer_event{system.Kernel()}, audio_input_device_switch_event{system.Kernel()},
audio_output_device_switch_event{system.Kernel()} {
explicit IAudioDevice(Core::System& system_, Kernel::KEvent& buffer_event_, u32_le revision_)
: ServiceFramework{system_, "IAudioDevice"}, buffer_event{buffer_event_}, revision{
revision_} {
static const FunctionInfo functions[] = {
{0, &IAudioDevice::ListAudioDeviceName, "ListAudioDeviceName"},
{1, &IAudioDevice::SetAudioDeviceOutputVolume, "SetAudioDeviceOutputVolume"},
@@ -189,18 +188,6 @@ public:
{13, nullptr, "GetAudioSystemMasterVolumeSetting"},
};
RegisterHandlers(functions);
Kernel::KAutoObject::Create(std::addressof(buffer_event));
buffer_event.Initialize("IAudioOutBufferReleasedEvent");
// Should be similar to audio_output_device_switch_event
Kernel::KAutoObject::Create(std::addressof(audio_input_device_switch_event));
audio_input_device_switch_event.Initialize("IAudioDevice:AudioInputDeviceSwitchedEvent");
// Should only be signalled when an audio output device has been changed, example: speaker
// to headset
Kernel::KAutoObject::Create(std::addressof(audio_output_device_switch_event));
audio_output_device_switch_event.Initialize("IAudioDevice:AudioOutputDeviceSwitchedEvent");
}
private:
@@ -310,7 +297,7 @@ private:
IPC::ResponseBuilder rb{ctx, 2, 1};
rb.Push(RESULT_SUCCESS);
rb.PushCopyObjects(audio_input_device_switch_event.GetReadableEvent());
rb.PushCopyObjects(buffer_event.GetReadableEvent());
}
void QueryAudioDeviceOutputEvent(Kernel::HLERequestContext& ctx) {
@@ -318,17 +305,16 @@ private:
IPC::ResponseBuilder rb{ctx, 2, 1};
rb.Push(RESULT_SUCCESS);
rb.PushCopyObjects(audio_output_device_switch_event.GetReadableEvent());
rb.PushCopyObjects(buffer_event.GetReadableEvent());
}
Kernel::KEvent& buffer_event;
u32_le revision = 0;
Kernel::KEvent buffer_event;
Kernel::KEvent audio_input_device_switch_event;
Kernel::KEvent audio_output_device_switch_event;
};
}; // namespace Audio
AudRenU::AudRenU(Core::System& system_)
: ServiceFramework{system_, "audren:u"}, buffer_event{system.Kernel()} {
AudRenU::AudRenU(Core::System& system_) : ServiceFramework{system_, "audren:u"} {
// clang-format off
static const FunctionInfo functions[] = {
{0, &AudRenU::OpenAudioRenderer, "OpenAudioRenderer"},
@@ -340,6 +326,9 @@ AudRenU::AudRenU(Core::System& system_) : ServiceFramework{system_, "audren:u"}
// clang-format on
RegisterHandlers(functions);
Kernel::KAutoObject::Create(std::addressof(buffer_event));
buffer_event.Initialize("IAudioOutBufferReleasedEvent");
}
AudRenU::~AudRenU() = default;
@@ -373,7 +362,7 @@ void AudRenU::GetAudioRendererWorkBufferSize(Kernel::HLERequestContext& ctx) {
static constexpr u64 max_perf_detail_entries = 100;
// Size of the data structure representing the bulk of the voice-related state.
static constexpr u64 voice_state_size = 0x100;
static constexpr u64 voice_state_size_bytes = 0x100;
// Size of the upsampler manager data structure
constexpr u64 upsampler_manager_size = 0x48;
@@ -460,7 +449,8 @@ void AudRenU::GetAudioRendererWorkBufferSize(Kernel::HLERequestContext& ctx) {
size += Common::AlignUp(voice_info_size * params.voice_count, info_field_alignment_size);
size +=
Common::AlignUp(voice_resource_size * params.voice_count, info_field_alignment_size);
size += Common::AlignUp(voice_state_size * params.voice_count, info_field_alignment_size);
size +=
Common::AlignUp(voice_state_size_bytes * params.voice_count, info_field_alignment_size);
return size;
};
@@ -662,7 +652,7 @@ void AudRenU::GetAudioDeviceService(Kernel::HLERequestContext& ctx) {
// always assumes the initial release revision (REV1).
IPC::ResponseBuilder rb{ctx, 2, 0, 1};
rb.Push(RESULT_SUCCESS);
rb.PushIpcInterface<IAudioDevice>(system, Common::MakeMagic('R', 'E', 'V', '1'));
rb.PushIpcInterface<IAudioDevice>(system, buffer_event, Common::MakeMagic('R', 'E', 'V', '1'));
}
void AudRenU::OpenAudioRendererForManualExecution(Kernel::HLERequestContext& ctx) {
@@ -684,7 +674,7 @@ void AudRenU::GetAudioDeviceServiceWithRevisionInfo(Kernel::HLERequestContext& c
IPC::ResponseBuilder rb{ctx, 2, 0, 1};
rb.Push(RESULT_SUCCESS);
rb.PushIpcInterface<IAudioDevice>(system, revision);
rb.PushIpcInterface<IAudioDevice>(system, buffer_event, revision);
}
void AudRenU::OpenAudioRendererImpl(Kernel::HLERequestContext& ctx) {
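
In the audren hunks above, IAudioDevice no longer creates its own events; AudRenU owns a single buffer_event and injects a reference into every IAudioDevice it builds, and both device-switch queries now report that shared event. A minimal sketch of the injection pattern with a hypothetical Client class; only the KEvent type and the GetReadableEvent call are taken from the diff.

#include "core/hle/kernel/k_event.h" // as included by the audren header hunk below

// Client borrows an event owned elsewhere instead of constructing its own.
class Client {
public:
    explicit Client(Kernel::KEvent& buffer_event_) : buffer_event{buffer_event_} {}

    auto& QueryEvent() {
        return buffer_event.GetReadableEvent();
    }

private:
    Kernel::KEvent& buffer_event; // not owned; the owner controls creation and lifetime
};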

View File

@@ -4,6 +4,7 @@
#pragma once
#include "core/hle/kernel/k_event.h"
#include "core/hle/service/service.h"
namespace Core {
@@ -31,6 +32,7 @@ private:
void OpenAudioRendererImpl(Kernel::HLERequestContext& ctx);
std::size_t audren_instance_count = 0;
Kernel::KEvent buffer_event;
};
// Describes a particular audio feature that may be supported in a particular revision.

View File

@@ -50,8 +50,8 @@ public:
Enabled,
};
explicit OpusDecoderState(OpusDecoderPtr decoder, u32 sample_rate, u32 channel_count)
: decoder{std::move(decoder)}, sample_rate{sample_rate}, channel_count{channel_count} {}
explicit OpusDecoderState(OpusDecoderPtr decoder_, u32 sample_rate_, u32 channel_count_)
: decoder{std::move(decoder_)}, sample_rate{sample_rate_}, channel_count{channel_count_} {}
// Decodes interleaved Opus packets. Optionally allows reporting time taken to
// perform the decoding, as well as any relevant extra behavior.
@@ -160,9 +160,9 @@ private:
class IHardwareOpusDecoderManager final : public ServiceFramework<IHardwareOpusDecoderManager> {
public:
explicit IHardwareOpusDecoderManager(Core::System& system_, OpusDecoderState decoder_state)
explicit IHardwareOpusDecoderManager(Core::System& system_, OpusDecoderState decoder_state_)
: ServiceFramework{system_, "IHardwareOpusDecoderManager"}, decoder_state{
std::move(decoder_state)} {
std::move(decoder_state_)} {
// clang-format off
static const FunctionInfo functions[] = {
{0, &IHardwareOpusDecoderManager::DecodeInterleavedOld, "DecodeInterleavedOld"},

View File

@@ -3,9 +3,18 @@
// Refer to the license.txt file included.
#include <fmt/ostream.h>
#ifdef __GNUC__
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wshadow"
#endif
#include <httplib.h>
#include <mbedtls/sha256.h>
#include <nlohmann/json.hpp>
#ifdef __GNUC__
#pragma GCC diagnostic pop
#endif
#include "common/hex_util.h"
#include "common/logging/backend.h"
#include "common/logging/log.h"
@@ -178,8 +187,8 @@ bool VfsRawCopyDProgress(FileSys::VirtualDir src, FileSys::VirtualDir dest,
class Boxcat::Client {
public:
Client(std::string path, u64 title_id, u64 build_id)
: path(std::move(path)), title_id(title_id), build_id(build_id) {}
Client(std::string path_, u64 title_id_, u64 build_id_)
: path(std::move(path_)), title_id(title_id_), build_id(build_id_) {}
DownloadResult DownloadDataZip() {
return DownloadInternal(fmt::format(BOXCAT_PATHNAME_DATA, title_id), TIMEOUT_SECONDS,

View File

@@ -6,7 +6,7 @@
namespace Service::HID {
ControllerBase::ControllerBase(Core::System& system) : system(system) {}
ControllerBase::ControllerBase(Core::System& system_) : system(system_) {}
ControllerBase::~ControllerBase() = default;
void ControllerBase::ActivateController() {

View File

@@ -18,7 +18,7 @@ class System;
namespace Service::HID {
class ControllerBase {
public:
explicit ControllerBase(Core::System& system);
explicit ControllerBase(Core::System& system_);
virtual ~ControllerBase();
// Called when the controller is initialized

View File

@@ -1,10 +1,9 @@
// Copyright 2018 yuzu emulator team
// Copyright 2021 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <cstring>
#include "common/common_types.h"
#include "common/logging/log.h"
#include "common/math_util.h"
#include "common/settings.h"
#include "core/core_timing.h"
#include "core/frontend/emu_window.h"
@@ -12,10 +11,19 @@
namespace Service::HID {
constexpr std::size_t SHARED_MEMORY_OFFSET = 0x3BA00;
constexpr f32 angle_threshold = 0.08f;
constexpr f32 pinch_threshold = 100.0f;
Controller_Gesture::Controller_Gesture(Core::System& system_) : ControllerBase{system_} {}
// HW is around 700, value is set to 400 to make it easier to trigger with mouse
constexpr f32 swipe_threshold = 400.0f; // Threshold in pixels/s
constexpr f32 angle_threshold = 0.015f; // Threshold in radians
constexpr f32 pinch_threshold = 0.5f; // Threshold in pixels
constexpr f32 press_delay = 0.5f; // Time in seconds
constexpr f32 double_tap_delay = 0.35f; // Time in seconds
constexpr f32 Square(s32 num) {
return static_cast<f32>(num * num);
}
Controller_Gesture::Controller_Gesture(Core::System& system_) : ControllerBase(system_) {}
Controller_Gesture::~Controller_Gesture() = default;
void Controller_Gesture::OnInit() {
@@ -24,6 +32,8 @@ void Controller_Gesture::OnInit() {
keyboard_finger_id[id] = MAX_POINTS;
udp_finger_id[id] = MAX_POINTS;
}
shared_memory.header.entry_count = 0;
force_update = true;
}
void Controller_Gesture::OnRelease() {}
@@ -38,17 +48,23 @@ void Controller_Gesture::OnUpdate(const Core::Timing::CoreTiming& core_timing, u
shared_memory.header.last_entry_index = 0;
return;
}
shared_memory.header.entry_count = 16;
const auto& last_entry = shared_memory.gesture_states[shared_memory.header.last_entry_index];
shared_memory.header.last_entry_index = (shared_memory.header.last_entry_index + 1) % 17;
auto& cur_entry = shared_memory.gesture_states[shared_memory.header.last_entry_index];
ReadTouchInput();
cur_entry.sampling_number = last_entry.sampling_number + 1;
cur_entry.sampling_number2 = cur_entry.sampling_number;
GestureProperties gesture = GetGestureProperties();
f32 time_difference = static_cast<f32>(shared_memory.header.timestamp - last_update_timestamp) /
(1000 * 1000 * 1000);
// TODO(german77): Implement all gesture types
// Only update if necesary
if (!ShouldUpdateGesture(gesture, time_difference)) {
return;
}
last_update_timestamp = shared_memory.header.timestamp;
UpdateGestureSharedMemory(data, size, gesture, time_difference);
}
void Controller_Gesture::ReadTouchInput() {
const Input::TouchStatus& mouse_status = touch_mouse_device->GetStatus();
const Input::TouchStatus& udp_status = touch_udp_device->GetStatus();
for (std::size_t id = 0; id < mouse_status.size(); ++id) {
@@ -63,65 +79,255 @@ void Controller_Gesture::OnUpdate(const Core::Timing::CoreTiming& core_timing, u
UpdateTouchInputEvent(keyboard_status[id], keyboard_finger_id[id]);
}
}
}
bool Controller_Gesture::ShouldUpdateGesture(const GestureProperties& gesture,
f32 time_difference) {
const auto& last_entry = shared_memory.gesture_states[shared_memory.header.last_entry_index];
if (force_update) {
force_update = false;
return true;
}
// Update if coordinates change
for (size_t id = 0; id < MAX_POINTS; id++) {
if (gesture.points[id] != last_gesture.points[id]) {
return true;
}
}
// Update on press and hold event after 0.5 seconds
if (last_entry.type == TouchType::Touch && last_entry.point_count == 1 &&
time_difference > press_delay) {
return enable_press_and_tap;
}
return false;
}
void Controller_Gesture::UpdateGestureSharedMemory(u8* data, std::size_t size,
GestureProperties& gesture,
f32 time_difference) {
TouchType type = TouchType::Idle;
Attribute attributes{};
GestureProperties gesture = GetGestureProperties();
if (last_gesture.active_points != gesture.active_points) {
++last_gesture.detection_count;
const auto& last_entry = shared_memory.gesture_states[shared_memory.header.last_entry_index];
shared_memory.header.last_entry_index = (shared_memory.header.last_entry_index + 1) % 17;
auto& cur_entry = shared_memory.gesture_states[shared_memory.header.last_entry_index];
if (shared_memory.header.entry_count < 16) {
shared_memory.header.entry_count++;
}
if (gesture.active_points > 0) {
if (last_gesture.active_points == 0) {
attributes.is_new_touch.Assign(true);
last_gesture.average_distance = gesture.average_distance;
last_gesture.angle = gesture.angle;
}
type = TouchType::Touch;
if (gesture.mid_point.x != last_entry.x || gesture.mid_point.y != last_entry.y) {
type = TouchType::Pan;
}
if (std::abs(gesture.average_distance - last_gesture.average_distance) > pinch_threshold) {
type = TouchType::Pinch;
}
if (std::abs(gesture.angle - last_gesture.angle) > angle_threshold) {
type = TouchType::Rotate;
}
cur_entry.sampling_number = last_entry.sampling_number + 1;
cur_entry.sampling_number2 = cur_entry.sampling_number;
cur_entry.delta_x = gesture.mid_point.x - last_entry.x;
cur_entry.delta_y = gesture.mid_point.y - last_entry.y;
// TODO: Find how velocities are calculated
cur_entry.vel_x = static_cast<float>(cur_entry.delta_x) * 150.1f;
cur_entry.vel_y = static_cast<float>(cur_entry.delta_y) * 150.1f;
// Slowdown the rate of change for less flapping
last_gesture.average_distance =
(last_gesture.average_distance * 0.9f) + (gesture.average_distance * 0.1f);
last_gesture.angle = (last_gesture.angle * 0.9f) + (gesture.angle * 0.1f);
} else {
cur_entry.delta_x = 0;
cur_entry.delta_y = 0;
cur_entry.vel_x = 0;
cur_entry.vel_y = 0;
}
last_gesture.active_points = gesture.active_points;
cur_entry.detection_count = last_gesture.detection_count;
cur_entry.type = type;
cur_entry.attributes = attributes;
cur_entry.x = gesture.mid_point.x;
cur_entry.y = gesture.mid_point.y;
cur_entry.point_count = static_cast<s32>(gesture.active_points);
for (size_t id = 0; id < MAX_POINTS; id++) {
cur_entry.points[id].x = gesture.points[id].x;
cur_entry.points[id].y = gesture.points[id].y;
}
// Reset values to default
cur_entry.delta = {};
cur_entry.vel_x = 0;
cur_entry.vel_y = 0;
cur_entry.direction = Direction::None;
cur_entry.rotation_angle = 0;
cur_entry.scale = 0;
if (gesture.active_points > 0) {
if (last_gesture.active_points == 0) {
NewGesture(gesture, type, attributes);
} else {
UpdateExistingGesture(gesture, type, time_difference);
}
} else {
EndGesture(gesture, last_gesture, type, attributes, time_difference);
}
// Apply attributes
cur_entry.detection_count = gesture.detection_count;
cur_entry.type = type;
cur_entry.attributes = attributes;
cur_entry.pos = gesture.mid_point;
cur_entry.point_count = static_cast<s32>(gesture.active_points);
cur_entry.points = gesture.points;
last_gesture = gesture;
std::memcpy(data + SHARED_MEMORY_OFFSET, &shared_memory, sizeof(SharedMemory));
}
void Controller_Gesture::NewGesture(GestureProperties& gesture, TouchType& type,
Attribute& attributes) {
const auto& last_entry = GetLastGestureEntry();
gesture.detection_count++;
type = TouchType::Touch;
// New touch after cancel is not considered new
if (last_entry.type != TouchType::Cancel) {
attributes.is_new_touch.Assign(1);
enable_press_and_tap = true;
}
}
void Controller_Gesture::UpdateExistingGesture(GestureProperties& gesture, TouchType& type,
f32 time_difference) {
const auto& last_entry = GetLastGestureEntry();
// Promote to pan type if touch moved
for (size_t id = 0; id < MAX_POINTS; id++) {
if (gesture.points[id] != last_gesture.points[id]) {
type = TouchType::Pan;
break;
}
}
// Number of fingers changed cancel the last event and clear data
if (gesture.active_points != last_gesture.active_points) {
type = TouchType::Cancel;
enable_press_and_tap = false;
gesture.active_points = 0;
gesture.mid_point = {};
gesture.points.fill({});
return;
}
// Calculate extra parameters of panning
if (type == TouchType::Pan) {
UpdatePanEvent(gesture, last_gesture, type, time_difference);
return;
}
// Promote to press type
if (last_entry.type == TouchType::Touch) {
type = TouchType::Press;
}
}
void Controller_Gesture::EndGesture(GestureProperties& gesture,
GestureProperties& last_gesture_props, TouchType& type,
Attribute& attributes, f32 time_difference) {
const auto& last_entry = GetLastGestureEntry();
if (last_gesture_props.active_points != 0) {
switch (last_entry.type) {
case TouchType::Touch:
if (enable_press_and_tap) {
SetTapEvent(gesture, last_gesture_props, type, attributes);
return;
}
type = TouchType::Cancel;
force_update = true;
break;
case TouchType::Press:
case TouchType::Tap:
case TouchType::Swipe:
case TouchType::Pinch:
case TouchType::Rotate:
type = TouchType::Complete;
force_update = true;
break;
case TouchType::Pan:
EndPanEvent(gesture, last_gesture_props, type, time_difference);
break;
default:
break;
}
return;
}
if (last_entry.type == TouchType::Complete || last_entry.type == TouchType::Cancel) {
gesture.detection_count++;
}
}
void Controller_Gesture::SetTapEvent(GestureProperties& gesture,
GestureProperties& last_gesture_props, TouchType& type,
Attribute& attributes) {
type = TouchType::Tap;
gesture = last_gesture_props;
force_update = true;
f32 tap_time_difference =
static_cast<f32>(last_update_timestamp - last_tap_timestamp) / (1000 * 1000 * 1000);
last_tap_timestamp = last_update_timestamp;
if (tap_time_difference < double_tap_delay) {
attributes.is_double_tap.Assign(1);
}
}
void Controller_Gesture::UpdatePanEvent(GestureProperties& gesture,
GestureProperties& last_gesture_props, TouchType& type,
f32 time_difference) {
auto& cur_entry = shared_memory.gesture_states[shared_memory.header.last_entry_index];
const auto& last_entry = GetLastGestureEntry();
cur_entry.delta = gesture.mid_point - last_entry.pos;
cur_entry.vel_x = static_cast<f32>(cur_entry.delta.x) / time_difference;
cur_entry.vel_y = static_cast<f32>(cur_entry.delta.y) / time_difference;
last_pan_time_difference = time_difference;
// Promote to pinch type
if (std::abs(gesture.average_distance - last_gesture_props.average_distance) >
pinch_threshold) {
type = TouchType::Pinch;
cur_entry.scale = gesture.average_distance / last_gesture_props.average_distance;
}
const f32 angle_between_two_lines = std::atan((gesture.angle - last_gesture_props.angle) /
(1 + (gesture.angle * last_gesture_props.angle)));
// Promote to rotate type
if (std::abs(angle_between_two_lines) > angle_threshold) {
type = TouchType::Rotate;
cur_entry.scale = 0;
cur_entry.rotation_angle = angle_between_two_lines * 180.0f / Common::PI;
}
}
void Controller_Gesture::EndPanEvent(GestureProperties& gesture,
GestureProperties& last_gesture_props, TouchType& type,
f32 time_difference) {
auto& cur_entry = shared_memory.gesture_states[shared_memory.header.last_entry_index];
const auto& last_entry = GetLastGestureEntry();
cur_entry.vel_x =
static_cast<f32>(last_entry.delta.x) / (last_pan_time_difference + time_difference);
cur_entry.vel_y =
static_cast<f32>(last_entry.delta.y) / (last_pan_time_difference + time_difference);
const f32 curr_vel =
std::sqrt((cur_entry.vel_x * cur_entry.vel_x) + (cur_entry.vel_y * cur_entry.vel_y));
// Set swipe event with parameters
if (curr_vel > swipe_threshold) {
SetSwipeEvent(gesture, last_gesture_props, type);
return;
}
// End panning without swipe
type = TouchType::Complete;
cur_entry.vel_x = 0;
cur_entry.vel_y = 0;
force_update = true;
}
void Controller_Gesture::SetSwipeEvent(GestureProperties& gesture,
GestureProperties& last_gesture_props, TouchType& type) {
auto& cur_entry = shared_memory.gesture_states[shared_memory.header.last_entry_index];
const auto& last_entry = GetLastGestureEntry();
type = TouchType::Swipe;
gesture = last_gesture_props;
force_update = true;
cur_entry.delta = last_entry.delta;
if (std::abs(cur_entry.delta.x) > std::abs(cur_entry.delta.y)) {
if (cur_entry.delta.x > 0) {
cur_entry.direction = Direction::Right;
return;
}
cur_entry.direction = Direction::Left;
return;
}
if (cur_entry.delta.y > 0) {
cur_entry.direction = Direction::Down;
return;
}
cur_entry.direction = Direction::Up;
}
void Controller_Gesture::OnLoadInputDevices() {
touch_mouse_device = Input::CreateDevice<Input::TouchDevice>("engine:emu_window");
touch_udp_device = Input::CreateDevice<Input::TouchDevice>("engine:cemuhookudp");
@@ -144,6 +350,14 @@ std::optional<std::size_t> Controller_Gesture::GetUnusedFingerID() const {
return std::nullopt;
}
Controller_Gesture::GestureState& Controller_Gesture::GetLastGestureEntry() {
return shared_memory.gesture_states[(shared_memory.header.last_entry_index + 16) % 17];
}
const Controller_Gesture::GestureState& Controller_Gesture::GetLastGestureEntry() const {
return shared_memory.gesture_states[(shared_memory.header.last_entry_index + 16) % 17];
}
std::size_t Controller_Gesture::UpdateTouchInputEvent(
const std::tuple<float, float, bool>& touch_input, std::size_t finger_id) {
const auto& [x, y, pressed] = touch_input;
@@ -161,8 +375,7 @@ std::size_t Controller_Gesture::UpdateTouchInputEvent(
finger_id = first_free_id.value();
fingers[finger_id].pressed = true;
}
fingers[finger_id].x = x;
fingers[finger_id].y = y;
fingers[finger_id].pos = {x, y};
return finger_id;
}
@@ -182,24 +395,35 @@ Controller_Gesture::GestureProperties Controller_Gesture::GetGestureProperties()
static_cast<std::size_t>(std::distance(active_fingers.begin(), end_iter));
for (size_t id = 0; id < gesture.active_points; ++id) {
gesture.points[id].x =
static_cast<int>(active_fingers[id].x * Layout::ScreenUndocked::Width);
gesture.points[id].y =
static_cast<int>(active_fingers[id].y * Layout::ScreenUndocked::Height);
gesture.mid_point.x += static_cast<int>(gesture.points[id].x / gesture.active_points);
gesture.mid_point.y += static_cast<int>(gesture.points[id].y / gesture.active_points);
const auto& [active_x, active_y] = active_fingers[id].pos;
gesture.points[id] = {
.x = static_cast<s32>(active_x * Layout::ScreenUndocked::Width),
.y = static_cast<s32>(active_y * Layout::ScreenUndocked::Height),
};
// Hack: There is no touch in docked but games still allow it
if (Settings::values.use_docked_mode.GetValue()) {
gesture.points[id] = {
.x = static_cast<s32>(active_x * Layout::ScreenDocked::Width),
.y = static_cast<s32>(active_y * Layout::ScreenDocked::Height),
};
}
gesture.mid_point.x += static_cast<s32>(gesture.points[id].x / gesture.active_points);
gesture.mid_point.y += static_cast<s32>(gesture.points[id].y / gesture.active_points);
}
for (size_t id = 0; id < gesture.active_points; ++id) {
const double distance =
std::pow(static_cast<float>(gesture.mid_point.x - gesture.points[id].x), 2) +
std::pow(static_cast<float>(gesture.mid_point.y - gesture.points[id].y), 2);
gesture.average_distance +=
static_cast<float>(distance) / static_cast<float>(gesture.active_points);
const f32 distance = std::sqrt(Square(gesture.mid_point.x - gesture.points[id].x) +
Square(gesture.mid_point.y - gesture.points[id].y));
gesture.average_distance += distance / static_cast<f32>(gesture.active_points);
}
gesture.angle = std::atan2(static_cast<float>(gesture.mid_point.y - gesture.points[0].y),
static_cast<float>(gesture.mid_point.x - gesture.points[0].x));
gesture.angle = std::atan2(static_cast<f32>(gesture.mid_point.y - gesture.points[0].y),
static_cast<f32>(gesture.mid_point.x - gesture.points[0].x));
gesture.detection_count = last_gesture.detection_count;
return gesture;
}
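
The last hunk replaces the squared-distance accumulation with a Square helper plus std::sqrt and keeps std::atan2 for the gesture angle. A self-contained sketch of that geometry for two fixed touch points; the coordinates are made up, and only Square, the sqrt-based average distance, and the atan2 call mirror the diff.

#include <cmath>
#include <cstdint>
#include <cstdio>

using s32 = std::int32_t;
using f32 = float;

constexpr f32 Square(s32 num) {
    return static_cast<f32>(num * num);
}

int main() {
    const s32 xs[2] = {100, 300};
    const s32 ys[2] = {200, 400};

    // Mid point of the two active fingers.
    const s32 mid_x = (xs[0] + xs[1]) / 2;
    const s32 mid_y = (ys[0] + ys[1]) / 2;

    // Average distance of the fingers from the mid point, via the Square helper.
    f32 average_distance = 0.0f;
    for (int id = 0; id < 2; ++id) {
        average_distance +=
            std::sqrt(Square(mid_x - xs[id]) + Square(mid_y - ys[id])) / 2.0f;
    }

    // Angle of the first finger relative to the mid point, as in the diff.
    const f32 angle = std::atan2(static_cast<f32>(mid_y - ys[0]),
                                 static_cast<f32>(mid_x - xs[0]));

    std::printf("mid=(%d,%d) avg_dist=%.1f angle=%.3f rad\n", mid_x, mid_y,
                average_distance, angle);
    return 0;
}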

Some files were not shown because too many files have changed in this diff.