Compare commits

..

116 Commits

Author SHA1 Message Date
Lioncash
28719ee3b4 kernel/svc: Implement svcGetThreadList
Similar to svcGetProcessList, this retrieves the list of threads
from the current process. In the kernel itself, a process instance
maintains a list of threads, which is used within this function.

Threads are registered to a process' thread list at thread
initialization, and unregistered from the list upon thread destruction
(if said thread has a non-null owning process).

We assert on the debug event case, as we currently don't implement
kernel debug objects.
2019-04-02 00:48:40 -04:00
Lioncash
cb2bce8006 kernel/svc: Implement svcGetProcessList
This service function simply copies out a specified number of kernel
process IDs, while simultaneously reporting the total number of
processes.
2019-04-02 00:47:14 -04:00
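
As an illustration of the copy-out pattern these two SVCs share, here is a minimal hedged sketch; the kernel and guest-memory types are simplified to standard containers, so the names below are illustrative, not yuzu's exact implementation:

#include <algorithm>
#include <cstdint>
#include <vector>

// Hedged sketch of the SVC copy-out pattern described above.
std::uint32_t GetProcessList(const std::vector<std::uint64_t>& process_ids, // kernel's list
                             std::uint64_t* out_ids, std::uint32_t out_capacity) {
    const auto total = static_cast<std::uint32_t>(process_ids.size());
    const std::uint32_t copy_amount = std::min(out_capacity, total);
    // Copy as many IDs as fit in the caller's buffer...
    std::copy_n(process_ids.begin(), copy_amount, out_ids);
    // ...while still reporting the total number of processes.
    return total;
}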
Mat M
628153cccd Merge pull request #2316 from ReinUsesLisp/fixup-process
process: Fix up compilation
2019-04-02 00:46:00 -04:00
ReinUsesLisp
592a24ae53 process: Fix up compilation 2019-04-02 01:44:32 -03:00
bunnei
29df6bbbd3 Merge pull request #2281 from lioncash/memory
kernel/codeset: Make CodeSet's memory data member a regular std::vector
2019-04-01 22:20:05 -04:00
bunnei
62860dc0b0 Merge pull request #2301 from FearlessTobi/remove-amiibo-setting
core/yuzu: Remove enable_nfc setting
2019-04-01 15:02:08 -04:00
bunnei
ffc72c8f15 Merge pull request #2283 from FearlessTobi/port-4517
Port citra-emu/citra#4517 & citra-emu/citra#4686: Changes to macOS buildscripts
2019-04-01 14:59:44 -04:00
bunnei
e0eee250bb Merge pull request #2312 from lioncash/locks
general: Use deduction guides for std::lock_guard and std::unique_lock
2019-04-01 14:36:24 -04:00
Lioncash
781ab8407b general: Use deduction guides for std::lock_guard and std::unique_lock
Since C++17, the introduction of deduction guides for locking facilities
means that we no longer need to hardcode the mutex type into the locks
themselves, making it easier to switch mutex types, should it ever be
necessary in the future.
2019-04-01 12:53:47 -04:00
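
For illustration, the change boils down to letting class template argument deduction pick the mutex type (standard C++17 behavior, shown here with a plain std::mutex):

#include <mutex>

std::mutex mutex;

void OldStyle() {
    std::lock_guard<std::mutex> lock(mutex); // pre-C++17: mutex type spelled out
}

void NewStyle() {
    std::lock_guard lock{mutex}; // C++17: deduced as std::lock_guard<std::mutex>
}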
bunnei
d9b7bc4474 Merge pull request #2304 from lioncash/memsize
kernel/process: Report total physical memory used to svcGetInfo slightly better
2019-03-30 20:11:17 -04:00
bunnei
a89266bc5e Merge pull request #2303 from lioncash/thread
common/thread: Remove unused functions
2019-03-30 20:10:32 -04:00
bunnei
1960164055 Merge pull request #2297 from lioncash/reorder
video_core: Amend constructor initializer list order where applicable
2019-03-30 20:00:26 -04:00
bunnei
e199d1e14f Merge pull request #2298 from lioncash/variable
video_core/{gl_rasterizer, gpu_thread}: Remove unused class variables where applicable
2019-03-30 19:59:45 -04:00
bunnei
ba3d95e550 Merge pull request #2308 from lioncash/deduction
kernel/scheduler: Minor tidying up
2019-03-30 19:59:10 -04:00
bunnei
f911d1f685 Merge pull request #2307 from lioncash/regnames
service/fatal: Name FatalInfo structure members
2019-03-30 19:57:21 -04:00
Lioncash
824b8e4086 kernel/scheduler: Remove unused parameter to AddThread()
This was made unused in b404fcdf14, but
the parameter itself wasn't removed.
2019-03-30 05:29:33 -04:00
Lioncash
cb805f45ae kernel/scheduler: Use deduction guides on mutex locks
Since C++17, we no longer need to explicitly specify the type of the
mutex within the lock_guard. The type system can now deduce these with
deduction guides.
2019-03-30 05:28:43 -04:00
Lioncash
4b33a346ed service/fatal: Mark local variables as const where applicable 2019-03-30 03:06:23 -04:00
Lioncash
11505d3d9f service/fatal: Remove unnecessary semicolon
Resolves a -Wextra-semi warning.
2019-03-30 03:04:16 -04:00
Lioncash
cc737e5832 service/fatal: Name FatalInfo structure members
Based on RE, most of these structure members are register values, which
makes sense, given this service is used to convey fatal errors.

One member indicates the program entry point address, one is a set of
bit flags used to determine which registers to print, and one member
indicates the architecture type.

The only member that still isn't determined is the final member within
the data structure.
2019-03-30 03:01:20 -04:00
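
A hedged sketch of the member categories the message describes; the field names, ordering, and widths below are illustrative assumptions, not the reverse-engineered layout itself:

#include <array>
#include <cstdint>

// Illustrative only: groups the members by the roles described above.
struct FatalInfo {
    std::array<std::uint64_t, 31> registers; // register values (most members)
    std::uint64_t set_flags;                 // bit flags: which registers to print
    std::uint64_t program_entry_point;       // entry point address of the program
    std::uint32_t architecture;              // architecture type
    std::uint32_t unknown;                   // final member, still undetermined
};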
Lioncash
394095438a common/thread: Remove unused functions
Many of these functions are carried over from Dolphin (where they aren't
used anymore). Given these have no use (and we really shouldn't be
screwing around with OS-specific thread scheduler handling from the
emulator), they can be removed.

The function for setting the thread name is left, however, since it can
be useful for debugging.
2019-03-29 13:26:21 -04:00
fearlessTobi
ff7e6a42c1 core/yuzu: Remove enable_nfc setting
This was initially added to prevent problems from stubbed/unimplemented NFC services, but since we never encountered any such problems and the setting is only used in a deprecated function anyway, we can remove it to reduce clutter in the settings.
2019-03-29 15:02:28 +01:00
Lioncash
3a846aa80f kernel/process: Report total physical memory used to svcGetInfo
Reports the (mostly) correct size through svcGetInfo now for queries to
total used physical memory. This still doesn't correctly handle memory
allocated via svcMapPhysicalMemory, however, we don't currently handle
that case anyways.
2019-03-28 22:59:20 -04:00
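
Together with the two commits below that store the code memory size and the main thread stack size, the reported figure is plausibly the sum of those plus the current heap size; a hedged sketch (parameter names are assumptions):

#include <cstdint>

// Hedged sketch of the accounting described above.
std::uint64_t TotalPhysicalMemoryUsed(std::uint64_t code_memory_size,
                                      std::uint64_t main_thread_stack_size,
                                      std::uint64_t current_heap_size) {
    // svcMapPhysicalMemory allocations are not yet accounted for (see above).
    return code_memory_size + main_thread_stack_size + current_heap_size;
}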
Lioncash
2289e895aa kernel/process: Store the total size of the code memory loaded
This will be necessary to properly report the used memory size in
svcGetInfo.
2019-03-28 22:51:17 -04:00
bunnei
f770c17d01 Merge pull request #2266 from FernandoS27/arbitration
Kernel: Fixes to Arbitration and SignalProcessWideKey Management
2019-03-28 21:42:24 -04:00
bunnei
b404fcdf14 Merge pull request #2265 from FernandoS27/multilevelqueue
Replace old Thread Queue for a new Multi Level Queue
2019-03-28 21:41:40 -04:00
Lioncash
5d4ab5ec2f kernel/process: Store the main thread stack size to a data member
This will be necessary in order to properly report memory usage within
svcGetInfo.
2019-03-28 18:45:06 -04:00
Lioncash
427f1e3e3d kernel/process: Make Run's stack size parameter a u64
This will make operating with the process-related SVC commands much
nicer in the future (the parameter representing the stack size in
svcStartProcess is a 64-bit value).
2019-03-28 18:26:12 -04:00
Lioncash
2aca7b9e1e kernel/process: Ensure that given stack size is always page-aligned
The kernel always makes sure that the given stack size is aligned to
page boundaries.
2019-03-28 18:25:00 -04:00
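
For illustration, rounding a size up to a page boundary (the Switch uses 4 KiB pages; yuzu's Common::AlignUp performs the same computation):

#include <cstdint>

constexpr std::uint64_t PageSize = 0x1000; // 4 KiB

constexpr std::uint64_t AlignUpToPage(std::uint64_t size) {
    return (size + PageSize - 1) & ~(PageSize - 1);
}

static_assert(AlignUpToPage(0x1234) == 0x2000, "rounds up to the next page");
static_assert(AlignUpToPage(0x2000) == 0x2000, "aligned sizes are unchanged");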
bunnei
16dc3a1dd5 Merge pull request #2284 from lioncash/heap-alloc
kernel/vm_manager: Unify heap allocation/freeing functions
2019-03-28 17:56:49 -04:00
bunnei
76f024865d Merge pull request #2296 from lioncash/override
video_core: Add missing override specifiers
2019-03-28 17:54:51 -04:00
bunnei
a09d8cc8a2 Merge pull request #2295 from lioncash/typo
video_core/gpu: Amend typo in GPU member variable name
2019-03-28 17:54:20 -04:00
Fernando Sahmkow
db42bcb306 Fixes and corrections to formatting. 2019-03-27 14:49:43 -04:00
Fernando Sahmkow
f35e09fe0d Fixes to multilevelqueue's iterator. 2019-03-27 14:34:33 -04:00
Fernando Sahmkow
dde0814837 Use MultiLevelQueue instead of old ThreadQueueList 2019-03-27 14:34:32 -04:00
Fernando Sahmkow
9dbba9240b Add MultiLevelQueue Tests 2019-03-27 14:34:31 -04:00
Fernando Sahmkow
3bc815a5dc Implement intrinsics CountTrailingZeroes and test it. 2019-03-27 14:34:29 -04:00
Fernando Sahmkow
522957f9f3 Implement a MultiLevelQueue 2019-03-27 14:33:44 -04:00
Lioncash
947d364dba gpu_thread: Remove unused dma_pusher class member variable from ThreadManager
The pusher instance is only ever used in the constructor of the
ThreadManager for creating the thread that the ThreadManager instance
contains. Aside from that, the member is unused, so it can be removed.
2019-03-27 12:51:21 -04:00
Lioncash
e2131f7310 gl_rasterizer: Remove unused reference member variable from RasterizerOpenGL
This member variable is no longer being used, so it can be removed,
removing a dependency on EmuWindow from the rasterizer's interface.
2019-03-27 12:45:59 -04:00
Lioncash
a5fa4b311e video_core: Amend constructor initializer list order where applicable
Specifies the members in the same order that initialization would take
place in.

This also silences -Wreorder warnings.
2019-03-27 12:37:53 -04:00
Lioncash
bbe700359d video_core: Add missing override specifiers
Ensures that the signatures will always match with the base class.

Also silences a few compilation warnings.
2019-03-27 12:24:52 -04:00
Lioncash
e36f1a5ba9 video_core/gpu: Amend typo in GPU member variable name
smaphore -> semaphore
2019-03-27 12:12:57 -04:00
bunnei
47f2405ab1 Merge pull request #2285 from lioncash/unused-struct
kernel/process: Remove unused AddressMapping struct
2019-03-26 11:17:03 -04:00
bunnei
595511876e Merge pull request #2287 from lioncash/coretiming-cb
core/core_timing: Make callback parameters consistent
2019-03-25 21:06:33 -04:00
bunnei
8a24a804c5 Merge pull request #2286 from lioncash/fwd
kernel/kernel: Remove unnecessary forward declaration
2019-03-25 21:05:33 -04:00
bunnei
b93a8a368f Merge pull request #2288 from lioncash/linkage
core/cheat_engine: Make MemoryReadImpl and MemoryWriteImpl internally linked
2019-03-25 21:02:25 -04:00
Lioncash
b26481c94b core/cheat_engine: Make MemoryReadImpl and MemoryWriteImpl internally linked
These don't need to be visible outside of the translation unit, so they
can be enclosed within an anonymous namespace.
2019-03-24 18:34:42 -04:00
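
For illustration, the anonymous-namespace pattern gives such helpers internal linkage (standard C++; the names below are placeholders):

namespace {
// Internal linkage: these cannot be referenced from, or collide with
// identically-named symbols in, any other translation unit.
int call_count = 0;

void HelperImpl() {
    ++call_count;
}
} // namespace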
Lioncash
c5d41fd812 core/core_timing: Make callback parameters consistent
In some cases, our callbacks were using s64 as a parameter, and in other
cases, they were using an int, which is inconsistent.

To make all callbacks consistent, we can just use an s64 as the type for
late cycles, given it gets rid of the need to cast internally.

While we're at it, also resolve some signed/unsigned conversions that
were occurring related to the callback registration.
2019-03-24 18:12:17 -04:00
Lioncash
bd7ec1a749 kernel/kernel: Remove unnecessary forward declaration
This is no longer necessary, as ResultVal isn't used anywhere in the
header.
2019-03-24 17:48:54 -04:00
Lioncash
7c4bc7b883 kernel/process: Remove unused AddressMapping struct
Another leftover from citra that's now no longer necessary.
2019-03-24 17:40:11 -04:00
Lioncash
1e92ba2785 kernel/vm_manager: Handle shrinking of the heap size within SetHeapSize()
One behavior that we weren't handling properly in our heap allocation
process was the ability for the heap to be shrunk down in size if a
larger size was previously requested.

This adds the basic behavior to do so and also gets rid of HeapFree, as
it's no longer necessary now that we have allocations and deallocations
going through the same API function.

While we're at it, fully document the behavior that this function
performs.
2019-03-24 17:08:30 -04:00
Lioncash
99a163478b kernel/vm_manager: Rename HeapAllocate to SetHeapSize
Makes it more obvious that this function is intending to stand in for
the actual supervisor call itself, and not acting as a general heap
allocation function.

Also the following change will merge the freeing behavior of HeapFree
into this function, so leaving it as HeapAllocate would be misleading.
2019-03-24 17:08:30 -04:00
Lioncash
abdb81ccaf kernel/vm_manager: Handle case of identical calls to HeapAllocate
In cases where HeapAllocate is called with the same size of the current
heap, we can simply do nothing and return successfully.

This avoids doing work where we otherwise don't have to. This is also
what the kernel itself does in this scenario.
2019-03-24 17:08:30 -04:00
Lioncash
9f63acac0f kernel/vm_manager: Remove unused class variables
Over time these have fallen out of use due to refactoring, so these can
be removed.
2019-03-24 17:08:30 -04:00
Lioncash
52980df1aa kernel/vm_manager: Remove unnecessary heap_used data member
This isn't required anymore, as all the kernel ever queries is the size
of the current heap, not the total usage of it.
2019-03-24 17:08:16 -04:00
Lioncash
586cab6172 kernel/vm_manager: Tidy up heap allocation code
Another holdover from citra that can be tossed out is the notion of the
heap needing to be allocated in different addresses. On the switch, the
base address of the heap will always be managed by the memory allocator
in the kernel, so this doesn't need to be specified in the function's
interface itself.

The heap on the switch is always allocated with read/write permissions,
so we don't need to add specifying the memory permissions as part of the
heap allocation itself either.

This also corrects the error code returned from within the function.
If the size of the heap is larger than the entire heap region, then the
kernel will report an out of memory condition.
2019-03-24 16:17:31 -04:00
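
Pulling the heap-related commits above together (shrink handling, the identical-size early return, and the out-of-memory error), a hedged sketch of the resulting SetHeapSize control flow; the state struct and helpers are illustrative assumptions, not the exact implementation:

#include <cstdint>

struct HeapState {
    std::uint64_t region_begin; // fixed by the kernel's memory allocator
    std::uint64_t region_size;
    std::uint64_t current_size;
};

enum class HeapResult { Success, OutOfMemory };

HeapResult SetHeapSize(HeapState& heap, std::uint64_t size) {
    if (size > heap.region_size) {
        return HeapResult::OutOfMemory; // request exceeds the entire heap region
    }
    if (size == heap.current_size) {
        return HeapResult::Success; // identical request: nothing to do
    }
    // Shrinks unmap the tail of the heap; growth maps new read/write pages.
    // (Mapping/unmapping elided; this sketch only tracks the size.)
    heap.current_size = size;
    return HeapResult::Success;
}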
bunnei
3f74518e19 Merge pull request #2232 from lioncash/transfer-memory
core/hle/kernel: Split transfer memory handling out into its own class
2019-03-24 16:00:23 -04:00
bunnei
1665b70cc6 Merge pull request #2221 from DarkLordZach/firmware-version
set_sys: Implement GetFirmwareVersion(2) for libnx hosversion
2019-03-23 13:48:29 -04:00
bunnei
f08db7295a Merge pull request #2253 from lioncash/flags
Migrate off directly modifying CMAKE_* compilation-related flags
2019-03-23 13:46:53 -04:00
bunnei
6af322a347 Merge pull request #2280 from lioncash/nso
loader/nso: Minor refactoring
2019-03-23 13:46:09 -04:00
Alex James
a5dbda3f76 travis/macos: Use macpack to bundle dependencies
This appears to properly handle the ffmpeg libraries that dylibbundler
failed to patch.
2019-03-23 01:37:38 +01:00
MerryMage
2bcebcff2a travis: Simplify macos/upload.sh 2019-03-23 01:33:53 +01:00
Lioncash
c0a01f3adc kernel/codeset: Make CodeSet's memory data member a regular std::vector
The use of a shared_ptr is an implementation detail of the VMManager
itself when mapping memory. Because of that, we shouldn't require all
users of the CodeSet to have to allocate the shared_ptr ahead of time.
It's intended that CodeSet simply pass in the required direct data, and
that the memory manager takes care of it from that point on.

This means we just do the shared pointer allocation in a single place,
when loading modules, as opposed to in each loader.
2019-03-22 18:43:46 -04:00
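
A hedged sketch of the before/after shape described here; the loader-side function name is illustrative:

#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

struct CodeSet {
    std::vector<std::uint8_t> memory; // was: std::shared_ptr<std::vector<u8>>
};

// Hypothetical single point where the shared_ptr is now created: module load.
void LoadModule(CodeSet code_set) {
    auto backing = std::make_shared<std::vector<std::uint8_t>>(std::move(code_set.memory));
    // ...hand `backing` to the memory manager, which keeps it alive while mapped...
}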
bunnei
819dd93257 Merge pull request #2279 from lioncash/cheat-global
file_sys/cheat_engine: Remove use of global system accessors
2019-03-22 18:41:44 -04:00
bunnei
e5893db3e6 Merge pull request #2256 from bunnei/gpu-vmm
gpu: Rewrite MemoryManager based on the VMManager implementation.
2019-03-22 18:41:12 -04:00
bunnei
a7157fe27d Merge pull request #2277 from bunnei/fix-smo-transitions
Revert "Devirtualize Register/Unregister and use a wrapper instead."
2019-03-22 18:40:53 -04:00
Lioncash
f3297d8cd1 loader/nso: Place translation unit specific functions into an anonymous namespace
Makes it impossible to indirectly violate the ODR in some other
translation unit due to these existing.
2019-03-22 15:25:53 -04:00
Lioncash
733cf179b8 file_sys/cheat_engine: Silence truncation and sign-conversion warnings 2019-03-22 14:43:41 -04:00
Lioncash
540235bb05 file_sys/cheat_engine: Remove use of global system accessors
Instead, pass in the core timing instance and make the dependency
explicit in the interface.
2019-03-22 14:43:37 -04:00
Lioncash
611f4666fd loader/nso: Clean up use of magic constants
Now that the NSO header has the proper size, we can just use sizeof on
it instead of having magic constants.
2019-03-22 14:39:17 -04:00
Lioncash
1cf90f4570 file_sys/patch_manager: Deduplicate NSO header
This source file was utilizing its own version of the NSO header.
Instead of keeping this around, we can have the patch manager also use
the version of the header that we have defined in loader/nso.h
2019-03-22 14:39:10 -04:00
Lioncash
90e27ea003 loader/nso: Fix definition of the NSO header struct
The total struct itself is 0x100 (256) bytes in size, so we should be
providing that amount of data.

Without the data, this can result in omitted data from the final loaded
NSO file.
2019-03-22 14:26:58 -04:00
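
A size assertion makes this kind of regression impossible to reintroduce silently; a sketch with a stand-in struct (the real NSOHeader's fields are not reproduced here):

#include <array>
#include <cstdint>

struct NSOHeader {
    std::array<std::uint8_t, 0x100> fields; // stand-in for the actual members
};
static_assert(sizeof(NSOHeader) == 0x100, "NSOHeader has incorrect size.");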
Lioncash
ee49e1fcb6 file_sys/patch_manager: Remove two magic values
These correspond to the NSOBuildHeader.
2019-03-22 14:17:50 -04:00
bunnei
7b6d516faa Merge pull request #2234 from lioncash/mutex
core/hle/kernel: Make Mutex a per-process class.
2019-03-21 22:18:36 -04:00
bunnei
b78e7b3454 Merge pull request #2274 from lioncash/include
core/memory: Remove unnecessary includes
2019-03-21 22:14:27 -04:00
bunnei
d0dddb3e9d Revert "Devirtualize Register/Unregister and use a wrapper instead."
- Fixes graphical issues from transitions in Super Mario Odyssey.
2019-03-21 21:56:56 -04:00
bunnei
4d95adcac5 Merge pull request #2275 from lioncash/memflags
kernel/vm_manager: Amend flag value for code data
2019-03-21 21:43:15 -04:00
bunnei
e703772c83 Merge pull request #2276 from lioncash/am
service/am: Add function table for IDebugFunctions
2019-03-21 21:42:17 -04:00
bunnei
639f0c524d Merge pull request #1933 from DarkLordZach/cheat-engine
file_sys: Implement parser and interpreter for game memory cheats
2019-03-21 21:41:59 -04:00
Lioncash
76f27d1f44 service/am: Add function table for IDebugFunctions
We already have the service related stuff set up for this, however, it's
missing the function table.
2019-03-21 15:58:03 -04:00
Lioncash
18918f5f2f kernel/vm_manager: Rename CodeStatic/CodeMutable to Code and CodeData respectively
Makes it more evident that one is for actual code and one is for actual
data. Mutable and static are less than ideal terms here, because
read-only data is technically not mutable, but we were mapping it with
that label.
2019-03-21 11:43:35 -04:00
Lioncash
56c80a2a21 kernel/vm_manager: Amend flag values for CodeMutable
This should actually be using the data flags, rather than the code
flags.
2019-03-21 11:23:14 -04:00
Lioncash
c221308a66 core/memory: Remove unnecessary includes
In 93da8e0abf, the page table construct
was moved to the common library (which utilized these inclusions). Since
the move, nothing requires these headers to be included within the
memory header.
2019-03-21 09:48:54 -04:00
bunnei
2117edd0f8 memory_manager: Cleanup FindFreeRegion. 2019-03-20 23:12:28 -04:00
bunnei
5a5fccaa23 memory_manager: Use Common::AlignUp in public interface as needed. 2019-03-20 22:58:49 -04:00
bunnei
72837e4b3d memory_manager: Bug fixes and further cleanup. 2019-03-20 22:36:03 -04:00
bunnei
3ae0de9b53 memory: Check that core is powered on before attempting to use GPU.
- GPU will be released on shutdown, before pages are unmapped.
- On subsequent runs, current_page_table will not be nullptr, but the GPU might not be valid yet.
2019-03-20 22:36:03 -04:00
bunnei
19330f45d3 maxwell_dma: Check for valid source and destination before copy.
- Avoid a crash in Octopath Traveler.
2019-03-20 22:36:03 -04:00
bunnei
197dcf0b5e memory_manager: Add protections for invalid GPU addresses.
- Avoid a crash in Xenoblade Chronicles 2.
2019-03-20 22:36:03 -04:00
bunnei
21eb4cfa7f gl_rasterizer_cache: Check that backing memory is valid before creating a surface.
- Fixes a crash in Puyo Puyo Tetris.
2019-03-20 22:36:02 -04:00
bunnei
22d3dfbcd4 gpu: Rewrite virtual memory manager using PageTable. 2019-03-20 22:36:02 -04:00
bunnei
241563d15c gpu: Move GPUVAddr definition to common_types. 2019-03-20 22:36:02 -04:00
Fernando Sahmkow
9c7319a4d4 Fix small bug that kept a thread as a condvar thread after being signalled. 2019-03-19 22:43:13 -04:00
Fernando Sahmkow
acbdfdae64 Add CondVar Thread State. 2019-03-19 20:32:47 -04:00
Fernando Sahmkow
774f139e65 Small fixes to address_arbiter to better match the IDB. 2019-03-19 20:32:46 -04:00
Lioncash
e6612d6d8d CMakeLists: Move off of modifying CMAKE_*-related flags
Modifying CMAKE_* related flags directly applies those changes to every
single CMake target. This includes even the targets we have in the
externals directory.

So, if we ever increased our warning levels, or enabled particular ones,
or enabled any other compilation setting, then this would apply to
externals as well, which is often not desirable.

This makes our compilation flag setup less error prone by only applying
our settings to our targets and leaving the externals alone entirely.

This also means we don't end up clobbering any provided flags on the
command line either, allowing users to specifically use the flags they
want.
2019-03-17 06:55:24 -04:00
Lioncash
13bc74e957 CMakeLists: Move compilation flags into the src directory
We generally shouldn't be hijacking CMAKE_CXX_FLAGS, etc as a means to
append flags to the targets, since this adds the compilation flags to
everything, including our externals, which can result in weird issues
and makes the build hierarchy fragile.

Instead, we want to just apply these compilation flags to our targets,
and let those managing external libraries to properly specify their
compilation flags.

This also results in us not getting as many warnings, as we don't raise
the warning level on every external target.
2019-03-17 01:49:09 -04:00
Lioncash
d71cad6ed0 core/hle/kernel/mutex: Remove usages of global system accessors
Removes the use of global system accessors, and instead uses the
explicit interface provided.
2019-03-14 20:55:52 -04:00
Lioncash
555cd26ec2 core/hle/kernel: Make Mutex a per-process class.
Makes it an instantiable class like it is in the actual kernel. This
will also allow removing reliance on global accessors in a following
change, now that we can encapsulate a reference to the system instance
in the class.
2019-03-14 20:55:52 -04:00
Lioncash
5379063108 core/hle/kernel/svc: Implement svcUnmapTransferMemory
Similarly, like svcMapTransferMemory, we can also implement
svcUnmapTransferMemory fairly trivially as well.
2019-03-13 06:04:49 -04:00
Lioncash
567134f874 core/hle/kernel/svc: Implement svcMapTransferMemory
Now that transfer memory handling is separated from shared memory, we
can implement svcMapTransferMemory pretty trivially.
2019-03-13 06:04:49 -04:00
Lioncash
cb198d7985 core/hle/kernel: Split transfer memory handling out into its own class
Within the kernel, shared memory and transfer memory facilities exist as
completely different kernel objects. They also have different validity
checking as well. Therefore, we shouldn't be treating the two as the
same kind of memory.

They also differ in terms of their behavioral aspect as well. Shared
memory is intended for sharing memory between processes, while transfer
memory is intended to be for transferring memory to other processes.

This breaks out the handling for transfer memory into its own class and
treats it as its own kernel object. This is also important when we
consider resource limits as well. Particularly because transfer memory
is limited by the resource limit value set for it.

While we currently don't handle resource limit testing against objects
yet (but we do allow setting them), this will make implementing that
behavior much easier in the future, as we don't need to distinguish
between shared memory and transfer memory allocations in the same place.
2019-03-13 06:04:44 -04:00
Zach Hilman
cd2921a047 set_sys: Move constants to anonymous namespace 2019-03-11 11:16:35 -04:00
Zach Hilman
debc7442f2 set_sys: Use official nintendo version string 2019-03-10 19:54:13 -04:00
Zach Hilman
73f2ee5484 system_version: Correct sizes on VectorVfsFile construction 2019-03-10 19:16:17 -04:00
Zach Hilman
597c00698d set_sys: Use correct error codes in GetFirmwareVersion* 2019-03-10 19:09:23 -04:00
Zach Hilman
ed82bb968a set_sys: Implement GetFirmwareVersion(2) for libnx hosversion
Uses the synthesized system archive 9 (SystemVersion) and reports v5.1.0-0.0
2019-03-10 16:51:42 -04:00
Zach Hilman
52ac6419da vm_manager: Remove cheat-specific ranges from VMManager 2019-03-05 10:09:36 -05:00
Zach Hilman
7053546687 core: Add support for registering and controlling ownership of CheatEngine 2019-03-04 18:41:29 -05:00
Zach Hilman
769b346682 cheat_engine: Add parser and interpreter for game cheats 2019-03-04 18:39:58 -05:00
Zach Hilman
c100a4b8d4 loader/nso: Set main code region in VMManager
For ROM directories (and by extension, XCI/NSP/NAX/NCA), this is the NSO named 'main'; for regular NSOs, this is the NSO itself.
2019-03-04 18:39:58 -05:00
Zach Hilman
b952a30555 vm_manager: Add support for storing and getting main code region
Used as root for one region of cheats, set by loader
2019-03-04 18:39:58 -05:00
Zach Hilman
4495bf5706 patch_manager: Display cheats in game list add-ons 2019-03-04 18:39:57 -05:00
Zach Hilman
c5091bfe00 patch_manager: Add support for loading cheats lists
Uses load/<title_id>/<mod_name>/cheats as the root dir; the file name is the first 8 bytes of the build ID in hex (all uppercase or all lowercase).
2019-03-04 18:39:57 -05:00
Zach Hilman
9d1ab766a0 controllers/npad: Add accessor for current press state
Allows frontend features to access pressed buttons as conveniently as possible
2019-03-04 18:39:57 -05:00
123 changed files with 3031 additions and 1908 deletions

View File

@@ -1,5 +1,6 @@
#!/bin/sh -ex
brew update
brew install dylibbundler p7zip qt5 sdl2 ccache
brew install p7zip qt5 sdl2 ccache
brew outdated cmake || brew upgrade cmake
pip3 install macpack

View File

@@ -11,92 +11,19 @@ mkdir "$REV_NAME"
cp build/bin/yuzu-cmd "$REV_NAME"
cp -r build/bin/yuzu.app "$REV_NAME"
# move qt libs into app bundle for deployment
$(brew --prefix)/opt/qt5/bin/macdeployqt "${REV_NAME}/yuzu.app"
# move libs into folder for deployment
macpack "${REV_NAME}/yuzu.app/Contents/MacOS/yuzu" -d "../Frameworks"
# move qt frameworks into app bundle for deployment
$(brew --prefix)/opt/qt5/bin/macdeployqt "${REV_NAME}/yuzu.app" -executable="${REV_NAME}/yuzu.app/Contents/MacOS/yuzu"
# move SDL2 libs into folder for deployment
dylibbundler -b -x "${REV_NAME}/yuzu-cmd" -cd -d "${REV_NAME}/libs" -p "@executable_path/libs/"
# Make the changes to make the yuzu app standalone (i.e. not dependent on the current brew installation).
# To do this, the absolute references to each and every QT framework must be re-written to point to the local frameworks
# (in the Contents/Frameworks folder).
# The "install_name_tool" is used to do so.
# Coreutils is a hack to coerce Homebrew to point to the absolute Cellar path (symlink dereferenced). i.e:
# ls -l /usr/local/opt/qt5:: /usr/local/opt/qt5 -> ../Cellar/qt5/5.6.1-1
# grealpath ../Cellar/qt5/5.6.1-1:: /usr/local/Cellar/qt5/5.6.1-1
brew install coreutils || brew upgrade coreutils || true
REV_NAME_ALT=$REV_NAME/
# grealpath is located in coreutils, there is no "realpath" for OS X :(
QT_BREWS_PATH=$(grealpath "$(brew --prefix qt5)")
BREW_PATH=$(brew --prefix)
QT_VERSION_NUM=5
$BREW_PATH/opt/qt5/bin/macdeployqt "${REV_NAME_ALT}yuzu.app" \
-executable="${REV_NAME_ALT}yuzu.app/Contents/MacOS/yuzu"
# These are the files that macdeployqt packed into Contents/Frameworks/ - we don't want those, so we replace them.
declare -a macos_libs=("QtCore" "QtWidgets" "QtGui" "QtOpenGL" "QtPrintSupport")
for macos_lib in "${macos_libs[@]}"
do
SC_FRAMEWORK_PART=$macos_lib.framework/Versions/$QT_VERSION_NUM/$macos_lib
# Replace macdeployqt versions of the Frameworks with our own (from /usr/local/opt/qt5/lib/)
cp "$BREW_PATH/opt/qt5/lib/$SC_FRAMEWORK_PART" "${REV_NAME_ALT}yuzu.app/Contents/Frameworks/$SC_FRAMEWORK_PART"
# Replace references within the embedded Framework files with "internal" versions.
for macos_lib2 in "${macos_libs[@]}"
do
# Since brew references both the non-symlinked and symlink paths of QT5, it needs to be duplicated.
# /usr/local/Cellar/qt5/5.6.1-1/lib and /usr/local/opt/qt5/lib both resolve to the same files.
# So the two lines below are effectively duplicates when resolved as a path, but as strings, they aren't.
RM_FRAMEWORK_PART=$macos_lib2.framework/Versions/$QT_VERSION_NUM/$macos_lib2
install_name_tool -change \
$QT_BREWS_PATH/lib/$RM_FRAMEWORK_PART \
@executable_path/../Frameworks/$RM_FRAMEWORK_PART \
"${REV_NAME_ALT}yuzu.app/Contents/Frameworks/$SC_FRAMEWORK_PART"
install_name_tool -change \
"$BREW_PATH/opt/qt5/lib/$RM_FRAMEWORK_PART" \
@executable_path/../Frameworks/$RM_FRAMEWORK_PART \
"${REV_NAME_ALT}yuzu.app/Contents/Frameworks/$SC_FRAMEWORK_PART"
done
done
# Handles `This application failed to start because it could not find or load the Qt platform plugin "cocoa"`
# Which manifests itself as:
# "Exception Type: EXC_CRASH (SIGABRT) | Exception Codes: 0x0000000000000000, 0x0000000000000000 | Exception Note: EXC_CORPSE_NOTIFY"
# There may be more dylibs needed to be fixed...
declare -a macos_plugins=("Plugins/platforms/libqcocoa.dylib")
for macos_lib in "${macos_plugins[@]}"
do
install_name_tool -id @executable_path/../$macos_lib "${REV_NAME_ALT}yuzu.app/Contents/$macos_lib"
for macos_lib2 in "${macos_libs[@]}"
do
RM_FRAMEWORK_PART=$macos_lib2.framework/Versions/$QT_VERSION_NUM/$macos_lib2
install_name_tool -change \
$QT_BREWS_PATH/lib/$RM_FRAMEWORK_PART \
@executable_path/../Frameworks/$RM_FRAMEWORK_PART \
"${REV_NAME_ALT}yuzu.app/Contents/$macos_lib"
install_name_tool -change \
"$BREW_PATH/opt/qt5/lib/$RM_FRAMEWORK_PART" \
@executable_path/../Frameworks/$RM_FRAMEWORK_PART \
"${REV_NAME_ALT}yuzu.app/Contents/$macos_lib"
done
done
for macos_lib in "${macos_libs[@]}"
do
# Debugging info for Travis-CI
otool -L "${REV_NAME_ALT}yuzu.app/Contents/Frameworks/$macos_lib.framework/Versions/$QT_VERSION_NUM/$macos_lib"
done
# move libs into folder for deployment
macpack "${REV_NAME}/yuzu-cmd" -d "libs"
# Make the yuzu.app application launch a debugging terminal.
# Store away the actual binary
mv ${REV_NAME_ALT}yuzu.app/Contents/MacOS/yuzu ${REV_NAME_ALT}yuzu.app/Contents/MacOS/yuzu-bin
mv ${REV_NAME}/yuzu.app/Contents/MacOS/yuzu ${REV_NAME}/yuzu.app/Contents/MacOS/yuzu-bin
cat > ${REV_NAME_ALT}yuzu.app/Contents/MacOS/yuzu <<EOL
cat > ${REV_NAME}/yuzu.app/Contents/MacOS/yuzu <<EOL
#!/usr/bin/env bash
cd "\`dirname "\$0"\`"
chmod +x yuzu-bin
@@ -105,6 +32,9 @@ EOL
# Content that will serve as the launching script for yuzu (within the .app folder)
# Make the launching script executable
chmod +x ${REV_NAME_ALT}yuzu.app/Contents/MacOS/yuzu
chmod +x ${REV_NAME}/yuzu.app/Contents/MacOS/yuzu
# Verify loader instructions
find "$REV_NAME" -exec otool -L {} \;
. .travis/common/post-upload.sh

View File

@@ -104,78 +104,12 @@ endif()
message(STATUS "Target architecture: ${ARCHITECTURE}")
# Configure compilation flags
# Configure C++ standard
# ===========================
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if (NOT MSVC)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-attributes")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}")
if (MINGW)
add_definitions(-DMINGW_HAS_SECURE_API)
if (MINGW_STATIC_BUILD)
add_definitions(-DQT_STATICPLUGIN)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -static")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static")
endif()
endif()
else()
# Silence "deprecation" warnings
add_definitions(/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE /D_SCL_SECURE_NO_WARNINGS)
# Avoid windows.h junk
add_definitions(/DNOMINMAX)
# Avoid windows.h from including some usually unused libs like winsocks.h, since this might cause some redefinition errors.
add_definitions(/DWIN32_LEAN_AND_MEAN)
set(CMAKE_CONFIGURATION_TYPES Debug Release CACHE STRING "" FORCE)
# Tweak optimization settings
# As far as I can tell, there's no way to override the CMake defaults while leaving user
# changes intact, so we'll just clobber everything and say sorry.
message(STATUS "Cache compiler flags ignored, please edit CMakeLists.txt to change the flags.")
# /W3 - Level 3 warnings
# /MP - Multi-threaded compilation
# /Zi - Output debugging information
# /Zo - enhanced debug info for optimized builds
# /permissive- - enables stricter C++ standards conformance checks
set(CMAKE_C_FLAGS "/W3 /MP /Zi /Zo /permissive-" CACHE STRING "" FORCE)
# /EHsc - C++-only exception handling semantics
# /Zc:throwingNew - let codegen assume `operator new` will never return null
# /Zc:inline - let codegen omit inline functions in object files
set(CMAKE_CXX_FLAGS "${CMAKE_C_FLAGS} /EHsc /std:c++latest /Zc:throwingNew,inline" CACHE STRING "" FORCE)
# /MDd - Multi-threaded Debug Runtime DLL
set(CMAKE_C_FLAGS_DEBUG "/Od /MDd" CACHE STRING "" FORCE)
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG}" CACHE STRING "" FORCE)
# /O2 - Optimization level 2
# /GS- - No stack buffer overflow checks
# /MD - Multi-threaded runtime DLL
set(CMAKE_C_FLAGS_RELEASE "/O2 /GS- /MD" CACHE STRING "" FORCE)
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE}" CACHE STRING "" FORCE)
set(CMAKE_EXE_LINKER_FLAGS_DEBUG "/DEBUG /MANIFEST:NO" CACHE STRING "" FORCE)
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "/DEBUG /MANIFEST:NO /INCREMENTAL:NO /OPT:REF,ICF" CACHE STRING "" FORCE)
endif()
# Set file offset size to 64 bits.
#
# On modern Unixes, this is typically already the case. The lone exception is
# glibc, which may default to 32 bits. glibc allows this to be configured
# by setting _FILE_OFFSET_BITS.
if(CMAKE_SYSTEM_NAME STREQUAL "Linux" OR MINGW)
add_definitions(-D_FILE_OFFSET_BITS=64)
endif()
# CMake seems to only define _DEBUG on Windows
set_property(DIRECTORY APPEND PROPERTY
COMPILE_DEFINITIONS $<$<CONFIG:Debug>:_DEBUG> $<$<NOT:$<CONFIG:Debug>>:NDEBUG>)
# System imported libraries
# ======================
@@ -326,25 +260,21 @@ endif()
# Platform-specific library requirements
# ======================================
IF (APPLE)
find_library(COCOA_LIBRARY Cocoa) # Umbrella framework for everything GUI-related
if (APPLE)
# Umbrella framework for everything GUI-related
find_library(COCOA_LIBRARY Cocoa)
set(PLATFORM_LIBRARIES ${COCOA_LIBRARY} ${IOKIT_LIBRARY} ${COREVIDEO_LIBRARY})
if (CMAKE_CXX_COMPILER_ID STREQUAL Clang)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -stdlib=libc++")
endif()
ELSEIF (WIN32)
elseif (WIN32)
# WSAPoll and SHGetKnownFolderPath (AppData/Roaming) didn't exist before WinNT 6.x (Vista)
add_definitions(-D_WIN32_WINNT=0x0600 -DWINVER=0x0600)
set(PLATFORM_LIBRARIES winmm ws2_32)
IF (MINGW)
if (MINGW)
# PSAPI is the Process Status API
set(PLATFORM_LIBRARIES ${PLATFORM_LIBRARIES} psapi imm32 version)
ENDIF (MINGW)
ELSEIF (CMAKE_SYSTEM_NAME MATCHES "^(Linux|kFreeBSD|GNU|SunOS)$")
endif()
elseif (CMAKE_SYSTEM_NAME MATCHES "^(Linux|kFreeBSD|GNU|SunOS)$")
set(PLATFORM_LIBRARIES rt)
ENDIF (APPLE)
endif()
# Setup a custom clang-format target (if clang-format can be found) that will run
# against all the src files. This should be used before making a pull request.

View File

@@ -1,18 +1,79 @@
# Enable modules to include each other's files
include_directories(.)
# CMake seems to only define _DEBUG on Windows
set_property(DIRECTORY APPEND PROPERTY
COMPILE_DEFINITIONS $<$<CONFIG:Debug>:_DEBUG> $<$<NOT:$<CONFIG:Debug>>:NDEBUG>)
# Set compilation flags
if (MSVC)
set(CMAKE_CONFIGURATION_TYPES Debug Release CACHE STRING "" FORCE)
# Silence "deprecation" warnings
add_definitions(-D_CRT_SECURE_NO_WARNINGS -D_CRT_NONSTDC_NO_DEPRECATE -D_SCL_SECURE_NO_WARNINGS)
# Avoid windows.h junk
add_definitions(-DNOMINMAX)
# Avoid windows.h from including some usually unused libs like winsocks.h, since this might cause some redefinition errors.
add_definitions(-DWIN32_LEAN_AND_MEAN)
# /W3 - Level 3 warnings
# /MP - Multi-threaded compilation
# /Zi - Output debugging information
# /Zo - enhanced debug info for optimized builds
# /permissive- - enables stricter C++ standards conformance checks
# /EHsc - C++-only exception handling semantics
# /Zc:throwingNew - let codegen assume `operator new` will never return null
# /Zc:inline - let codegen omit inline functions in object files
add_compile_options(/W3 /MP /Zi /Zo /permissive- /EHsc /std:c++latest /Zc:throwingNew,inline)
# /GS- - No stack buffer overflow checks
add_compile_options("$<$<CONFIG:Release>:/GS->")
set(CMAKE_EXE_LINKER_FLAGS_DEBUG "/DEBUG /MANIFEST:NO" CACHE STRING "" FORCE)
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "/DEBUG /MANIFEST:NO /INCREMENTAL:NO /OPT:REF,ICF" CACHE STRING "" FORCE)
else()
add_compile_options("-Wno-attributes")
if (APPLE AND CMAKE_CXX_COMPILER_ID STREQUAL Clang)
add_compile_options("-stdlib=libc++")
endif()
# Set file offset size to 64 bits.
#
# On modern Unixes, this is typically already the case. The lone exception is
# glibc, which may default to 32 bits. glibc allows this to be configured
# by setting _FILE_OFFSET_BITS.
if(CMAKE_SYSTEM_NAME STREQUAL "Linux" OR MINGW)
add_definitions(-D_FILE_OFFSET_BITS=64)
endif()
if (MINGW)
add_definitions(-DMINGW_HAS_SECURE_API)
if (MINGW_STATIC_BUILD)
add_definitions(-DQT_STATICPLUGIN)
add_compile_options("-static")
endif()
endif()
endif()
add_subdirectory(common)
add_subdirectory(core)
add_subdirectory(audio_core)
add_subdirectory(video_core)
add_subdirectory(input_common)
add_subdirectory(tests)
if (ENABLE_SDL2)
add_subdirectory(yuzu_cmd)
endif()
if (ENABLE_QT)
add_subdirectory(yuzu)
endif()
if (ENABLE_WEB_SERVICE)
add_subdirectory(web_service)
endif()

View File

@@ -38,7 +38,7 @@ Stream::Stream(Core::Timing::CoreTiming& core_timing, u32 sample_rate, Format fo
sink_stream{sink_stream}, core_timing{core_timing}, name{std::move(name_)} {
release_event = core_timing.RegisterEvent(
name, [this](u64 userdata, int cycles_late) { ReleaseActiveBuffer(); });
name, [this](u64 userdata, s64 cycles_late) { ReleaseActiveBuffer(); });
}
void Stream::Play() {

View File

@@ -98,6 +98,7 @@ add_library(common STATIC
microprofile.h
microprofileui.h
misc.cpp
multi_level_queue.h
page_table.cpp
page_table.h
param_package.cpp

View File

@@ -58,4 +58,43 @@ inline u64 CountLeadingZeroes64(u64 value) {
return __builtin_clzll(value);
}
#endif
#ifdef _MSC_VER
inline u32 CountTrailingZeroes32(u32 value) {
unsigned long trailing_zero = 0;
if (_BitScanForward(&trailing_zero, value) != 0) {
return trailing_zero;
}
return 32;
}
inline u64 CountTrailingZeroes64(u64 value) {
unsigned long trailing_zero = 0;
if (_BitScanForward64(&trailing_zero, value) != 0) {
return trailing_zero;
}
return 64;
}
#else
inline u32 CountTrailingZeroes32(u32 value) {
if (value == 0) {
return 32;
}
return __builtin_ctz(value);
}
inline u64 CountTrailingZeroes64(u64 value) {
if (value == 0) {
return 64;
}
return __builtin_ctzll(value);
}
#endif
} // namespace Common
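
A quick sanity check of the new intrinsics' contract (a zero input returns the bit width):

#include <cassert>
#include "common/bit_util.h"

void Check() {
    assert(Common::CountTrailingZeroes32(0x8) == 3);  // 0b1000 has 3 trailing zeros
    assert(Common::CountTrailingZeroes32(0) == 32);   // zero maps to the bit width
    assert(Common::CountTrailingZeroes64(1ULL << 40) == 40);
}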

View File

@@ -40,10 +40,9 @@ using s64 = std::int64_t; ///< 64-bit signed int
using f32 = float; ///< 32-bit floating point
using f64 = double; ///< 64-bit floating point
// TODO: It would be nice to eventually replace these with strong types that prevent accidental
// conversion between each other.
using VAddr = u64; ///< Represents a pointer in the userspace virtual address space.
using PAddr = u64; ///< Represents a pointer in the ARM11 physical address space.
using VAddr = u64; ///< Represents a pointer in the userspace virtual address space.
using PAddr = u64; ///< Represents a pointer in the ARM11 physical address space.
using GPUVAddr = u64; ///< Represents a pointer in the GPU virtual address space.
using u128 = std::array<std::uint64_t, 2>;
static_assert(sizeof(u128) == 16, "u128 must be 128 bits wide");

View File

@@ -16,22 +16,22 @@ DetachedTasks::DetachedTasks() {
}
void DetachedTasks::WaitForAllTasks() {
std::unique_lock<std::mutex> lock(mutex);
std::unique_lock lock{mutex};
cv.wait(lock, [this]() { return count == 0; });
}
DetachedTasks::~DetachedTasks() {
std::unique_lock<std::mutex> lock(mutex);
std::unique_lock lock{mutex};
ASSERT(count == 0);
instance = nullptr;
}
void DetachedTasks::AddTask(std::function<void()> task) {
std::unique_lock<std::mutex> lock(instance->mutex);
std::unique_lock lock{instance->mutex};
++instance->count;
std::thread([task{std::move(task)}]() {
task();
std::unique_lock<std::mutex> lock(instance->mutex);
std::unique_lock lock{instance->mutex};
--instance->count;
std::notify_all_at_thread_exit(instance->cv, std::move(lock));
})

View File

@@ -46,12 +46,12 @@ public:
}
void AddBackend(std::unique_ptr<Backend> backend) {
std::lock_guard<std::mutex> lock(writing_mutex);
std::lock_guard lock{writing_mutex};
backends.push_back(std::move(backend));
}
void RemoveBackend(std::string_view backend_name) {
std::lock_guard<std::mutex> lock(writing_mutex);
std::lock_guard lock{writing_mutex};
const auto it =
std::remove_if(backends.begin(), backends.end(),
[&backend_name](const auto& i) { return backend_name == i->GetName(); });
@@ -80,7 +80,7 @@ private:
backend_thread = std::thread([&] {
Entry entry;
auto write_logs = [&](Entry& e) {
std::lock_guard<std::mutex> lock(writing_mutex);
std::lock_guard lock{writing_mutex};
for (const auto& backend : backends) {
backend->Write(e);
}

View File

@@ -0,0 +1,337 @@
// Copyright 2019 TuxSH
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <array>
#include <iterator>
#include <list>
#include <utility>
#include "common/bit_util.h"
#include "common/common_types.h"
namespace Common {
/**
* A MultiLevelQueue is a type of priority queue which has the following characteristics:
 * - iterable through each of its elements.
* - back can be obtained.
* - O(1) add, lookup (both front and back)
* - discrete priorities and a max of 64 priorities (limited domain)
 * This type of priority queue is normally used for managing threads within a scheduler
*/
template <typename T, std::size_t Depth>
class MultiLevelQueue {
public:
using value_type = T;
using reference = value_type&;
using const_reference = const value_type&;
using pointer = value_type*;
using const_pointer = const value_type*;
using difference_type = typename std::pointer_traits<pointer>::difference_type;
using size_type = std::size_t;
template <bool is_constant>
class iterator_impl {
public:
using iterator_category = std::bidirectional_iterator_tag;
using value_type = T;
using pointer = std::conditional_t<is_constant, const T*, T*>;
using reference = std::conditional_t<is_constant, const T&, T&>;
using difference_type = typename std::pointer_traits<pointer>::difference_type;
friend bool operator==(const iterator_impl& lhs, const iterator_impl& rhs) {
if (lhs.IsEnd() && rhs.IsEnd())
return true;
return std::tie(lhs.current_priority, lhs.it) == std::tie(rhs.current_priority, rhs.it);
}
friend bool operator!=(const iterator_impl& lhs, const iterator_impl& rhs) {
return !operator==(lhs, rhs);
}
reference operator*() const {
return *it;
}
pointer operator->() const {
return it.operator->();
}
iterator_impl& operator++() {
if (IsEnd()) {
return *this;
}
++it;
if (it == GetEndItForPrio()) {
u64 prios = mlq.used_priorities;
prios &= ~((1ULL << (current_priority + 1)) - 1);
if (prios == 0) {
current_priority = mlq.depth();
} else {
current_priority = CountTrailingZeroes64(prios);
it = GetBeginItForPrio();
}
}
return *this;
}
iterator_impl& operator--() {
if (IsEnd()) {
if (mlq.used_priorities != 0) {
current_priority = 63 - CountLeadingZeroes64(mlq.used_priorities);
it = GetEndItForPrio();
--it;
}
} else if (it == GetBeginItForPrio()) {
u64 prios = mlq.used_priorities;
prios &= (1ULL << current_priority) - 1;
if (prios != 0) {
current_priority = CountTrailingZeroes64(prios);
it = GetEndItForPrio();
--it;
}
} else {
--it;
}
return *this;
}
iterator_impl operator++(int) {
const iterator_impl v{*this};
++(*this);
return v;
}
iterator_impl operator--(int) {
const iterator_impl v{*this};
--(*this);
return v;
}
// allow implicit conversion between const and non-const iterators
iterator_impl(const iterator_impl<false>& other)
: mlq(other.mlq), it(other.it), current_priority(other.current_priority) {}
iterator_impl(const iterator_impl<true>& other)
: mlq(other.mlq), it(other.it), current_priority(other.current_priority) {}
iterator_impl& operator=(const iterator_impl<false>& other) {
mlq = other.mlq;
it = other.it;
current_priority = other.current_priority;
return *this;
}
friend class iterator_impl<true>;
iterator_impl() = default;
private:
friend class MultiLevelQueue;
using container_ref =
std::conditional_t<is_constant, const MultiLevelQueue&, MultiLevelQueue&>;
using list_iterator = std::conditional_t<is_constant, typename std::list<T>::const_iterator,
typename std::list<T>::iterator>;
explicit iterator_impl(container_ref mlq, list_iterator it, u32 current_priority)
: mlq(mlq), it(it), current_priority(current_priority) {}
explicit iterator_impl(container_ref mlq, u32 current_priority)
: mlq(mlq), it(), current_priority(current_priority) {}
bool IsEnd() const {
return current_priority == mlq.depth();
}
list_iterator GetBeginItForPrio() const {
return mlq.levels[current_priority].begin();
}
list_iterator GetEndItForPrio() const {
return mlq.levels[current_priority].end();
}
container_ref mlq;
list_iterator it;
u32 current_priority;
};
using iterator = iterator_impl<false>;
using const_iterator = iterator_impl<true>;
void add(const T& element, u32 priority, bool send_back = true) {
if (send_back)
levels[priority].push_back(element);
else
levels[priority].push_front(element);
used_priorities |= 1ULL << priority;
}
void remove(const T& element, u32 priority) {
auto it = ListIterateTo(levels[priority], element);
if (it == levels[priority].end())
return;
levels[priority].erase(it);
if (levels[priority].empty()) {
used_priorities &= ~(1ULL << priority);
}
}
void adjust(const T& element, u32 old_priority, u32 new_priority, bool adjust_front = false) {
remove(element, old_priority);
add(element, new_priority, !adjust_front);
}
void adjust(const_iterator it, u32 old_priority, u32 new_priority, bool adjust_front = false) {
adjust(*it, old_priority, new_priority, adjust_front);
}
void transfer_to_front(const T& element, u32 priority, MultiLevelQueue& other) {
ListSplice(other.levels[priority], other.levels[priority].begin(), levels[priority],
ListIterateTo(levels[priority], element));
other.used_priorities |= 1ULL << priority;
if (levels[priority].empty()) {
used_priorities &= ~(1ULL << priority);
}
}
void transfer_to_front(const_iterator it, u32 priority, MultiLevelQueue& other) {
transfer_to_front(*it, priority, other);
}
void transfer_to_back(const T& element, u32 priority, MultiLevelQueue& other) {
ListSplice(other.levels[priority], other.levels[priority].end(), levels[priority],
ListIterateTo(levels[priority], element));
other.used_priorities |= 1ULL << priority;
if (levels[priority].empty()) {
used_priorities &= ~(1ULL << priority);
}
}
void transfer_to_back(const_iterator it, u32 priority, MultiLevelQueue& other) {
transfer_to_back(*it, priority, other);
}
void yield(u32 priority, std::size_t n = 1) {
ListShiftForward(levels[priority], n);
}
std::size_t depth() const {
return Depth;
}
std::size_t size(u32 priority) const {
return levels[priority].size();
}
std::size_t size() const {
u64 priorities = used_priorities;
std::size_t size = 0;
while (priorities != 0) {
const u64 current_priority = CountTrailingZeroes64(priorities);
size += levels[current_priority].size();
priorities &= ~(1ULL << current_priority);
}
return size;
}
bool empty() const {
return used_priorities == 0;
}
bool empty(u32 priority) const {
return (used_priorities & (1ULL << priority)) == 0;
}
u32 highest_priority_set(u32 max_priority = 0) const {
const u64 priorities =
max_priority == 0 ? used_priorities : (used_priorities & ~((1ULL << max_priority) - 1));
return priorities == 0 ? Depth : static_cast<u32>(CountTrailingZeroes64(priorities));
}
u32 lowest_priority_set(u32 min_priority = Depth - 1) const {
const u64 priorities = min_priority >= Depth - 1
? used_priorities
: (used_priorities & ((1ULL << (min_priority + 1)) - 1));
return priorities == 0 ? Depth : 63 - CountLeadingZeroes64(priorities);
}
const_iterator cbegin(u32 max_prio = 0) const {
const u32 priority = highest_priority_set(max_prio);
return priority == Depth ? cend()
: const_iterator{*this, levels[priority].cbegin(), priority};
}
const_iterator begin(u32 max_prio = 0) const {
return cbegin(max_prio);
}
iterator begin(u32 max_prio = 0) {
const u32 priority = highest_priority_set(max_prio);
return priority == Depth ? end() : iterator{*this, levels[priority].begin(), priority};
}
const_iterator cend(u32 min_prio = Depth - 1) const {
return min_prio == Depth - 1 ? const_iterator{*this, Depth} : cbegin(min_prio + 1);
}
const_iterator end(u32 min_prio = Depth - 1) const {
return cend(min_prio);
}
iterator end(u32 min_prio = Depth - 1) {
return min_prio == Depth - 1 ? iterator{*this, Depth} : begin(min_prio + 1);
}
T& front(u32 max_priority = 0) {
const u32 priority = highest_priority_set(max_priority);
return levels[priority == Depth ? 0 : priority].front();
}
const T& front(u32 max_priority = 0) const {
const u32 priority = highest_priority_set(max_priority);
return levels[priority == Depth ? 0 : priority].front();
}
T& back(u32 min_priority = Depth - 1) {
const u32 priority = lowest_priority_set(min_priority); // intended
return levels[priority == Depth ? 63 : priority].back();
}
const T& back(u32 min_priority = Depth - 1) const {
const u32 priority = lowest_priority_set(min_priority); // intended
return levels[priority == Depth ? 63 : priority].back();
}
private:
using const_list_iterator = typename std::list<T>::const_iterator;
static void ListShiftForward(std::list<T>& list, const std::size_t shift = 1) {
if (shift >= list.size()) {
return;
}
const auto begin_range = list.begin();
const auto end_range = std::next(begin_range, shift);
list.splice(list.end(), list, begin_range, end_range);
}
static void ListSplice(std::list<T>& in_list, const_list_iterator position,
std::list<T>& out_list, const_list_iterator element) {
in_list.splice(position, out_list, element);
}
static const_list_iterator ListIterateTo(const std::list<T>& list, const T& element) {
auto it = list.cbegin();
while (it != list.cend() && *it != element) {
++it;
}
return it;
}
std::array<std::list<T>, Depth> levels;
u64 used_priorities = 0;
};
} // namespace Common
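
A brief usage sketch of the container above (priorities are numeric, and front() serves the lowest numeric priority that has elements):

#include "common/multi_level_queue.h"

void Example() {
    Common::MultiLevelQueue<int, 64> queue;
    queue.add(42, /*priority=*/3);
    queue.add(7, /*priority=*/1);
    int next = queue.front(); // 7: priority 1 is the highest set priority
    queue.remove(next, /*priority=*/1);
    next = queue.front();     // 42
}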

View File

@@ -16,6 +16,7 @@ void PageTable::Resize(std::size_t address_space_width_in_bits) {
pointers.resize(num_page_table_entries);
attributes.resize(num_page_table_entries);
backing_addr.resize(num_page_table_entries);
// The default is a 39-bit address space, which causes an initial 1GB allocation size. If the
// vector size is subsequently decreased (via resize), the vector might not automatically
@@ -24,6 +25,7 @@ void PageTable::Resize(std::size_t address_space_width_in_bits) {
pointers.shrink_to_fit();
attributes.shrink_to_fit();
backing_addr.shrink_to_fit();
}
} // namespace Common

View File

@@ -21,6 +21,8 @@ enum class PageType : u8 {
RasterizerCachedMemory,
/// Page is mapped to a I/O region. Writing and reading to this page is handled by functions.
Special,
/// Page is allocated for use.
Allocated,
};
struct SpecialRegion {
@@ -66,7 +68,7 @@ struct PageTable {
* Contains MMIO handlers that back memory regions whose entries in the `attribute` vector is
* of type `Special`.
*/
boost::icl::interval_map<VAddr, std::set<SpecialRegion>> special_regions;
boost::icl::interval_map<u64, std::set<SpecialRegion>> special_regions;
/**
* Vector of fine grained page attributes. If it is set to any value other than `Memory`, then
@@ -74,6 +76,8 @@ struct PageTable {
*/
std::vector<PageType> attributes;
std::vector<u64> backing_addr;
const std::size_t page_size_in_bits{};
};

View File

@@ -27,18 +27,6 @@ namespace Common {
#ifdef _MSC_VER
void SetThreadAffinity(std::thread::native_handle_type thread, u32 mask) {
SetThreadAffinityMask(thread, mask);
}
void SetCurrentThreadAffinity(u32 mask) {
SetThreadAffinityMask(GetCurrentThread(), mask);
}
void SwitchCurrentThread() {
SwitchToThread();
}
// Sets the debugger-visible name of the current thread.
// Uses undocumented (actually, it is now documented) trick.
// http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vsdebug/html/vxtsksettingthreadname.asp
@@ -70,31 +58,6 @@ void SetCurrentThreadName(const char* name) {
#else // !MSVC_VER, so must be POSIX threads
void SetThreadAffinity(std::thread::native_handle_type thread, u32 mask) {
#ifdef __APPLE__
thread_policy_set(pthread_mach_thread_np(thread), THREAD_AFFINITY_POLICY, (integer_t*)&mask, 1);
#elif (defined __linux__ || defined __FreeBSD__) && !(defined ANDROID)
cpu_set_t cpu_set;
CPU_ZERO(&cpu_set);
for (int i = 0; i != sizeof(mask) * 8; ++i)
if ((mask >> i) & 1)
CPU_SET(i, &cpu_set);
pthread_setaffinity_np(thread, sizeof(cpu_set), &cpu_set);
#endif
}
void SetCurrentThreadAffinity(u32 mask) {
SetThreadAffinity(pthread_self(), mask);
}
#ifndef _WIN32
void SwitchCurrentThread() {
usleep(1000 * 1);
}
#endif
// MinGW with the POSIX threading model does not support pthread_setname_np
#if !defined(_WIN32) || defined(_MSC_VER)
void SetCurrentThreadName(const char* name) {

View File

@@ -9,14 +9,13 @@
#include <cstddef>
#include <mutex>
#include <thread>
#include "common/common_types.h"
namespace Common {
class Event {
public:
void Set() {
std::lock_guard<std::mutex> lk(mutex);
std::lock_guard lk{mutex};
if (!is_set) {
is_set = true;
condvar.notify_one();
@@ -24,14 +23,14 @@ public:
}
void Wait() {
std::unique_lock<std::mutex> lk(mutex);
std::unique_lock lk{mutex};
condvar.wait(lk, [&] { return is_set; });
is_set = false;
}
template <class Clock, class Duration>
bool WaitUntil(const std::chrono::time_point<Clock, Duration>& time) {
std::unique_lock<std::mutex> lk(mutex);
std::unique_lock lk{mutex};
if (!condvar.wait_until(lk, time, [this] { return is_set; }))
return false;
is_set = false;
@@ -39,7 +38,7 @@ public:
}
void Reset() {
std::unique_lock<std::mutex> lk(mutex);
std::unique_lock lk{mutex};
// no other action required, since wait loops on the predicate and any lingering signal will
// get cleared on the first iteration
is_set = false;
@@ -57,7 +56,7 @@ public:
/// Blocks until all "count" threads have called Sync()
void Sync() {
std::unique_lock<std::mutex> lk(mutex);
std::unique_lock lk{mutex};
const std::size_t current_generation = generation;
if (++waiting == count) {
@@ -78,9 +77,6 @@ private:
std::size_t generation = 0; // Incremented once each time the barrier is used
};
void SetThreadAffinity(std::thread::native_handle_type thread, u32 mask);
void SetCurrentThreadAffinity(u32 mask);
void SwitchCurrentThread(); // On Linux, this is equal to sleep 1ms
void SetCurrentThreadName(const char* name);
} // namespace Common

View File

@@ -78,7 +78,7 @@ public:
T PopWait() {
if (Empty()) {
std::unique_lock<std::mutex> lock(cv_mutex);
std::unique_lock lock{cv_mutex};
cv.wait(lock, [this]() { return !Empty(); });
}
T t;
@@ -137,7 +137,7 @@ public:
template <typename Arg>
void Push(Arg&& t) {
std::lock_guard<std::mutex> lock(write_lock);
std::lock_guard lock{write_lock};
spsc_queue.Push(t);
}

View File

@@ -31,6 +31,8 @@ add_library(core STATIC
file_sys/bis_factory.h
file_sys/card_image.cpp
file_sys/card_image.h
file_sys/cheat_engine.cpp
file_sys/cheat_engine.h
file_sys/content_archive.cpp
file_sys/content_archive.h
file_sys/control_metadata.cpp
@@ -68,6 +70,8 @@ add_library(core STATIC
file_sys/system_archive/ng_word.h
file_sys/system_archive/system_archive.cpp
file_sys/system_archive/system_archive.h
file_sys/system_archive/system_version.cpp
file_sys/system_archive/system_version.h
file_sys/vfs.cpp
file_sys/vfs.h
file_sys/vfs_concat.cpp
@@ -142,6 +146,8 @@ add_library(core STATIC
hle/kernel/svc_wrap.h
hle/kernel/thread.cpp
hle/kernel/thread.h
hle/kernel/transfer_memory.cpp
hle/kernel/transfer_memory.h
hle/kernel/vm_manager.cpp
hle/kernel/vm_manager.h
hle/kernel/wait_object.cpp

View File

@@ -32,6 +32,7 @@
#include "core/perf_stats.h"
#include "core/settings.h"
#include "core/telemetry_session.h"
#include "file_sys/cheat_engine.h"
#include "frontend/applets/profile_select.h"
#include "frontend/applets/software_keyboard.h"
#include "frontend/applets/web_browser.h"
@@ -205,6 +206,7 @@ struct System::Impl {
GDBStub::Shutdown();
Service::Shutdown();
service_manager.reset();
cheat_engine.reset();
telemetry_session.reset();
gpu_core.reset();
@@ -255,6 +257,8 @@ struct System::Impl {
CpuCoreManager cpu_core_manager;
bool is_powered_on = false;
std::unique_ptr<FileSys::CheatEngine> cheat_engine;
/// Frontend applets
std::unique_ptr<Core::Frontend::ProfileSelectApplet> profile_selector;
std::unique_ptr<Core::Frontend::SoftwareKeyboardApplet> software_keyboard;
@@ -453,6 +457,13 @@ Tegra::DebugContext* System::GetGPUDebugContext() const {
return impl->debug_context.get();
}
void System::RegisterCheatList(const std::vector<FileSys::CheatList>& list,
const std::string& build_id, VAddr code_region_start,
VAddr code_region_end) {
impl->cheat_engine = std::make_unique<FileSys::CheatEngine>(*this, list, build_id,
code_region_start, code_region_end);
}
void System::SetFilesystem(std::shared_ptr<FileSys::VfsFilesystem> vfs) {
impl->virtual_filesystem = std::move(vfs);
}

View File

@@ -20,6 +20,7 @@ class WebBrowserApplet;
} // namespace Core::Frontend
namespace FileSys {
class CheatList;
class VfsFilesystem;
} // namespace FileSys
@@ -253,6 +254,9 @@ public:
std::shared_ptr<FileSys::VfsFilesystem> GetFilesystem() const;
void RegisterCheatList(const std::vector<FileSys::CheatList>& list, const std::string& build_id,
VAddr code_region_start, VAddr code_region_end);
void SetProfileSelector(std::unique_ptr<Frontend::ProfileSelectApplet> applet);
const Frontend::ProfileSelectApplet& GetProfileSelector() const;

View File

@@ -22,7 +22,7 @@
namespace Core {
void CpuBarrier::NotifyEnd() {
std::unique_lock<std::mutex> lock(mutex);
std::unique_lock lock{mutex};
end = true;
condition.notify_all();
}
@@ -34,7 +34,7 @@ bool CpuBarrier::Rendezvous() {
}
if (!end) {
std::unique_lock<std::mutex> lock(mutex);
std::unique_lock lock{mutex};
--cores_waiting;
if (!cores_waiting) {
@@ -131,7 +131,7 @@ void Cpu::Reschedule() {
reschedule_pending = false;
// Lock the global kernel mutex when we manipulate the HLE state
std::lock_guard<std::recursive_mutex> lock(HLE::g_hle_lock);
std::lock_guard lock{HLE::g_hle_lock};
scheduler->Reschedule();
}

View File

@@ -186,7 +186,7 @@ void CoreTiming::Advance() {
Event evt = std::move(event_queue.front());
std::pop_heap(event_queue.begin(), event_queue.end(), std::greater<>());
event_queue.pop_back();
evt.type->callback(evt.userdata, static_cast<int>(global_timer - evt.time));
evt.type->callback(evt.userdata, global_timer - evt.time);
}
is_global_timer_sane = false;

View File

@@ -15,7 +15,7 @@
namespace Core::Timing {
/// A callback that may be scheduled for a particular core timing event.
using TimedCallback = std::function<void(u64 userdata, int cycles_late)>;
using TimedCallback = std::function<void(u64 userdata, s64 cycles_late)>;
/// Contains the characteristics of a particular event.
struct EventType {

View File

@@ -0,0 +1,492 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <algorithm>
#include <array>
#include <climits>
#include <cstring>
#include <limits>
#include <locale>
#include <map>
#include <sstream>
#include "common/hex_util.h"
#include "common/microprofile.h"
#include "common/swap.h"
#include "core/core.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/file_sys/cheat_engine.h"
#include "core/hle/kernel/process.h"
#include "core/hle/service/hid/controllers/npad.h"
#include "core/hle/service/hid/hid.h"
#include "core/hle/service/sm/sm.h"
namespace FileSys {
constexpr s64 CHEAT_ENGINE_TICKS = static_cast<s64>(Core::Timing::BASE_CLOCK_RATE / 60);
constexpr u32 KEYPAD_BITMASK = 0x3FFFFFF;
u64 Cheat::Address() const {
u64 out;
std::memcpy(&out, raw.data(), sizeof(u64));
return Common::swap64(out) & 0xFFFFFFFFFF;
}
u64 Cheat::ValueWidth(u64 offset) const {
return Value(offset, width);
}
u64 Cheat::Value(u64 offset, u64 width) const {
u64 out;
std::memcpy(&out, raw.data() + offset, sizeof(u64));
out = Common::swap64(out);
if (width == 8)
return out;
return out & ((1ull << (width * CHAR_BIT)) - 1);
}
u32 Cheat::KeypadValue() const {
u32 out;
std::memcpy(&out, raw.data(), sizeof(u32));
return Common::swap32(out) & 0x0FFFFFFF;
}
void CheatList::SetMemoryParameters(VAddr main_begin, VAddr heap_begin, VAddr main_end,
VAddr heap_end, MemoryWriter writer, MemoryReader reader) {
this->main_region_begin = main_begin;
this->main_region_end = main_end;
this->heap_region_begin = heap_begin;
this->heap_region_end = heap_end;
this->writer = writer;
this->reader = reader;
}
MICROPROFILE_DEFINE(Cheat_Engine, "Add-Ons", "Cheat Engine", MP_RGB(70, 200, 70));
void CheatList::Execute() {
MICROPROFILE_SCOPE(Cheat_Engine);
std::fill(scratch.begin(), scratch.end(), 0);
in_standard = false;
for (std::size_t i = 0; i < master_list.size(); ++i) {
LOG_DEBUG(Common_Filesystem, "Executing block #{:08X} ({})", i, master_list[i].first);
current_block = i;
ExecuteBlock(master_list[i].second);
}
in_standard = true;
for (std::size_t i = 0; i < standard_list.size(); ++i) {
LOG_DEBUG(Common_Filesystem, "Executing block #{:08X} ({})", i, standard_list[i].first);
current_block = i;
ExecuteBlock(standard_list[i].second);
}
}
CheatList::CheatList(const Core::System& system_, ProgramSegment master, ProgramSegment standard)
: master_list{std::move(master)}, standard_list{std::move(standard)}, system{&system_} {}
bool CheatList::EvaluateConditional(const Cheat& cheat) const {
using ComparisonFunction = bool (*)(u64, u64);
constexpr std::array<ComparisonFunction, 6> comparison_functions{
[](u64 a, u64 b) { return a > b; }, [](u64 a, u64 b) { return a >= b; },
[](u64 a, u64 b) { return a < b; }, [](u64 a, u64 b) { return a <= b; },
[](u64 a, u64 b) { return a == b; }, [](u64 a, u64 b) { return a != b; },
};
if (cheat.type == CodeType::ConditionalInput) {
const auto applet_resource =
system->ServiceManager().GetService<Service::HID::Hid>("hid")->GetAppletResource();
if (applet_resource == nullptr) {
LOG_WARNING(
Common_Filesystem,
"Attempted to evaluate input conditional, but applet resource is not initialized!");
return false;
}
const auto press_state =
applet_resource
->GetController<Service::HID::Controller_NPad>(Service::HID::HidController::NPad)
.GetAndResetPressState();
return ((press_state & cheat.KeypadValue()) & KEYPAD_BITMASK) != 0;
}
ASSERT(cheat.type == CodeType::Conditional);
const auto offset =
cheat.memory_type == MemoryType::MainNSO ? main_region_begin : heap_region_begin;
const auto comparison_index = static_cast<u8>(cheat.comparison_op.Value());
ASSERT(comparison_index >= 1 && comparison_index <= comparison_functions.size());
auto* function = comparison_functions[comparison_index - 1];
const auto addr = cheat.Address() + offset;
return function(reader(cheat.width, SanitizeAddress(addr)), cheat.ValueWidth(8));
}
void CheatList::ProcessBlockPairs(const Block& block) {
block_pairs.clear();
u64 scope = 0;
std::map<u64, u64> pairs;
for (std::size_t i = 0; i < block.size(); ++i) {
const auto& cheat = block[i];
switch (cheat.type) {
case CodeType::Conditional:
case CodeType::ConditionalInput:
pairs.insert_or_assign(scope, i);
++scope;
break;
case CodeType::EndConditional: {
--scope;
const auto idx = pairs.at(scope);
block_pairs.insert_or_assign(idx, i);
break;
}
case CodeType::Loop: {
if (cheat.end_of_loop) {
--scope;
const auto idx = pairs.at(scope);
block_pairs.insert_or_assign(idx, i);
} else {
pairs.insert_or_assign(scope, i);
++scope;
}
break;
}
}
}
}
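ProcessBlockPairs is a single pass that matches each Conditional/ConditionalInput/Loop opener with its closer, using the current scope depth as a temporary key. A toy standalone version of the same pairing idea (not the yuzu API, just the algorithm in isolation):

#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

enum class Op { Begin, End, Other };

// Pairs each Begin index with its matching End index, keyed by scope depth.
std::map<std::size_t, std::size_t> PairBlocks(const std::vector<Op>& ops) {
    std::map<std::size_t, std::size_t> pairs; // begin index -> end index
    std::map<std::size_t, std::size_t> open;  // depth -> begin index
    std::size_t depth = 0;
    for (std::size_t i = 0; i < ops.size(); ++i) {
        if (ops[i] == Op::Begin) {
            open[depth++] = i;
        } else if (ops[i] == Op::End) {
            pairs[open.at(--depth)] = i;
        }
    }
    return pairs;
}

int main() {
    const std::vector<Op> ops{Op::Begin, Op::Other, Op::Begin, Op::End, Op::End};
    const auto pairs = PairBlocks(ops);
    assert(pairs.at(0) == 4); // outer Begin pairs with the last End
    assert(pairs.at(2) == 3); // nested Begin pairs with the inner End
}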
void CheatList::WriteImmediate(const Cheat& cheat) {
const auto offset =
cheat.memory_type == MemoryType::MainNSO ? main_region_begin : heap_region_begin;
const auto& register_3 = scratch.at(cheat.register_3);
const auto addr = cheat.Address() + offset + register_3;
LOG_DEBUG(Common_Filesystem, "writing value={:016X} to addr={:016X}",
cheat.ValueWidth(8), addr);
writer(cheat.width, SanitizeAddress(addr), cheat.ValueWidth(8));
}
void CheatList::BeginConditional(const Cheat& cheat) {
if (EvaluateConditional(cheat)) {
return;
}
const auto iter = block_pairs.find(current_index);
ASSERT(iter != block_pairs.end());
current_index = iter->second - 1;
}
void CheatList::EndConditional(const Cheat& cheat) {
LOG_DEBUG(Common_Filesystem, "Ending conditional block.");
}
void CheatList::Loop(const Cheat& cheat) {
ASSERT(!cheat.end_of_loop.Value());
auto& register_3 = scratch.at(cheat.register_3);
const auto iter = block_pairs.find(current_index);
ASSERT(iter != block_pairs.end());
ASSERT(iter->first < iter->second);
const s32 initial_value = static_cast<s32>(cheat.Value(4, sizeof(s32)));
for (s32 i = initial_value; i >= 0; --i) {
register_3 = static_cast<u64>(i);
for (std::size_t c = iter->first + 1; c < iter->second; ++c) {
current_index = c;
ExecuteSingleCheat(
(in_standard ? standard_list : master_list)[current_block].second[c]);
}
}
current_index = iter->second;
}
void CheatList::LoadImmediate(const Cheat& cheat) {
auto& register_3 = scratch.at(cheat.register_3);
LOG_DEBUG(Common_Filesystem, "setting register={:01X} equal to value={:016X}", cheat.register_3,
cheat.Value(4, 8));
register_3 = cheat.Value(4, 8);
}
void CheatList::LoadIndexed(const Cheat& cheat) {
const auto offset =
cheat.memory_type == MemoryType::MainNSO ? main_region_begin : heap_region_begin;
auto& register_3 = scratch.at(cheat.register_3);
const auto addr = (cheat.load_from_register.Value() ? register_3 : offset) + cheat.Address();
LOG_DEBUG(Common_Filesystem, "writing indexed value to register={:01X}, addr={:016X}",
cheat.register_3, addr);
register_3 = reader(cheat.width, SanitizeAddress(addr));
}
void CheatList::StoreIndexed(const Cheat& cheat) {
const auto& register_3 = scratch.at(cheat.register_3);
const auto addr =
register_3 + (cheat.add_additional_register.Value() ? scratch.at(cheat.register_6) : 0);
LOG_DEBUG(Common_Filesystem, "writing value={:016X} to addr={:016X}",
cheat.Value(4, cheat.width), addr);
writer(cheat.width, SanitizeAddress(addr), cheat.ValueWidth(4));
}
void CheatList::RegisterArithmetic(const Cheat& cheat) {
using ArithmeticFunction = u64 (*)(u64, u64);
constexpr std::array<ArithmeticFunction, 5> arithmetic_functions{
[](u64 a, u64 b) { return a + b; }, [](u64 a, u64 b) { return a - b; },
[](u64 a, u64 b) { return a * b; }, [](u64 a, u64 b) { return a << b; },
[](u64 a, u64 b) { return a >> b; },
};
using ArithmeticOverflowCheck = bool (*)(u64, u64);
constexpr std::array<ArithmeticOverflowCheck, 5> arithmetic_overflow_checks{
[](u64 a, u64 b) { return a > (std::numeric_limits<u64>::max() - b); }, // a + b
[](u64 a, u64 b) { return b > a; }, // a - b (underflow)
[](u64 a, u64 b) { return b != 0 && a > (std::numeric_limits<u64>::max() / b); }, // a * b
[](u64 a, u64 b) { return b >= 64 || (b != 0 && (a >> (64 - b)) != 0); }, // a << b
[](u64 a, u64 b) { return b >= 64 || (a & ((1ull << b) - 1)) != 0; }, // a >> b
};
static_assert(arithmetic_functions.size() == arithmetic_overflow_checks.size(),
"Every arithmetic function must have a matching overflow check!");
auto& register_3 = scratch.at(cheat.register_3);
ASSERT(static_cast<u8>(cheat.arithmetic_op.Value()) < 5);
auto* function = arithmetic_functions[static_cast<u8>(cheat.arithmetic_op.Value())];
auto* overflow_function =
arithmetic_overflow_checks[static_cast<u8>(cheat.arithmetic_op.Value())];
LOG_DEBUG(Common_Filesystem, "performing arithmetic with register={:01X}, value={:016X}",
cheat.register_3, cheat.ValueWidth(4));
if (overflow_function(register_3, cheat.ValueWidth(4))) {
LOG_WARNING(Common_Filesystem,
"overflow will occur when performing arithmetic operation={:02X} with operands "
"a={:016X}, b={:016X}!",
static_cast<u8>(cheat.arithmetic_op.Value()), register_3, cheat.ValueWidth(4));
}
register_3 = function(register_3, cheat.ValueWidth(4));
}
void CheatList::BeginConditionalInput(const Cheat& cheat) {
if (EvaluateConditional(cheat))
return;
const auto iter = block_pairs.find(current_index);
ASSERT(iter != block_pairs.end());
current_index = iter->second - 1;
}
VAddr CheatList::SanitizeAddress(VAddr in) const {
if ((in < main_region_begin || in >= main_region_end) &&
(in < heap_region_begin || in >= heap_region_end)) {
LOG_ERROR(Common_Filesystem,
"Cheat attempting to access memory at invalid address={:016X}, if this persists, "
"the cheat may be incorrect. However, this may be normal early in execution if "
"the game has not properly set up yet.",
in);
return 0; // Invalid addresses will hard crash
}
return in;
}
void CheatList::ExecuteSingleCheat(const Cheat& cheat) {
using CheatOperationFunction = void (CheatList::*)(const Cheat&);
constexpr std::array<CheatOperationFunction, 9> cheat_operation_functions{
&CheatList::WriteImmediate, &CheatList::BeginConditional,
&CheatList::EndConditional, &CheatList::Loop,
&CheatList::LoadImmediate, &CheatList::LoadIndexed,
&CheatList::StoreIndexed, &CheatList::RegisterArithmetic,
&CheatList::BeginConditionalInput,
};
const auto index = static_cast<u8>(cheat.type.Value());
ASSERT(index < cheat_operation_functions.size());
const auto op = cheat_operation_functions[index];
(this->*op)(cheat);
}
void CheatList::ExecuteBlock(const Block& block) {
encountered_loops.clear();
ProcessBlockPairs(block);
for (std::size_t i = 0; i < block.size(); ++i) {
current_index = i;
ExecuteSingleCheat(block[i]);
i = current_index;
}
}
CheatParser::~CheatParser() = default;
CheatList CheatParser::MakeCheatList(const Core::System& system, CheatList::ProgramSegment master,
CheatList::ProgramSegment standard) const {
return {system, std::move(master), std::move(standard)};
}
TextCheatParser::~TextCheatParser() = default;
CheatList TextCheatParser::Parse(const Core::System& system, const std::vector<u8>& data) const {
std::stringstream ss;
ss.write(reinterpret_cast<const char*>(data.data()), data.size());
std::vector<std::string> lines;
std::string stream_line;
while (std::getline(ss, stream_line)) {
// Remove a trailing \r
if (!stream_line.empty() && stream_line.back() == '\r')
stream_line.pop_back();
lines.push_back(std::move(stream_line));
}
CheatList::ProgramSegment master_list;
CheatList::ProgramSegment standard_list;
for (std::size_t i = 0; i < lines.size(); ++i) {
auto line = lines[i];
if (!line.empty() && (line[0] == '[' || line[0] == '{')) {
const auto master = line[0] == '{';
const auto begin = master ? line.find('{') : line.find('[');
const auto end = master ? line.rfind('}') : line.rfind(']');
ASSERT(begin != std::string::npos && end != std::string::npos);
const std::string patch_name{line.begin() + begin + 1, line.begin() + end};
CheatList::Block block{};
while (i < lines.size() - 1) {
line = lines[++i];
if (!line.empty() && (line[0] == '[' || line[0] == '{')) {
--i;
break;
}
if (line.size() < 8)
continue;
Cheat out{};
out.raw = ParseSingleLineCheat(line);
block.push_back(out);
}
(master ? master_list : standard_list).emplace_back(patch_name, block);
}
}
return MakeCheatList(system, master_list, standard_list);
}
std::array<u8, 16> TextCheatParser::ParseSingleLineCheat(const std::string& line) const {
std::array<u8, 16> out{};
if (line.size() < 8)
return out;
const auto word1 = Common::HexStringToArray<sizeof(u32)>(std::string_view{line.data(), 8});
std::memcpy(out.data(), word1.data(), sizeof(u32));
if (line.size() < 17 || line[8] != ' ')
return out;
const auto word2 = Common::HexStringToArray<sizeof(u32)>(std::string_view{line.data() + 9, 8});
std::memcpy(out.data() + sizeof(u32), word2.data(), sizeof(u32));
if (line.size() < 26 || line[17] != ' ') {
// Perform shifting in case value is truncated early.
const auto type = static_cast<CodeType>((out[0] & 0xF0) >> 4);
if (type == CodeType::Loop || type == CodeType::LoadImmediate ||
type == CodeType::StoreIndexed || type == CodeType::RegisterArithmetic) {
std::memcpy(out.data() + 8, out.data() + 4, sizeof(u32));
std::memset(out.data() + 4, 0, sizeof(u32));
}
return out;
}
const auto word3 = Common::HexStringToArray<sizeof(u32)>(std::string_view{line.data() + 18, 8});
std::memcpy(out.data() + 2 * sizeof(u32), word3.data(), sizeof(u32));
if (line.size() < 35 || line[26] != ' ') {
// Perform shifting in case value is truncated early.
const auto type = static_cast<CodeType>((out[0] & 0xF0) >> 4);
if (type == CodeType::WriteImmediate || type == CodeType::Conditional) {
std::memcpy(out.data() + 12, out.data() + 8, sizeof(u32));
std::memset(out.data() + 8, 0, sizeof(u32));
}
return out;
}
const auto word4 = Common::HexStringToArray<sizeof(u32)>(std::string_view{line.data() + 27, 8});
std::memcpy(out.data() + 3 * sizeof(u32), word4.data(), sizeof(u32));
return out;
}
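For reference, each cheat line is up to four space-separated 32-bit hex words packed into a 16-byte buffer, most significant byte first (matching HexStringToArray's textual byte order). A minimal sketch of that packing, with the truncation-shifting above deliberately omitted (hypothetical helper, not the yuzu implementation):

#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <string>

// Assumes single spaces between words and well-formed hex digits.
std::array<std::uint8_t, 16> ParseLineSimple(const std::string& line) {
    std::array<std::uint8_t, 16> out{};
    std::size_t word = 0;
    for (std::size_t pos = 0; pos + 8 <= line.size() && word < 4; pos += 9, ++word) {
        const auto value = std::stoul(line.substr(pos, 8), nullptr, 16);
        // Store most-significant byte first, as the text reads.
        out[word * 4 + 0] = static_cast<std::uint8_t>(value >> 24);
        out[word * 4 + 1] = static_cast<std::uint8_t>(value >> 16);
        out[word * 4 + 2] = static_cast<std::uint8_t>(value >> 8);
        out[word * 4 + 3] = static_cast<std::uint8_t>(value);
    }
    return out;
}

int main() {
    const auto raw = ParseLineSimple("40030000 00000000 0000002A");
    assert(raw[0] == 0x40 && raw[11] == 0x2A);
}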
namespace {
u64 MemoryReadImpl(u32 width, VAddr addr) {
switch (width) {
case 1:
return Memory::Read8(addr);
case 2:
return Memory::Read16(addr);
case 4:
return Memory::Read32(addr);
case 8:
return Memory::Read64(addr);
default:
UNREACHABLE();
return 0;
}
}
void MemoryWriteImpl(u32 width, VAddr addr, u64 value) {
switch (width) {
case 1:
Memory::Write8(addr, static_cast<u8>(value));
break;
case 2:
Memory::Write16(addr, static_cast<u16>(value));
break;
case 4:
Memory::Write32(addr, static_cast<u32>(value));
break;
case 8:
Memory::Write64(addr, value);
break;
default:
UNREACHABLE();
}
}
} // Anonymous namespace
CheatEngine::CheatEngine(Core::System& system, std::vector<CheatList> cheats_,
const std::string& build_id, VAddr code_region_start,
VAddr code_region_end)
: cheats{std::move(cheats_)}, core_timing{system.CoreTiming()} {
event = core_timing.RegisterEvent(
"CheatEngine::FrameCallback::" + build_id,
[this](u64 userdata, s64 cycles_late) { FrameCallback(userdata, cycles_late); });
core_timing.ScheduleEvent(CHEAT_ENGINE_TICKS, event);
const auto& vm_manager = system.CurrentProcess()->VMManager();
for (auto& list : this->cheats) {
list.SetMemoryParameters(code_region_start, vm_manager.GetHeapRegionBaseAddress(),
code_region_end, vm_manager.GetHeapRegionEndAddress(),
&MemoryWriteImpl, &MemoryReadImpl);
}
}
CheatEngine::~CheatEngine() {
core_timing.UnscheduleEvent(event, 0);
}
void CheatEngine::FrameCallback(u64 userdata, s64 cycles_late) {
for (auto& list : cheats) {
list.Execute();
}
core_timing.ScheduleEvent(CHEAT_ENGINE_TICKS - cycles_late, event);
}
} // namespace FileSys
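FrameCallback compensates for callback latency: by scheduling the next event at the interval minus cycles_late, the long-run rate stays at one execution per interval even when individual callbacks fire late. A self-contained simulation of that compensation, using plain integers as stand-ins for Core::Timing (assumed behavior, not the real scheduler):

#include <cassert>
#include <cstdint>

int main() {
    constexpr std::int64_t interval = 100;
    std::int64_t now = 0;
    std::int64_t next_fire = interval;

    for (int frame = 0; frame < 5; ++frame) {
        now = next_fire + 7; // pretend every callback runs 7 cycles late
        const std::int64_t cycles_late = now - next_fire;
        next_fire = now + (interval - cycles_late); // as in FrameCallback
    }
    // Despite the per-frame lateness, fire times stay on the 100-cycle grid.
    assert(next_fire % interval == 0);
}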

View File

@@ -0,0 +1,234 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <map>
#include <set>
#include <vector>
#include "common/bit_field.h"
#include "common/common_types.h"
namespace Core {
class System;
}
namespace Core::Timing {
class CoreTiming;
struct EventType;
} // namespace Core::Timing
namespace FileSys {
enum class CodeType : u32 {
// 0TMR00AA AAAAAAAA YYYYYYYY YYYYYYYY
// Writes a T sized value Y to the address A added to the value of register R in memory domain M
WriteImmediate = 0,
// 1TMC00AA AAAAAAAA YYYYYYYY YYYYYYYY
// Compares the T sized value Y to the value at address A in memory domain M using the
// conditional function C. If success, continues execution. If failure, jumps to the matching
// EndConditional statement.
Conditional = 1,
// 20000000
// Terminates a Conditional or ConditionalInput block.
EndConditional = 2,
// 300R0000 VVVVVVVV
// Starts looping V times, storing the current count in register R.
// Loop block is terminated with a matching 310R0000.
Loop = 3,
// 400R0000 VVVVVVVV VVVVVVVV
// Sets the value of register R to the value V.
LoadImmediate = 4,
// 5TMRI0AA AAAAAAAA
// Sets the value of register R to the value of width T at address A in memory domain M, with
// the current value of R added to the address if I == 1.
LoadIndexed = 5,
// 6T0RIFG0 VVVVVVVV VVVVVVVV
// Writes the value V of width T to the memory address stored in register R. Adds the value of
// register G to the final calculation if F is nonzero. Increments the value of register R by T
// after operation if I is nonzero.
StoreIndexed = 6,
// 7T0RA000 VVVVVVVV
// Performs the arithmetic operation A on the value in register R and the value V of width T,
// storing the result in register R.
RegisterArithmetic = 7,
// 8KKKKKKK
// Checks to see if any of the buttons defined by the bitmask K are pressed. If any are,
// execution continues. If none are, execution skips to the next EndConditional command.
ConditionalInput = 8,
};
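As a worked example of the encodings above, the first word of a hypothetical LoadImmediate cheat (400R0000 with R = 3) decodes like this:

#include <cassert>
#include <cstdint>

int main() {
    const std::uint32_t first_word = 0x40030000; // hypothetical cheat word
    const auto type = (first_word >> 28) & 0xF;  // top nibble is the code type
    const auto reg = (first_word >> 16) & 0xF;   // the R nibble
    assert(type == 4); // CodeType::LoadImmediate
    assert(reg == 3);  // the value will be stored in register 3
}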
enum class MemoryType : u32 {
// Addressed relative to start of main NSO
MainNSO = 0,
// Addressed relative to start of heap
Heap = 1,
};
enum class ArithmeticOp : u32 {
Add = 0,
Sub = 1,
Mult = 2,
LShift = 3,
RShift = 4,
};
enum class ComparisonOp : u32 {
GreaterThan = 1,
GreaterThanEqual = 2,
LessThan = 3,
LessThanEqual = 4,
Equal = 5,
Inequal = 6,
};
union Cheat {
std::array<u8, 16> raw;
BitField<4, 4, CodeType> type;
BitField<0, 4, u32> width; // Can be 1, 2, 4, or 8. Measured in bytes.
BitField<0, 4, u32> end_of_loop;
BitField<12, 4, MemoryType> memory_type;
BitField<8, 4, u32> register_3;
BitField<8, 4, ComparisonOp> comparison_op;
BitField<20, 4, u32> load_from_register;
BitField<20, 4, u32> increment_register;
BitField<20, 4, ArithmeticOp> arithmetic_op;
BitField<16, 4, u32> add_additional_register;
BitField<28, 4, u32> register_6;
u64 Address() const;
u64 ValueWidth(u64 offset) const;
u64 Value(u64 offset, u64 width) const;
u32 KeypadValue() const;
};
class CheatParser;
// Represents a full collection of cheats for a game. The Execute function should be called once
// per interval at which all cheats should run. Clients should not instantiate this class directly
// (hence the private constructor); they should instead receive an instance from CheatParser, which
// guarantees the list is always in an acceptable state.
class CheatList {
public:
friend class CheatParser;
using Block = std::vector<Cheat>;
using ProgramSegment = std::vector<std::pair<std::string, Block>>;
// (width in bytes, address, value)
using MemoryWriter = void (*)(u32, VAddr, u64);
// (width in bytes, address) -> value
using MemoryReader = u64 (*)(u32, VAddr);
void SetMemoryParameters(VAddr main_begin, VAddr heap_begin, VAddr main_end, VAddr heap_end,
MemoryWriter writer, MemoryReader reader);
void Execute();
private:
CheatList(const Core::System& system_, ProgramSegment master, ProgramSegment standard);
void ProcessBlockPairs(const Block& block);
void ExecuteSingleCheat(const Cheat& cheat);
void ExecuteBlock(const Block& block);
bool EvaluateConditional(const Cheat& cheat) const;
// Individual cheat operations
void WriteImmediate(const Cheat& cheat);
void BeginConditional(const Cheat& cheat);
void EndConditional(const Cheat& cheat);
void Loop(const Cheat& cheat);
void LoadImmediate(const Cheat& cheat);
void LoadIndexed(const Cheat& cheat);
void StoreIndexed(const Cheat& cheat);
void RegisterArithmetic(const Cheat& cheat);
void BeginConditionalInput(const Cheat& cheat);
VAddr SanitizeAddress(VAddr in) const;
// Master Codes are defined as codes that cannot be disabled and are run prior to all
// others.
ProgramSegment master_list;
// All other codes
ProgramSegment standard_list;
bool in_standard = false;
// 16 (0x0-0xF) scratch registers that can be used by cheats
std::array<u64, 16> scratch{};
MemoryWriter writer = nullptr;
MemoryReader reader = nullptr;
u64 main_region_begin{};
u64 heap_region_begin{};
u64 main_region_end{};
u64 heap_region_end{};
u64 current_block{};
// The current index of the cheat within the current Block
u64 current_index{};
// Maps the index of each conditional or loop statement within a block to the index of its
// matching end statement, so a failed conditional can jump directly past its body.
std::map<u64, u64> block_pairs;
std::set<u64> encountered_loops;
const Core::System* system;
};
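The MemoryWriter/MemoryReader function-pointer interface makes the list testable in isolation: any pair of functions with the matching signatures can stand in for emulated memory. A sketch of such a test double (hypothetical code, little-endian host assumed, not from the codebase):

#include <array>
#include <cassert>
#include <cstdint>
#include <cstring>

using VAddr = std::uint64_t; // stand-in for the codebase's VAddr alias

std::array<std::uint8_t, 0x1000> test_memory{};

// Matches MemoryWriter: (width in bytes, address, value).
void TestWriter(std::uint32_t width, VAddr addr, std::uint64_t value) {
    std::memcpy(test_memory.data() + addr, &value, width);
}

// Matches MemoryReader: (width in bytes, address) -> value.
std::uint64_t TestReader(std::uint32_t width, VAddr addr) {
    std::uint64_t out = 0;
    std::memcpy(&out, test_memory.data() + addr, width);
    return out;
}

int main() {
    TestWriter(4, 0x10, 0xDEADBEEF);
    assert(TestReader(4, 0x10) == 0xDEADBEEF);
}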
// Intermediary class that parses a text file (or other on-disk cheat format) into a
// CheatList object, which can then be used for execution.
class CheatParser {
public:
virtual ~CheatParser();
virtual CheatList Parse(const Core::System& system, const std::vector<u8>& data) const = 0;
protected:
CheatList MakeCheatList(const Core::System& system_, CheatList::ProgramSegment master,
CheatList::ProgramSegment standard) const;
};
// CheatParser implementation that parses text files
class TextCheatParser final : public CheatParser {
public:
~TextCheatParser() override;
CheatList Parse(const Core::System& system, const std::vector<u8>& data) const override;
private:
std::array<u8, 16> ParseSingleLineCheat(const std::string& line) const;
};
// Class that encapsulates a CheatList and manages its interaction with memory and CoreTiming
class CheatEngine final {
public:
CheatEngine(Core::System& system_, std::vector<CheatList> cheats_, const std::string& build_id,
VAddr code_region_start, VAddr code_region_end);
~CheatEngine();
private:
void FrameCallback(u64 userdata, s64 cycles_late);
std::vector<CheatList> cheats;
Core::Timing::EventType* event;
Core::Timing::CoreTiming& core_timing;
};
} // namespace FileSys

View File

@@ -11,6 +11,9 @@ namespace FileSys {
constexpr ResultCode ERROR_PATH_NOT_FOUND{ErrorModule::FS, 1};
constexpr ResultCode ERROR_ENTITY_NOT_FOUND{ErrorModule::FS, 1002};
constexpr ResultCode ERROR_SD_CARD_NOT_FOUND{ErrorModule::FS, 2001};
constexpr ResultCode ERROR_OUT_OF_BOUNDS{ErrorModule::FS, 3005};
constexpr ResultCode ERROR_FAILED_MOUNT_ARCHIVE{ErrorModule::FS, 3223};
constexpr ResultCode ERROR_INVALID_ARGUMENT{ErrorModule::FS, 6001};
constexpr ResultCode ERROR_INVALID_OFFSET{ErrorModule::FS, 6061};
constexpr ResultCode ERROR_INVALID_SIZE{ErrorModule::FS, 6062};

View File

@@ -7,6 +7,7 @@
#include <cstddef>
#include <cstring>
#include "common/file_util.h"
#include "common/hex_util.h"
#include "common/logging/log.h"
#include "core/file_sys/content_archive.h"
@@ -19,6 +20,7 @@
#include "core/file_sys/vfs_vector.h"
#include "core/hle/service/filesystem/filesystem.h"
#include "core/loader/loader.h"
#include "core/loader/nso.h"
#include "core/settings.h"
namespace FileSys {
@@ -31,14 +33,6 @@ constexpr std::array<const char*, 14> EXEFS_FILE_NAMES{
"subsdk3", "subsdk4", "subsdk5", "subsdk6", "subsdk7", "subsdk8", "subsdk9",
};
struct NSOBuildHeader {
u32_le magic;
INSERT_PADDING_BYTES(0x3C);
std::array<u8, 0x20> build_id;
INSERT_PADDING_BYTES(0xA0);
};
static_assert(sizeof(NSOBuildHeader) == 0x100, "NSOBuildHeader has incorrect size.");
std::string FormatTitleVersion(u32 version, TitleVersionFormat format) {
std::array<u8, sizeof(u32)> bytes{};
bytes[0] = version % SINGLE_BYTE_MODULUS;
@@ -162,14 +156,16 @@ std::vector<VirtualFile> PatchManager::CollectPatches(const std::vector<VirtualD
}
std::vector<u8> PatchManager::PatchNSO(const std::vector<u8>& nso) const {
if (nso.size() < 0x100)
if (nso.size() < sizeof(Loader::NSOHeader)) {
return nso;
}
NSOBuildHeader header;
std::memcpy(&header, nso.data(), sizeof(NSOBuildHeader));
Loader::NSOHeader header;
std::memcpy(&header, nso.data(), sizeof(header));
if (header.magic != Common::MakeMagic('N', 'S', 'O', '0'))
if (header.magic != Common::MakeMagic('N', 'S', 'O', '0')) {
return nso;
}
const auto build_id_raw = Common::HexArrayToString(header.build_id);
const auto build_id = build_id_raw.substr(0, build_id_raw.find_last_not_of('0') + 1);
@@ -212,9 +208,11 @@ std::vector<u8> PatchManager::PatchNSO(const std::vector<u8>& nso) const {
}
}
if (out.size() < 0x100)
if (out.size() < sizeof(Loader::NSOHeader)) {
return nso;
std::memcpy(out.data(), &header, sizeof(NSOBuildHeader));
}
std::memcpy(out.data(), &header, sizeof(header));
return out;
}
@@ -232,6 +230,57 @@ bool PatchManager::HasNSOPatch(const std::array<u8, 32>& build_id_) const {
return !CollectPatches(patch_dirs, build_id).empty();
}
static std::optional<CheatList> ReadCheatFileFromFolder(const Core::System& system, u64 title_id,
const std::array<u8, 0x20>& build_id_,
const VirtualDir& base_path, bool upper) {
const auto build_id_raw = Common::HexArrayToString(build_id_, upper);
const auto build_id = build_id_raw.substr(0, sizeof(u64) * 2);
const auto file = base_path->GetFile(fmt::format("{}.txt", build_id));
if (file == nullptr) {
LOG_INFO(Common_Filesystem, "No cheats file found for title_id={:016X}, build_id={}",
title_id, build_id);
return std::nullopt;
}
std::vector<u8> data(file->GetSize());
if (file->Read(data.data(), data.size()) != data.size()) {
LOG_INFO(Common_Filesystem, "Failed to read cheats file for title_id={:016X}, build_id={}",
title_id, build_id);
return std::nullopt;
}
TextCheatParser parser;
return parser.Parse(system, data);
}
std::vector<CheatList> PatchManager::CreateCheatList(const Core::System& system,
const std::array<u8, 32>& build_id_) const {
const auto load_dir = Service::FileSystem::GetModificationLoadRoot(title_id);
auto patch_dirs = load_dir->GetSubdirectories();
std::sort(patch_dirs.begin(), patch_dirs.end(),
[](const VirtualDir& l, const VirtualDir& r) { return l->GetName() < r->GetName(); });
std::vector<CheatList> out;
out.reserve(patch_dirs.size());
for (const auto& subdir : patch_dirs) {
auto cheats_dir = subdir->GetSubdirectory("cheats");
if (cheats_dir != nullptr) {
auto res = ReadCheatFileFromFolder(system, title_id, build_id_, cheats_dir, true);
if (res.has_value()) {
out.push_back(std::move(*res));
continue;
}
res = ReadCheatFileFromFolder(system, title_id, build_id_, cheats_dir, false);
if (res.has_value())
out.push_back(std::move(*res));
}
}
return out;
}
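For orientation, the lookup above resolves cheat files named after the first eight bytes of the build ID in hex, under each mod's cheats subdirectory; the IDs below are hypothetical:

<mod load root>/<mod directory>/cheats/0123456789ABCDEF.txt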
static void ApplyLayeredFS(VirtualFile& romfs, u64 title_id, ContentRecordType type) {
const auto load_dir = Service::FileSystem::GetModificationLoadRoot(title_id);
if ((type != ContentRecordType::Program && type != ContentRecordType::Data) ||
@@ -403,6 +452,8 @@ std::map<std::string, std::string, std::less<>> PatchManager::GetPatchVersionNam
}
if (IsDirValidAndNonEmpty(mod->GetSubdirectory("romfs")))
AppendCommaIfNotEmpty(types, "LayeredFS");
if (IsDirValidAndNonEmpty(mod->GetSubdirectory("cheats")))
AppendCommaIfNotEmpty(types, "Cheats");
if (types.empty())
continue;

View File

@@ -8,9 +8,14 @@
#include <memory>
#include <string>
#include "common/common_types.h"
#include "core/file_sys/cheat_engine.h"
#include "core/file_sys/nca_metadata.h"
#include "core/file_sys/vfs.h"
namespace Core {
class System;
}
namespace FileSys {
class NCA;
@@ -45,6 +50,10 @@ public:
// Used to prevent expensive copies in NSO loader.
bool HasNSOPatch(const std::array<u8, 0x20>& build_id) const;
// Creates CheatList objects for all cheat files that match the given build ID.
std::vector<CheatList> CreateCheatList(const Core::System& system,
const std::array<u8, 0x20>& build_id) const;
// Currently tracked RomFS patches:
// - Game Updates
// - LayeredFS

View File

@@ -6,6 +6,7 @@
#include "core/file_sys/romfs.h"
#include "core/file_sys/system_archive/ng_word.h"
#include "core/file_sys/system_archive/system_archive.h"
#include "core/file_sys/system_archive/system_version.h"
namespace FileSys::SystemArchive {
@@ -30,7 +31,7 @@ constexpr std::array<SystemArchiveDescriptor, SYSTEM_ARCHIVE_COUNT> SYSTEM_ARCHI
{0x0100000000000806, "NgWord", &NgWord1},
{0x0100000000000807, "SsidList", nullptr},
{0x0100000000000808, "Dictionary", nullptr},
{0x0100000000000809, "SystemVersion", nullptr},
{0x0100000000000809, "SystemVersion", &SystemVersion},
{0x010000000000080A, "AvatarImage", nullptr},
{0x010000000000080B, "LocalNews", nullptr},
{0x010000000000080C, "Eula", nullptr},

View File

@@ -0,0 +1,52 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/file_sys/system_archive/system_version.h"
#include "core/file_sys/vfs_vector.h"
namespace FileSys::SystemArchive {
namespace SystemVersionData {
// This section should reflect the best system version to describe yuzu's HLE API.
// TODO(DarkLordZach): Update when HLE gets better.
constexpr u8 VERSION_MAJOR = 5;
constexpr u8 VERSION_MINOR = 1;
constexpr u8 VERSION_MICRO = 0;
constexpr u8 REVISION_MAJOR = 3;
constexpr u8 REVISION_MINOR = 0;
constexpr char PLATFORM_STRING[] = "NX";
constexpr char VERSION_HASH[] = "23f9df53e25709d756e0c76effcb2473bd3447dd";
constexpr char DISPLAY_VERSION[] = "5.1.0";
constexpr char DISPLAY_TITLE[] = "NintendoSDK Firmware for NX 5.1.0-3.0";
} // namespace SystemVersionData
std::string GetLongDisplayVersion() {
return SystemVersionData::DISPLAY_TITLE;
}
VirtualDir SystemVersion() {
VirtualFile file = std::make_shared<VectorVfsFile>(std::vector<u8>(0x100), "file");
file->WriteObject(SystemVersionData::VERSION_MAJOR, 0);
file->WriteObject(SystemVersionData::VERSION_MINOR, 1);
file->WriteObject(SystemVersionData::VERSION_MICRO, 2);
file->WriteObject(SystemVersionData::REVISION_MAJOR, 4);
file->WriteObject(SystemVersionData::REVISION_MINOR, 5);
file->WriteArray(SystemVersionData::PLATFORM_STRING,
std::min<u64>(sizeof(SystemVersionData::PLATFORM_STRING), 0x20ULL), 0x8);
file->WriteArray(SystemVersionData::VERSION_HASH,
std::min<u64>(sizeof(SystemVersionData::VERSION_HASH), 0x40ULL), 0x28);
file->WriteArray(SystemVersionData::DISPLAY_VERSION,
std::min<u64>(sizeof(SystemVersionData::DISPLAY_VERSION), 0x18ULL), 0x68);
file->WriteArray(SystemVersionData::DISPLAY_TITLE,
std::min<u64>(sizeof(SystemVersionData::DISPLAY_TITLE), 0x80ULL), 0x80);
return std::make_shared<VectorVfsDirectory>(std::vector<VirtualFile>{file},
std::vector<VirtualDir>{}, "data");
}
} // namespace FileSys::SystemArchive
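The writes above produce a 0x100-byte file. The struct below is a hypothetical mirror of that layout, used only to make the offsets visible at a glance; it is not a type from the codebase:

// Offsets follow the WriteObject/WriteArray calls above.
struct SystemVersionFile {
    unsigned char version_major;  // 0x00
    unsigned char version_minor;  // 0x01
    unsigned char version_micro;  // 0x02
    unsigned char pad0;           // 0x03
    unsigned char revision_major; // 0x04
    unsigned char revision_minor; // 0x05
    unsigned char pad1[2];        // 0x06
    char platform_string[0x20];   // 0x08 ("NX")
    char version_hash[0x40];      // 0x28
    char display_version[0x18];   // 0x68
    char display_title[0x80];     // 0x80
};
static_assert(sizeof(SystemVersionFile) == 0x100, "layout mirrors the writes above");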

View File

@@ -0,0 +1,16 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <string>
#include "core/file_sys/vfs_types.h"
namespace FileSys::SystemArchive {
std::string GetLongDisplayVersion();
VirtualDir SystemVersion();
} // namespace FileSys::SystemArchive

View File

@@ -30,7 +30,7 @@ private:
explicit Device(std::weak_ptr<TouchState>&& touch_state) : touch_state(touch_state) {}
std::tuple<float, float, bool> GetStatus() const override {
if (auto state = touch_state.lock()) {
std::lock_guard<std::mutex> guard(state->mutex);
std::lock_guard guard{state->mutex};
return std::make_tuple(state->touch_x, state->touch_y, state->touch_pressed);
}
return std::make_tuple(0.0f, 0.0f, false);
@@ -81,7 +81,7 @@ void EmuWindow::TouchPressed(unsigned framebuffer_x, unsigned framebuffer_y) {
if (!IsWithinTouchscreen(framebuffer_layout, framebuffer_x, framebuffer_y))
return;
std::lock_guard<std::mutex> guard(touch_state->mutex);
std::lock_guard guard{touch_state->mutex};
touch_state->touch_x = static_cast<float>(framebuffer_x - framebuffer_layout.screen.left) /
(framebuffer_layout.screen.right - framebuffer_layout.screen.left);
touch_state->touch_y = static_cast<float>(framebuffer_y - framebuffer_layout.screen.top) /
@@ -91,7 +91,7 @@ void EmuWindow::TouchPressed(unsigned framebuffer_x, unsigned framebuffer_y) {
}
void EmuWindow::TouchReleased() {
std::lock_guard<std::mutex> guard(touch_state->mutex);
std::lock_guard guard{touch_state->mutex};
touch_state->touch_pressed = false;
touch_state->touch_x = 0;
touch_state->touch_y = 0;

View File

@@ -26,7 +26,7 @@ void WakeThreads(const std::vector<SharedPtr<Thread>>& waiting_threads, s32 num_
// them all.
std::size_t last = waiting_threads.size();
if (num_to_wake > 0) {
last = num_to_wake;
last = std::min(last, static_cast<std::size_t>(num_to_wake));
}
// Signal the waiting threads.
@@ -90,9 +90,9 @@ ResultCode AddressArbiter::ModifyByWaitingCountAndSignalToAddressIfEqual(VAddr a
// Determine the modified value depending on the waiting count.
s32 updated_value;
if (waiting_threads.empty()) {
updated_value = value - 1;
} else if (num_to_wake <= 0 || waiting_threads.size() <= static_cast<u32>(num_to_wake)) {
updated_value = value + 1;
} else if (num_to_wake <= 0 || waiting_threads.size() <= static_cast<u32>(num_to_wake)) {
updated_value = value - 1;
} else {
updated_value = value;
}

View File

@@ -5,7 +5,6 @@
#pragma once
#include <cstddef>
#include <memory>
#include <vector>
#include "common/common_types.h"
@@ -78,7 +77,7 @@ struct CodeSet final {
}
/// The overall data that backs this code set.
std::shared_ptr<std::vector<u8>> memory;
std::vector<u8> memory;
/// The segments that comprise this code set.
std::array<Segment, 3> segments;

View File

@@ -29,12 +29,12 @@ namespace Kernel {
* @param thread_handle The handle of the thread that's been awoken
* @param cycles_late The number of CPU cycles that have passed since the desired wakeup time
*/
static void ThreadWakeupCallback(u64 thread_handle, [[maybe_unused]] int cycles_late) {
static void ThreadWakeupCallback(u64 thread_handle, [[maybe_unused]] s64 cycles_late) {
const auto proper_handle = static_cast<Handle>(thread_handle);
const auto& system = Core::System::GetInstance();
// Lock the global kernel mutex when we enter the kernel HLE.
std::lock_guard<std::recursive_mutex> lock(HLE::g_hle_lock);
std::lock_guard lock{HLE::g_hle_lock};
SharedPtr<Thread> thread =
system.Kernel().RetrieveThreadFromWakeupCallbackHandleTable(proper_handle);
@@ -62,7 +62,8 @@ static void ThreadWakeupCallback(u64 thread_handle, [[maybe_unused]] int cycles_
if (thread->GetMutexWaitAddress() != 0 || thread->GetCondVarWaitAddress() != 0 ||
thread->GetWaitHandle() != 0) {
ASSERT(thread->GetStatus() == ThreadStatus::WaitMutex);
ASSERT(thread->GetStatus() == ThreadStatus::WaitMutex ||
thread->GetStatus() == ThreadStatus::WaitCondVar);
thread->SetMutexWaitAddress(0);
thread->SetCondVarWaitAddress(0);
thread->SetWaitHandle(0);
@@ -190,6 +191,10 @@ const Process* KernelCore::CurrentProcess() const {
return impl->current_process;
}
const std::vector<SharedPtr<Process>>& KernelCore::GetProcessList() const {
return impl->process_list;
}
void KernelCore::AddNamedPort(std::string name, SharedPtr<ClientPort> port) {
impl->named_ports.emplace(std::move(name), std::move(port));
}

View File

@@ -8,9 +8,6 @@
#include <unordered_map>
#include "core/hle/kernel/object.h"
template <typename T>
class ResultVal;
namespace Core {
class System;
}
@@ -75,6 +72,9 @@ public:
/// Retrieves a const pointer to the current process.
const Process* CurrentProcess() const;
/// Retrieves the list of processes.
const std::vector<SharedPtr<Process>>& GetProcessList() const;
/// Adds a port to the named port table
void AddNamedPort(std::string name, SharedPtr<ClientPort> port);

View File

@@ -2,7 +2,6 @@
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <map>
#include <utility>
#include <vector>
@@ -10,8 +9,11 @@
#include "core/core.h"
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/mutex.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/result.h"
#include "core/memory.h"
@@ -57,41 +59,47 @@ static void TransferMutexOwnership(VAddr mutex_addr, SharedPtr<Thread> current_t
}
}
ResultCode Mutex::TryAcquire(HandleTable& handle_table, VAddr address, Handle holding_thread_handle,
Mutex::Mutex(Core::System& system) : system{system} {}
Mutex::~Mutex() = default;
ResultCode Mutex::TryAcquire(VAddr address, Handle holding_thread_handle,
Handle requesting_thread_handle) {
// The mutex address must be 4-byte aligned
if ((address % sizeof(u32)) != 0) {
return ERR_INVALID_ADDRESS;
}
const auto& handle_table = system.Kernel().CurrentProcess()->GetHandleTable();
Thread* const current_thread = system.CurrentScheduler().GetCurrentThread();
SharedPtr<Thread> holding_thread = handle_table.Get<Thread>(holding_thread_handle);
SharedPtr<Thread> requesting_thread = handle_table.Get<Thread>(requesting_thread_handle);
// TODO(Subv): It is currently unknown if it is possible to lock a mutex on behalf of another
// thread.
ASSERT(requesting_thread == GetCurrentThread());
ASSERT(requesting_thread == current_thread);
u32 addr_value = Memory::Read32(address);
const u32 addr_value = Memory::Read32(address);
// If the mutex isn't being held, just return success.
if (addr_value != (holding_thread_handle | Mutex::MutexHasWaitersFlag)) {
return RESULT_SUCCESS;
}
if (holding_thread == nullptr)
if (holding_thread == nullptr) {
return ERR_INVALID_HANDLE;
}
// Wait until the mutex is released
GetCurrentThread()->SetMutexWaitAddress(address);
GetCurrentThread()->SetWaitHandle(requesting_thread_handle);
current_thread->SetMutexWaitAddress(address);
current_thread->SetWaitHandle(requesting_thread_handle);
GetCurrentThread()->SetStatus(ThreadStatus::WaitMutex);
GetCurrentThread()->InvalidateWakeupCallback();
current_thread->SetStatus(ThreadStatus::WaitMutex);
current_thread->InvalidateWakeupCallback();
// Update the lock holder thread's priority to prevent priority inversion.
holding_thread->AddMutexWaiter(GetCurrentThread());
holding_thread->AddMutexWaiter(current_thread);
Core::System::GetInstance().PrepareReschedule();
system.PrepareReschedule();
return RESULT_SUCCESS;
}
@@ -102,7 +110,8 @@ ResultCode Mutex::Release(VAddr address) {
return ERR_INVALID_ADDRESS;
}
auto [thread, num_waiters] = GetHighestPriorityMutexWaitingThread(GetCurrentThread(), address);
auto* const current_thread = system.CurrentScheduler().GetCurrentThread();
auto [thread, num_waiters] = GetHighestPriorityMutexWaitingThread(current_thread, address);
// There are no more threads waiting for the mutex, release it completely.
if (thread == nullptr) {
@@ -111,7 +120,7 @@ ResultCode Mutex::Release(VAddr address) {
}
// Transfer the ownership of the mutex from the previous owner to the new one.
TransferMutexOwnership(address, GetCurrentThread(), thread);
TransferMutexOwnership(address, current_thread, thread);
u32 mutex_value = thread->GetWaitHandle();

View File

@@ -5,32 +5,34 @@
#pragma once
#include "common/common_types.h"
#include "core/hle/kernel/object.h"
union ResultCode;
namespace Kernel {
namespace Core {
class System;
}
class HandleTable;
class Thread;
namespace Kernel {
class Mutex final {
public:
explicit Mutex(Core::System& system);
~Mutex();
/// Flag that indicates that a mutex still has threads waiting for it.
static constexpr u32 MutexHasWaitersFlag = 0x40000000;
/// Mask of the bits in a mutex address value that contain the mutex owner.
static constexpr u32 MutexOwnerMask = 0xBFFFFFFF;
/// Attempts to acquire a mutex at the specified address.
static ResultCode TryAcquire(HandleTable& handle_table, VAddr address,
Handle holding_thread_handle, Handle requesting_thread_handle);
ResultCode TryAcquire(VAddr address, Handle holding_thread_handle,
Handle requesting_thread_handle);
/// Releases the mutex at the specified address.
static ResultCode Release(VAddr address);
ResultCode Release(VAddr address);
private:
Mutex() = default;
~Mutex() = default;
Core::System& system;
};
} // namespace Kernel
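The two constants encode how a mutex word packs its state: the owner's thread handle occupies every bit except bit 30, which flags that other threads are waiting. A small self-contained check of that encoding (the handle value is hypothetical):

#include <cassert>
#include <cstdint>

int main() {
    constexpr std::uint32_t MutexHasWaitersFlag = 0x40000000;
    constexpr std::uint32_t MutexOwnerMask = 0xBFFFFFFF;

    const std::uint32_t owner_handle = 0x0000ABCD; // assumed to never use bit 30
    const std::uint32_t mutex_value = owner_handle | MutexHasWaitersFlag;

    assert((mutex_value & MutexOwnerMask) == owner_handle);
    assert((mutex_value & MutexHasWaitersFlag) != 0);
}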

View File

@@ -23,6 +23,7 @@ bool Object::IsWaitable() const {
case HandleType::Unknown:
case HandleType::WritableEvent:
case HandleType::SharedMemory:
case HandleType::TransferMemory:
case HandleType::AddressArbiter:
case HandleType::ResourceLimit:
case HandleType::ClientPort:

View File

@@ -22,6 +22,7 @@ enum class HandleType : u32 {
WritableEvent,
ReadableEvent,
SharedMemory,
TransferMemory,
Thread,
Process,
AddressArbiter,

View File

@@ -5,6 +5,7 @@
#include <algorithm>
#include <memory>
#include <random>
#include "common/alignment.h"
#include "common/assert.h"
#include "common/logging/log.h"
#include "core/core.h"
@@ -75,6 +76,18 @@ SharedPtr<ResourceLimit> Process::GetResourceLimit() const {
return resource_limit;
}
u64 Process::GetTotalPhysicalMemoryUsed() const {
return vm_manager.GetCurrentHeapSize() + main_thread_stack_size + code_memory_size;
}
void Process::RegisterThread(const Thread* thread) {
thread_list.push_back(thread);
}
void Process::UnregisterThread(const Thread* thread) {
thread_list.remove(thread);
}
ResultCode Process::ClearSignalState() {
if (status == ProcessStatus::Exited) {
LOG_ERROR(Kernel, "called on a terminated process instance.");
@@ -107,14 +120,17 @@ ResultCode Process::LoadFromMetadata(const FileSys::ProgramMetadata& metadata) {
return handle_table.SetSize(capabilities.GetHandleTableSize());
}
void Process::Run(VAddr entry_point, s32 main_thread_priority, u32 stack_size) {
void Process::Run(VAddr entry_point, s32 main_thread_priority, u64 stack_size) {
// The kernel always ensures that the given stack size is page aligned.
main_thread_stack_size = Common::AlignUp(stack_size, Memory::PAGE_SIZE);
// Allocate and map the main thread stack
// TODO(bunnei): This is a heap area that should be allocated by the kernel and not mapped as
// part of the user address space.
const VAddr mapping_address = vm_manager.GetTLSIORegionEndAddress() - main_thread_stack_size;
vm_manager
.MapMemoryBlock(vm_manager.GetTLSIORegionEndAddress() - stack_size,
std::make_shared<std::vector<u8>>(stack_size, 0), 0, stack_size,
MemoryState::Stack)
.MapMemoryBlock(mapping_address, std::make_shared<std::vector<u8>>(main_thread_stack_size),
0, main_thread_stack_size, MemoryState::Stack)
.Unwrap();
vm_manager.LogLayout();
@@ -210,26 +226,31 @@ void Process::FreeTLSSlot(VAddr tls_address) {
}
void Process::LoadModule(CodeSet module_, VAddr base_addr) {
const auto memory = std::make_shared<std::vector<u8>>(std::move(module_.memory));
const auto MapSegment = [&](const CodeSet::Segment& segment, VMAPermission permissions,
MemoryState memory_state) {
const auto vma = vm_manager
.MapMemoryBlock(segment.addr + base_addr, module_.memory,
segment.offset, segment.size, memory_state)
.MapMemoryBlock(segment.addr + base_addr, memory, segment.offset,
segment.size, memory_state)
.Unwrap();
vm_manager.Reprotect(vma, permissions);
};
// Map CodeSet segments
MapSegment(module_.CodeSegment(), VMAPermission::ReadExecute, MemoryState::CodeStatic);
MapSegment(module_.RODataSegment(), VMAPermission::Read, MemoryState::CodeMutable);
MapSegment(module_.DataSegment(), VMAPermission::ReadWrite, MemoryState::CodeMutable);
MapSegment(module_.CodeSegment(), VMAPermission::ReadExecute, MemoryState::Code);
MapSegment(module_.RODataSegment(), VMAPermission::Read, MemoryState::CodeData);
MapSegment(module_.DataSegment(), VMAPermission::ReadWrite, MemoryState::CodeData);
code_memory_size += module_.memory.size();
// Clear instruction cache in CPU JIT
system.InvalidateCpuInstructionCaches();
}
Process::Process(Core::System& system)
: WaitObject{system.Kernel()}, address_arbiter{system}, system{system} {}
: WaitObject{system.Kernel()}, address_arbiter{system}, mutex{system}, system{system} {}
Process::~Process() = default;
void Process::Acquire(Thread* thread) {

View File

@@ -7,12 +7,14 @@
#include <array>
#include <bitset>
#include <cstddef>
#include <list>
#include <string>
#include <vector>
#include <boost/container/static_vector.hpp>
#include "common/common_types.h"
#include "core/hle/kernel/address_arbiter.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/mutex.h"
#include "core/hle/kernel/process_capability.h"
#include "core/hle/kernel/vm_manager.h"
#include "core/hle/kernel/wait_object.h"
@@ -34,14 +36,6 @@ class Thread;
struct CodeSet;
struct AddressMapping {
// Address and size must be page-aligned
VAddr address;
u64 size;
bool read_only;
bool unk_flag;
};
enum class MemoryRegion : u16 {
APPLICATION = 1,
SYSTEM = 2,
@@ -126,6 +120,16 @@ public:
return address_arbiter;
}
/// Gets a reference to the process' mutex lock.
Mutex& GetMutex() {
return mutex;
}
/// Gets a const reference to the process' mutex lock
const Mutex& GetMutex() const {
return mutex;
}
/// Gets the current status of the process
ProcessStatus GetStatus() const {
return status;
@@ -183,6 +187,22 @@ public:
return random_entropy.at(index);
}
/// Retrieves the total physical memory used by this process in bytes.
u64 GetTotalPhysicalMemoryUsed() const;
/// Gets the list of all threads created with this process as their owner.
const std::list<const Thread*>& GetThreadList() const {
return thread_list;
}
/// Registers a thread as being created under this process,
/// adding it to this process' thread list.
void RegisterThread(const Thread* thread);
/// Unregisters a thread from this process, removing it
/// from this process' thread list.
void UnregisterThread(const Thread* thread);
/// Clears the signaled state of the process if and only if it's signaled.
///
/// @pre The process must not be already terminated. If this is called on a
@@ -207,7 +227,7 @@ public:
/**
* Applies address space changes and launches the process main thread.
*/
void Run(VAddr entry_point, s32 main_thread_priority, u32 stack_size);
void Run(VAddr entry_point, s32 main_thread_priority, u64 stack_size);
/**
* Prepares a process for termination by stopping all of its threads
@@ -244,6 +264,12 @@ private:
/// Memory manager for this process.
Kernel::VMManager vm_manager;
/// Size of the main thread's stack in bytes.
u64 main_thread_stack_size = 0;
/// Size of the loaded code memory in bytes.
u64 code_memory_size = 0;
/// Current status of the process
ProcessStatus status;
@@ -288,9 +314,17 @@ private:
/// Per-process address arbiter.
AddressArbiter address_arbiter;
/// The per-process mutex lock instance used for handling various
/// services, such as lock arbitration and condition-variable
/// related facilities.
Mutex mutex;
/// Random values for svcGetInfo RandomEntropy
std::array<u64, RANDOM_ENTROPY_SIZE> random_entropy;
/// List of threads that are running with this process as their owner.
std::list<const Thread*> thread_list;
/// System context
Core::System& system;

View File

@@ -29,8 +29,8 @@ Scheduler::~Scheduler() {
}
bool Scheduler::HaveReadyThreads() const {
std::lock_guard<std::mutex> lock(scheduler_mutex);
return ready_queue.get_first() != nullptr;
std::lock_guard lock{scheduler_mutex};
return !ready_queue.empty();
}
Thread* Scheduler::GetCurrentThread() const {
@@ -46,22 +46,27 @@ Thread* Scheduler::PopNextReadyThread() {
Thread* thread = GetCurrentThread();
if (thread && thread->GetStatus() == ThreadStatus::Running) {
if (ready_queue.empty()) {
return thread;
}
// We have to do better than the current thread.
// This call returns null when that's not possible.
next = ready_queue.pop_first_better(thread->GetPriority());
if (!next) {
// Otherwise just keep going with the current thread
next = ready_queue.front();
if (next == nullptr || next->GetPriority() >= thread->GetPriority()) {
next = thread;
}
} else {
next = ready_queue.pop_first();
if (ready_queue.empty()) {
return nullptr;
}
next = ready_queue.front();
}
return next;
}
void Scheduler::SwitchContext(Thread* new_thread) {
Thread* const previous_thread = GetCurrentThread();
Thread* previous_thread = GetCurrentThread();
Process* const previous_process = system.Kernel().CurrentProcess();
UpdateLastContextSwitchTime(previous_thread, previous_process);
@@ -75,7 +80,7 @@ void Scheduler::SwitchContext(Thread* new_thread) {
if (previous_thread->GetStatus() == ThreadStatus::Running) {
// This is only the case when a reschedule is triggered without the current thread
// yielding execution (i.e. an event triggered, system core time-sliced, etc)
ready_queue.push_front(previous_thread->GetPriority(), previous_thread);
ready_queue.add(previous_thread, previous_thread->GetPriority(), false);
previous_thread->SetStatus(ThreadStatus::Ready);
}
}
@@ -90,7 +95,7 @@ void Scheduler::SwitchContext(Thread* new_thread) {
current_thread = new_thread;
ready_queue.remove(new_thread->GetPriority(), new_thread);
ready_queue.remove(new_thread, new_thread->GetPriority());
new_thread->SetStatus(ThreadStatus::Running);
auto* const thread_owner_process = current_thread->GetOwnerProcess();
@@ -127,7 +132,7 @@ void Scheduler::UpdateLastContextSwitchTime(Thread* thread, Process* process) {
}
void Scheduler::Reschedule() {
std::lock_guard<std::mutex> lock(scheduler_mutex);
std::lock_guard lock{scheduler_mutex};
Thread* cur = GetCurrentThread();
Thread* next = PopNextReadyThread();
@@ -143,51 +148,54 @@ void Scheduler::Reschedule() {
SwitchContext(next);
}
void Scheduler::AddThread(SharedPtr<Thread> thread, u32 priority) {
std::lock_guard<std::mutex> lock(scheduler_mutex);
void Scheduler::AddThread(SharedPtr<Thread> thread) {
std::lock_guard lock{scheduler_mutex};
thread_list.push_back(std::move(thread));
ready_queue.prepare(priority);
}
void Scheduler::RemoveThread(Thread* thread) {
std::lock_guard<std::mutex> lock(scheduler_mutex);
std::lock_guard lock{scheduler_mutex};
thread_list.erase(std::remove(thread_list.begin(), thread_list.end(), thread),
thread_list.end());
}
void Scheduler::ScheduleThread(Thread* thread, u32 priority) {
std::lock_guard<std::mutex> lock(scheduler_mutex);
std::lock_guard lock{scheduler_mutex};
ASSERT(thread->GetStatus() == ThreadStatus::Ready);
ready_queue.push_back(priority, thread);
ready_queue.add(thread, priority);
}
void Scheduler::UnscheduleThread(Thread* thread, u32 priority) {
std::lock_guard<std::mutex> lock(scheduler_mutex);
std::lock_guard lock{scheduler_mutex};
ASSERT(thread->GetStatus() == ThreadStatus::Ready);
ready_queue.remove(priority, thread);
ready_queue.remove(thread, priority);
}
void Scheduler::SetThreadPriority(Thread* thread, u32 priority) {
std::lock_guard<std::mutex> lock(scheduler_mutex);
std::lock_guard lock{scheduler_mutex};
if (thread->GetPriority() == priority) {
return;
}
// If thread was ready, adjust queues
if (thread->GetStatus() == ThreadStatus::Ready)
ready_queue.move(thread, thread->GetPriority(), priority);
else
ready_queue.prepare(priority);
ready_queue.adjust(thread, thread->GetPriority(), priority);
}
Thread* Scheduler::GetNextSuggestedThread(u32 core, u32 maximum_priority) const {
std::lock_guard<std::mutex> lock(scheduler_mutex);
std::lock_guard lock{scheduler_mutex};
const u32 mask = 1U << core;
return ready_queue.get_first_filter([mask, maximum_priority](Thread const* thread) {
return (thread->GetAffinityMask() & mask) != 0 && thread->GetPriority() < maximum_priority;
});
for (auto* thread : ready_queue) {
if ((thread->GetAffinityMask() & mask) != 0 && thread->GetPriority() < maximum_priority) {
return thread;
}
}
return nullptr;
}
void Scheduler::YieldWithoutLoadBalancing(Thread* thread) {

View File

@@ -7,7 +7,7 @@
#include <mutex>
#include <vector>
#include "common/common_types.h"
#include "common/thread_queue_list.h"
#include "common/multi_level_queue.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/thread.h"
@@ -38,7 +38,7 @@ public:
u64 GetLastContextSwitchTicks() const;
/// Adds a new thread to the scheduler
void AddThread(SharedPtr<Thread> thread, u32 priority);
void AddThread(SharedPtr<Thread> thread);
/// Removes a thread from the scheduler
void RemoveThread(Thread* thread);
@@ -156,7 +156,7 @@ private:
std::vector<SharedPtr<Thread>> thread_list;
/// Lists only ready thread ids.
Common::ThreadQueueList<Thread*, THREADPRIO_LOWEST + 1> ready_queue;
Common::MultiLevelQueue<Thread*, THREADPRIO_LOWEST + 1> ready_queue;
SharedPtr<Thread> current_thread = nullptr;
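The scheduler hunks above swap ThreadQueueList for MultiLevelQueue, with an add/remove/adjust/front interface. As a rough mental model of what such a priority-bucketed ready queue provides, here is a sketch under stated assumptions, not the actual Common::MultiLevelQueue:

#include <algorithm>
#include <array>
#include <cstddef>
#include <deque>

// One FIFO bucket per priority; front() scans from priority 0 upward
// (lower value = higher priority). Illustrative only.
template <typename T, std::size_t Depth>
class MiniMultiLevelQueue {
public:
    void add(T item, std::size_t priority, bool at_back = true) {
        auto& queue = queues[priority];
        if (at_back) {
            queue.push_back(item);
        } else {
            queue.push_front(item);
        }
    }
    void remove(T item, std::size_t priority) {
        auto& queue = queues[priority];
        queue.erase(std::remove(queue.begin(), queue.end(), item), queue.end());
    }
    T front() const { // assumes T is pointer-like, as with Thread* above
        for (const auto& queue : queues) {
            if (!queue.empty()) {
                return queue.front();
            }
        }
        return nullptr;
    }
    bool empty() const {
        for (const auto& queue : queues) {
            if (!queue.empty()) {
                return false;
            }
        }
        return true;
    }

private:
    std::array<std::deque<T>, Depth> queues{};
};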

View File

@@ -32,6 +32,7 @@
#include "core/hle/kernel/svc.h"
#include "core/hle/kernel/svc_wrap.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/kernel/transfer_memory.h"
#include "core/hle/kernel/writable_event.h"
#include "core/hle/lock.h"
#include "core/hle/result.h"
@@ -174,11 +175,8 @@ static ResultCode SetHeapSize(VAddr* heap_addr, u64 heap_size) {
return ERR_INVALID_SIZE;
}
auto& vm_manager = Core::CurrentProcess()->VMManager();
const VAddr heap_base = vm_manager.GetHeapRegionBaseAddress();
const auto alloc_result =
vm_manager.HeapAllocate(heap_base, heap_size, VMAPermission::ReadWrite);
auto& vm_manager = Core::System::GetInstance().Kernel().CurrentProcess()->VMManager();
const auto alloc_result = vm_manager.SetHeapSize(heap_size);
if (alloc_result.Failed()) {
return alloc_result.Code();
}
@@ -551,9 +549,9 @@ static ResultCode ArbitrateLock(Handle holding_thread_handle, VAddr mutex_addr,
return ERR_INVALID_ADDRESS;
}
auto& handle_table = Core::CurrentProcess()->GetHandleTable();
return Mutex::TryAcquire(handle_table, mutex_addr, holding_thread_handle,
requesting_thread_handle);
auto* const current_process = Core::System::GetInstance().Kernel().CurrentProcess();
return current_process->GetMutex().TryAcquire(mutex_addr, holding_thread_handle,
requesting_thread_handle);
}
/// Unlock a mutex
@@ -571,7 +569,8 @@ static ResultCode ArbitrateUnlock(VAddr mutex_addr) {
return ERR_INVALID_ADDRESS;
}
return Mutex::Release(mutex_addr);
auto* const current_process = Core::System::GetInstance().Kernel().CurrentProcess();
return current_process->GetMutex().Release(mutex_addr);
}
enum class BreakType : u32 {
@@ -710,7 +709,7 @@ static ResultCode GetInfo(u64* result, u64 info_id, u64 handle, u64 info_sub_id)
HeapRegionBaseAddr = 4,
HeapRegionSize = 5,
TotalMemoryUsage = 6,
TotalHeapUsage = 7,
TotalPhysicalMemoryUsed = 7,
IsCurrentProcessBeingDebugged = 8,
RegisterResourceLimit = 9,
IdleTickCount = 10,
@@ -746,7 +745,7 @@ static ResultCode GetInfo(u64* result, u64 info_id, u64 handle, u64 info_sub_id)
case GetInfoType::NewMapRegionBaseAddr:
case GetInfoType::NewMapRegionSize:
case GetInfoType::TotalMemoryUsage:
case GetInfoType::TotalHeapUsage:
case GetInfoType::TotalPhysicalMemoryUsed:
case GetInfoType::IsVirtualAddressMemoryEnabled:
case GetInfoType::PersonalMmHeapUsage:
case GetInfoType::TitleId:
@@ -806,8 +805,8 @@ static ResultCode GetInfo(u64* result, u64 info_id, u64 handle, u64 info_sub_id)
*result = process->VMManager().GetTotalMemoryUsage();
return RESULT_SUCCESS;
case GetInfoType::TotalHeapUsage:
*result = process->VMManager().GetTotalHeapUsage();
case GetInfoType::TotalPhysicalMemoryUsed:
*result = process->GetTotalPhysicalMemoryUsed();
return RESULT_SUCCESS;
case GetInfoType::IsVirtualAddressMemoryEnabled:
@@ -1340,17 +1339,21 @@ static ResultCode WaitProcessWideKeyAtomic(VAddr mutex_addr, VAddr condition_var
"called mutex_addr={:X}, condition_variable_addr={:X}, thread_handle=0x{:08X}, timeout={}",
mutex_addr, condition_variable_addr, thread_handle, nano_seconds);
const auto& handle_table = Core::CurrentProcess()->GetHandleTable();
auto* const current_process = Core::System::GetInstance().Kernel().CurrentProcess();
const auto& handle_table = current_process->GetHandleTable();
SharedPtr<Thread> thread = handle_table.Get<Thread>(thread_handle);
ASSERT(thread);
CASCADE_CODE(Mutex::Release(mutex_addr));
const auto release_result = current_process->GetMutex().Release(mutex_addr);
if (release_result.IsError()) {
return release_result;
}
SharedPtr<Thread> current_thread = GetCurrentThread();
current_thread->SetCondVarWaitAddress(condition_variable_addr);
current_thread->SetMutexWaitAddress(mutex_addr);
current_thread->SetWaitHandle(thread_handle);
current_thread->SetStatus(ThreadStatus::WaitMutex);
current_thread->SetStatus(ThreadStatus::WaitCondVar);
current_thread->InvalidateWakeupCallback();
current_thread->WakeAfterDelay(nano_seconds);
@@ -1394,10 +1397,10 @@ static ResultCode SignalProcessWideKey(VAddr condition_variable_addr, s32 target
// them all.
std::size_t last = waiting_threads.size();
if (target != -1)
last = target;
last = std::min(waiting_threads.size(), static_cast<std::size_t>(target));
// If there are no threads waiting on this condition variable, just exit
if (last > waiting_threads.size())
if (last == 0)
return RESULT_SUCCESS;
for (std::size_t index = 0; index < last; ++index) {
@@ -1405,6 +1408,9 @@ static ResultCode SignalProcessWideKey(VAddr condition_variable_addr, s32 target
ASSERT(thread->GetCondVarWaitAddress() == condition_variable_addr);
// Liberate the condition variable thread.
thread->SetCondVarWaitAddress(0);
std::size_t current_core = Core::System::GetInstance().CurrentCoreIndex();
auto& monitor = Core::System::GetInstance().Monitor();
@@ -1423,10 +1429,9 @@ static ResultCode SignalProcessWideKey(VAddr condition_variable_addr, s32 target
}
} while (!monitor.ExclusiveWrite32(current_core, thread->GetMutexWaitAddress(),
thread->GetWaitHandle()));
if (mutex_val == 0) {
// We were able to acquire the mutex, resume this thread.
ASSERT(thread->GetStatus() == ThreadStatus::WaitMutex);
ASSERT(thread->GetStatus() == ThreadStatus::WaitCondVar);
thread->ResumeFromWait();
auto* const lock_owner = thread->GetLockOwner();
@@ -1436,8 +1441,8 @@ static ResultCode SignalProcessWideKey(VAddr condition_variable_addr, s32 target
thread->SetLockOwner(nullptr);
thread->SetMutexWaitAddress(0);
thread->SetCondVarWaitAddress(0);
thread->SetWaitHandle(0);
Core::System::GetInstance().CpuCore(thread->GetProcessorID()).PrepareReschedule();
} else {
// Atomically signal that the mutex now has a waiting thread.
do {
@@ -1456,12 +1461,11 @@ static ResultCode SignalProcessWideKey(VAddr condition_variable_addr, s32 target
const auto& handle_table = Core::CurrentProcess()->GetHandleTable();
auto owner = handle_table.Get<Thread>(owner_handle);
ASSERT(owner);
ASSERT(thread->GetStatus() == ThreadStatus::WaitMutex);
ASSERT(thread->GetStatus() == ThreadStatus::WaitCondVar);
thread->InvalidateWakeupCallback();
thread->SetStatus(ThreadStatus::WaitMutex);
owner->AddMutexWaiter(thread);
Core::System::GetInstance().CpuCore(thread->GetProcessorID()).PrepareReschedule();
}
}
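For clarity on the status split above: a thread blocked in WaitProcessWideKeyAtomic now waits in WaitCondVar until it is signalled, and only transitions to WaitMutex if the mutex is still contended at wake-up. A minimal standalone model of that decision (the names here are illustrative, not the emulator's API):

#include <cassert>

enum class ThreadStatus { Ready, WaitCondVar, WaitMutex };

// Models what SignalProcessWideKey does to a single waiter, depending on
// whether the waiter managed to acquire the associated mutex when woken.
ThreadStatus SignalOne(ThreadStatus status, bool acquired_mutex) {
    assert(status == ThreadStatus::WaitCondVar);
    return acquired_mutex ? ThreadStatus::Ready : ThreadStatus::WaitMutex;
}

int main() {
    assert(SignalOne(ThreadStatus::WaitCondVar, true) == ThreadStatus::Ready);
    assert(SignalOne(ThreadStatus::WaitCondVar, false) == ThreadStatus::WaitMutex);
    return 0;
}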
@@ -1581,14 +1585,121 @@ static ResultCode CreateTransferMemory(Handle* handle, VAddr addr, u64 size, u32
}
auto& kernel = Core::System::GetInstance().Kernel();
auto process = kernel.CurrentProcess();
auto& handle_table = process->GetHandleTable();
const auto shared_mem_handle = SharedMemory::Create(kernel, process, size, perms, perms, addr);
auto transfer_mem_handle = TransferMemory::Create(kernel, addr, size, perms);
CASCADE_RESULT(*handle, handle_table.Create(shared_mem_handle));
auto& handle_table = kernel.CurrentProcess()->GetHandleTable();
const auto result = handle_table.Create(std::move(transfer_mem_handle));
if (result.Failed()) {
return result.Code();
}
*handle = *result;
return RESULT_SUCCESS;
}
static ResultCode MapTransferMemory(Handle handle, VAddr address, u64 size, u32 permission_raw) {
LOG_DEBUG(Kernel_SVC,
"called. handle=0x{:08X}, address=0x{:016X}, size=0x{:016X}, permissions=0x{:08X}",
handle, address, size, permission_raw);
if (!Common::Is4KBAligned(address)) {
LOG_ERROR(Kernel_SVC, "Transfer memory addresses must be 4KB aligned (size=0x{:016X}).",
address);
return ERR_INVALID_ADDRESS;
}
if (size == 0 || !Common::Is4KBAligned(size)) {
LOG_ERROR(Kernel_SVC,
"Transfer memory sizes must be 4KB aligned and not be zero (size=0x{:016X}).",
size);
return ERR_INVALID_SIZE;
}
if (!IsValidAddressRange(address, size)) {
LOG_ERROR(Kernel_SVC,
"Given address and size overflows the 64-bit range (address=0x{:016X}, "
"size=0x{:016X}).",
address, size);
return ERR_INVALID_ADDRESS_STATE;
}
const auto permissions = static_cast<MemoryPermission>(permission_raw);
if (permissions != MemoryPermission::None && permissions != MemoryPermission::Read &&
permissions != MemoryPermission::ReadWrite) {
LOG_ERROR(Kernel_SVC, "Invalid transfer memory permissions given (permissions=0x{:08X}).",
permission_raw);
return ERR_INVALID_STATE;
}
const auto& kernel = Core::System::GetInstance().Kernel();
const auto* const current_process = kernel.CurrentProcess();
const auto& handle_table = current_process->GetHandleTable();
auto transfer_memory = handle_table.Get<TransferMemory>(handle);
if (!transfer_memory) {
LOG_ERROR(Kernel_SVC, "Nonexistent transfer memory handle given (handle=0x{:08X}).",
handle);
return ERR_INVALID_HANDLE;
}
if (!current_process->VMManager().IsWithinASLRRegion(address, size)) {
LOG_ERROR(Kernel_SVC,
"Given address and size don't fully fit within the ASLR region "
"(address=0x{:016X}, size=0x{:016X}).",
address, size);
return ERR_INVALID_MEMORY_RANGE;
}
return transfer_memory->MapMemory(address, size, permissions);
}
static ResultCode UnmapTransferMemory(Handle handle, VAddr address, u64 size) {
LOG_DEBUG(Kernel_SVC, "called. handle=0x{:08X}, address=0x{:016X}, size=0x{:016X}", handle,
address, size);
if (!Common::Is4KBAligned(address)) {
LOG_ERROR(Kernel_SVC, "Transfer memory addresses must be 4KB aligned (size=0x{:016X}).",
address);
return ERR_INVALID_ADDRESS;
}
if (size == 0 || !Common::Is4KBAligned(size)) {
LOG_ERROR(Kernel_SVC,
"Transfer memory sizes must be 4KB aligned and not be zero (size=0x{:016X}).",
size);
return ERR_INVALID_SIZE;
}
if (!IsValidAddressRange(address, size)) {
LOG_ERROR(Kernel_SVC,
"Given address and size overflows the 64-bit range (address=0x{:016X}, "
"size=0x{:016X}).",
address, size);
return ERR_INVALID_ADDRESS_STATE;
}
const auto& kernel = Core::System::GetInstance().Kernel();
const auto* const current_process = kernel.CurrentProcess();
const auto& handle_table = current_process->GetHandleTable();
auto transfer_memory = handle_table.Get<TransferMemory>(handle);
if (!transfer_memory) {
LOG_ERROR(Kernel_SVC, "Nonexistent transfer memory handle given (handle=0x{:08X}).",
handle);
return ERR_INVALID_HANDLE;
}
if (!current_process->VMManager().IsWithinASLRRegion(address, size)) {
LOG_ERROR(Kernel_SVC,
"Given address and size don't fully fit within the ASLR region "
"(address=0x{:016X}, size=0x{:016X}).",
address, size);
return ERR_INVALID_MEMORY_RANGE;
}
return transfer_memory->UnmapMemory(address, size);
}
static ResultCode GetThreadCoreMask(Handle thread_handle, u32* core, u64* mask) {
LOG_TRACE(Kernel_SVC, "called, handle=0x{:08X}", thread_handle);
@@ -1872,6 +1983,83 @@ static ResultCode SetResourceLimitLimitValue(Handle resource_limit, u32 resource
return RESULT_SUCCESS;
}
static ResultCode GetProcessList(u32* out_num_processes, VAddr out_process_ids,
u32 out_process_ids_size) {
LOG_DEBUG(Kernel_SVC, "called. out_process_ids=0x{:016X}, out_process_ids_size={}",
out_process_ids, out_process_ids_size);
// If the supplied size is negative or greater than INT32_MAX / sizeof(u64), bail.
if ((out_process_ids_size & 0xF0000000) != 0) {
LOG_ERROR(Kernel_SVC,
"Supplied size outside [0, 0x0FFFFFFF] range. out_process_ids_size={}",
out_process_ids_size);
return ERR_OUT_OF_RANGE;
}
const auto& kernel = Core::System::GetInstance().Kernel();
const auto& vm_manager = kernel.CurrentProcess()->VMManager();
const auto total_copy_size = out_process_ids_size * sizeof(u64);
if (out_process_ids_size > 0 &&
!vm_manager.IsWithinAddressSpace(out_process_ids, total_copy_size)) {
LOG_ERROR(Kernel_SVC, "Address range outside address space. begin=0x{:016X}, end=0x{:016X}",
out_process_ids, out_process_ids + total_copy_size);
return ERR_INVALID_ADDRESS_STATE;
}
const auto& process_list = kernel.GetProcessList();
const auto num_processes = process_list.size();
const auto copy_amount = std::min(std::size_t{out_process_ids_size}, num_processes);
for (std::size_t i = 0; i < copy_amount; ++i) {
Memory::Write64(out_process_ids, process_list[i]->GetProcessID());
out_process_ids += sizeof(u64);
}
*out_num_processes = static_cast<u32>(num_processes);
return RESULT_SUCCESS;
}
static ResultCode GetThreadList(u32* out_num_threads, VAddr out_thread_ids, u32 out_thread_ids_size,
Handle debug_handle) {
// TODO: Handle this case when debug events are supported.
UNIMPLEMENTED_IF(debug_handle != InvalidHandle);
LOG_DEBUG(Kernel_SVC, "called. out_thread_ids=0x{:016X}, out_thread_ids_size={}",
out_thread_ids, out_thread_ids_size);
// If the size is negative or larger than INT32_MAX / sizeof(u64)
if ((out_thread_ids_size & 0xF0000000) != 0) {
LOG_ERROR(Kernel_SVC, "Supplied size outside [0, 0x0FFFFFFF] range. size={}",
out_thread_ids_size);
return ERR_OUT_OF_RANGE;
}
const auto* const current_process = Core::System::GetInstance().Kernel().CurrentProcess();
const auto& vm_manager = current_process->VMManager();
const auto total_copy_size = out_thread_ids_size * sizeof(u64);
if (out_thread_ids_size > 0 &&
!vm_manager.IsWithinAddressSpace(out_thread_ids, total_copy_size)) {
LOG_ERROR(Kernel_SVC, "Address range outside address space. begin=0x{:016X}, end=0x{:016X}",
out_thread_ids, out_thread_ids + total_copy_size);
return ERR_INVALID_ADDRESS_STATE;
}
const auto& thread_list = current_process->GetThreadList();
const auto num_threads = thread_list.size();
const auto copy_amount = std::min(std::size_t{out_thread_ids_size}, num_threads);
auto list_iter = thread_list.cbegin();
for (std::size_t i = 0; i < copy_amount; ++i, ++list_iter) {
Memory::Write64(out_thread_ids, (*list_iter)->GetThreadID());
out_thread_ids += sizeof(u64);
}
*out_num_threads = static_cast<u32>(num_threads);
return RESULT_SUCCESS;
}
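The (size & 0xF0000000) != 0 guard used by both functions above reads as a plain range check: if any of the top four bits is set, the supplied count either exceeds INT32_MAX / sizeof(u64) (0x0FFFFFFF) or was a negative s32. A self-contained sketch (IsCountOutOfRange is an illustrative name, not from the source):

#include <cassert>
#include <cstdint>

constexpr bool IsCountOutOfRange(std::uint32_t count) {
    return (count & 0xF0000000u) != 0;
}

static_assert(!IsCountOutOfRange(0x0FFFFFFF)); // INT32_MAX / sizeof(u64)
static_assert(IsCountOutOfRange(0x10000000));  // one past the limit
static_assert(IsCountOutOfRange(0xFFFFFFFF));  // a "negative" s32 count

int main() {
    assert(!IsCountOutOfRange(0));
    return 0;
}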
namespace {
struct FunctionDef {
using Func = void();
@@ -1964,8 +2152,8 @@ static const FunctionDef SVC_Table[] = {
{0x4E, nullptr, "ReadWriteRegister"},
{0x4F, nullptr, "SetProcessActivity"},
{0x50, SvcWrap<CreateSharedMemory>, "CreateSharedMemory"},
{0x51, nullptr, "MapTransferMemory"},
{0x52, nullptr, "UnmapTransferMemory"},
{0x51, SvcWrap<MapTransferMemory>, "MapTransferMemory"},
{0x52, SvcWrap<UnmapTransferMemory>, "UnmapTransferMemory"},
{0x53, nullptr, "CreateInterruptEvent"},
{0x54, nullptr, "QueryPhysicalAddress"},
{0x55, nullptr, "QueryIoMapping"},
@@ -1984,8 +2172,8 @@ static const FunctionDef SVC_Table[] = {
{0x62, nullptr, "TerminateDebugProcess"},
{0x63, nullptr, "GetDebugEvent"},
{0x64, nullptr, "ContinueDebugEvent"},
{0x65, nullptr, "GetProcessList"},
{0x66, nullptr, "GetThreadList"},
{0x65, SvcWrap<GetProcessList>, "GetProcessList"},
{0x66, SvcWrap<GetThreadList>, "GetThreadList"},
{0x67, nullptr, "GetDebugThreadContext"},
{0x68, nullptr, "SetDebugThreadContext"},
{0x69, nullptr, "QueryDebugProcessMemory"},
@@ -2027,7 +2215,7 @@ void CallSVC(u32 immediate) {
MICROPROFILE_SCOPE(Kernel_SVC);
// Lock the global kernel mutex when we enter the kernel HLE.
std::lock_guard<std::recursive_mutex> lock(HLE::g_hle_lock);
std::lock_guard lock{HLE::g_hle_lock};
const FunctionDef* info = GetSVCInfo(immediate);
if (info) {

View File

@@ -78,6 +78,14 @@ void SvcWrap() {
FuncReturn(retval);
}
template <ResultCode func(u32*, u64, u32)>
void SvcWrap() {
u32 param_1 = 0;
const u32 retval = func(&param_1, Param(1), static_cast<u32>(Param(2))).raw;
Core::CurrentArmInterface().SetReg(1, param_1);
FuncReturn(retval);
}
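For context on how these wrappers are used: each SVC table entry instantiates the template with a typed handler, and the wrapper marshals guest registers in and out. A toy standalone model of that convention (Param, SetReg, and FuncReturn are stand-ins for the emulator's register accessors, and ResultCode is simplified here):

#include <array>
#include <cstdint>
#include <cstdio>

using u32 = std::uint32_t;
using u64 = std::uint64_t;

static std::array<u64, 31> regs{};
static u64 Param(int n) { return regs[n]; }
static void SetReg(int n, u64 v) { regs[n] = v; }
static void FuncReturn(u32 v) { regs[0] = v; }

struct ResultCode { u32 raw; };

// Same shape as the new overload above: unpack the registers, call the
// typed handler, then write the out-parameter back to register 1.
template <ResultCode func(u32*, u64, u32)>
void SvcWrap() {
    u32 param_1 = 0;
    const u32 retval = func(&param_1, Param(1), static_cast<u32>(Param(2))).raw;
    SetReg(1, param_1);
    FuncReturn(retval);
}

// A toy handler with the GetProcessList-style signature.
static ResultCode DummyGetProcessList(u32* out_count, u64 out_addr, u32 capacity) {
    (void)out_addr;
    (void)capacity;
    *out_count = 3; // pretend three processes exist
    return ResultCode{0};
}

int main() {
    regs[1] = 0xDEADBEEF; // guest buffer address
    regs[2] = 8;          // guest buffer capacity, in entries
    SvcWrap<DummyGetProcessList>();
    std::printf("result=%u count=%llu\n", static_cast<u32>(regs[0]),
                static_cast<unsigned long long>(regs[1]));
    return 0;
}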
template <ResultCode func(u64*, u32)>
void SvcWrap() {
u64 param_1 = 0;

View File

@@ -62,6 +62,8 @@ void Thread::Stop() {
}
wait_objects.clear();
owner_process->UnregisterThread(this);
// Mark the TLS slot in the thread's page as free.
owner_process->FreeTLSSlot(tls_address);
}
@@ -105,6 +107,7 @@ void Thread::ResumeFromWait() {
case ThreadStatus::WaitSleep:
case ThreadStatus::WaitIPC:
case ThreadStatus::WaitMutex:
case ThreadStatus::WaitCondVar:
case ThreadStatus::WaitArb:
break;
@@ -198,9 +201,11 @@ ResultVal<SharedPtr<Thread>> Thread::Create(KernelCore& kernel, std::string name
thread->callback_handle = kernel.ThreadWakeupCallbackHandleTable().Create(thread).Unwrap();
thread->owner_process = &owner_process;
thread->scheduler = &system.Scheduler(processor_id);
thread->scheduler->AddThread(thread, priority);
thread->scheduler->AddThread(thread);
thread->tls_address = thread->owner_process->MarkNextAvailableTLSSlotAsUsed(*thread);
thread->owner_process->RegisterThread(thread.get());
// TODO(peachum): move to ScheduleThread() when scheduler is added so selected core is used
// to initialize the context
ResetThreadContext(thread->context, stack_top, entry_point, arg);
@@ -351,7 +356,7 @@ void Thread::ChangeScheduler() {
if (*new_processor_id != processor_id) {
// Remove thread from previous core's scheduler
scheduler->RemoveThread(this);
next_scheduler.AddThread(this, current_priority);
next_scheduler.AddThread(this);
}
processor_id = *new_processor_id;

View File

@@ -51,7 +51,8 @@ enum class ThreadStatus {
WaitIPC, ///< Waiting for the reply from an IPC request
WaitSynchAny, ///< Waiting due to WaitSynch1 or WaitSynchN with wait_all = false
WaitSynchAll, ///< Waiting due to WaitSynchronizationN with wait_all = true
WaitMutex, ///< Waiting due to an ArbitrateLock/WaitProcessWideKey svc
WaitMutex, ///< Waiting due to an ArbitrateLock svc
WaitCondVar, ///< Waiting due to a WaitProcessWideKey svc
WaitArb, ///< Waiting due to a SignalToAddress/WaitForAddress svc
Dormant, ///< Created but not yet made ready
Dead ///< Run to completion, or forcefully terminated

View File

@@ -0,0 +1,73 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/shared_memory.h"
#include "core/hle/kernel/transfer_memory.h"
#include "core/hle/result.h"
namespace Kernel {
TransferMemory::TransferMemory(KernelCore& kernel) : Object{kernel} {}
TransferMemory::~TransferMemory() = default;
SharedPtr<TransferMemory> TransferMemory::Create(KernelCore& kernel, VAddr base_address,
size_t size, MemoryPermission permissions) {
SharedPtr<TransferMemory> transfer_memory{new TransferMemory(kernel)};
transfer_memory->base_address = base_address;
transfer_memory->memory_size = size;
transfer_memory->owner_permissions = permissions;
transfer_memory->owner_process = kernel.CurrentProcess();
return transfer_memory;
}
ResultCode TransferMemory::MapMemory(VAddr address, size_t size, MemoryPermission permissions) {
if (memory_size != size) {
return ERR_INVALID_SIZE;
}
if (owner_permissions != permissions) {
return ERR_INVALID_STATE;
}
if (is_mapped) {
return ERR_INVALID_STATE;
}
const auto map_state = owner_permissions == MemoryPermission::None
? MemoryState::TransferMemoryIsolated
: MemoryState::TransferMemory;
auto& vm_manager = owner_process->VMManager();
const auto map_result = vm_manager.MapMemoryBlock(
address, std::make_shared<std::vector<u8>>(size), 0, size, map_state);
if (map_result.Failed()) {
return map_result.Code();
}
is_mapped = true;
return RESULT_SUCCESS;
}
ResultCode TransferMemory::UnmapMemory(VAddr address, size_t size) {
if (memory_size != size) {
return ERR_INVALID_SIZE;
}
auto& vm_manager = owner_process->VMManager();
const auto result = vm_manager.UnmapRange(address, size);
if (result.IsError()) {
return result;
}
is_mapped = false;
return RESULT_SUCCESS;
}
} // namespace Kernel

View File

@@ -0,0 +1,91 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "core/hle/kernel/object.h"
union ResultCode;
namespace Kernel {
class KernelCore;
class Process;
enum class MemoryPermission : u32;
/// Defines the interface for transfer memory objects.
///
/// Transfer memory is typically used to transfer memory
/// between separate process instances, hence the name.
///
class TransferMemory final : public Object {
public:
static constexpr HandleType HANDLE_TYPE = HandleType::TransferMemory;
static SharedPtr<TransferMemory> Create(KernelCore& kernel, VAddr base_address, size_t size,
MemoryPermission permissions);
TransferMemory(const TransferMemory&) = delete;
TransferMemory& operator=(const TransferMemory&) = delete;
TransferMemory(TransferMemory&&) = delete;
TransferMemory& operator=(TransferMemory&&) = delete;
std::string GetTypeName() const override {
return "TransferMemory";
}
std::string GetName() const override {
return GetTypeName();
}
HandleType GetHandleType() const override {
return HANDLE_TYPE;
}
/// Attempts to map transfer memory with the given range and memory permissions.
///
/// @param address The base address to begin mapping memory at.
/// @param size The size of the memory to map, in bytes.
/// @param permissions The memory permissions to check against when mapping memory.
///
/// @pre The given address, size, and memory permissions must all match
/// the same values that were given when creating the transfer memory
/// instance.
///
ResultCode MapMemory(VAddr address, size_t size, MemoryPermission permissions);
/// Unmaps the transfer memory with the given range
///
/// @param address The base address to begin unmapping memory at.
/// @param size The size of the memory to unmap, in bytes.
///
/// @pre The given address and size must be the same as the ones used
/// to create the transfer memory instance.
///
ResultCode UnmapMemory(VAddr address, size_t size);
private:
explicit TransferMemory(KernelCore& kernel);
~TransferMemory() override;
/// The base address for the memory managed by this instance.
VAddr base_address = 0;
/// Size of the memory, in bytes, that this instance manages.
size_t memory_size = 0;
/// The memory permissions that are applied to this instance.
MemoryPermission owner_permissions{};
/// The process that this transfer memory instance was created under.
Process* owner_process = nullptr;
/// Whether or not this transfer memory instance has mapped memory.
bool is_mapped = false;
};
} // namespace Kernel
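A stripped-down model of the invariants documented above, with the kernel plumbing removed: mapping requires the exact creation-time size and permissions and rejects double-maps, while unmapping only validates the size (the class and method names are illustrative):

#include <cassert>
#include <cstdint>

enum class MemoryPermission : std::uint32_t { None, Read, ReadWrite };

class TransferMemoryModel {
public:
    TransferMemoryModel(std::uint64_t size, MemoryPermission perms)
        : memory_size{size}, owner_permissions{perms} {}

    bool MapMemory(std::uint64_t size, MemoryPermission perms) {
        if (size != memory_size || perms != owner_permissions || is_mapped) {
            return false; // mirrors ERR_INVALID_SIZE / ERR_INVALID_STATE
        }
        is_mapped = true;
        return true;
    }

    bool UnmapMemory(std::uint64_t size) {
        if (size != memory_size) {
            return false; // mirrors ERR_INVALID_SIZE
        }
        is_mapped = false;
        return true;
    }

private:
    std::uint64_t memory_size;
    MemoryPermission owner_permissions;
    bool is_mapped = false;
};

int main() {
    TransferMemoryModel tm{0x1000, MemoryPermission::ReadWrite};
    assert(!tm.MapMemory(0x2000, MemoryPermission::ReadWrite)); // wrong size
    assert(!tm.MapMemory(0x1000, MemoryPermission::Read));      // wrong permissions
    assert(tm.MapMemory(0x1000, MemoryPermission::ReadWrite));  // ok
    assert(!tm.MapMemory(0x1000, MemoryPermission::ReadWrite)); // double-map rejected
    assert(tm.UnmapMemory(0x1000));
    return 0;
}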

View File

@@ -20,16 +20,16 @@ namespace Kernel {
namespace {
const char* GetMemoryStateName(MemoryState state) {
static constexpr const char* names[] = {
"Unmapped", "Io",
"Normal", "CodeStatic",
"CodeMutable", "Heap",
"Shared", "Unknown1",
"ModuleCodeStatic", "ModuleCodeMutable",
"IpcBuffer0", "Stack",
"ThreadLocal", "TransferMemoryIsolated",
"TransferMemory", "ProcessMemory",
"Inaccessible", "IpcBuffer1",
"IpcBuffer3", "KernelStack",
"Unmapped", "Io",
"Normal", "Code",
"CodeData", "Heap",
"Shared", "Unknown1",
"ModuleCode", "ModuleCodeData",
"IpcBuffer0", "Stack",
"ThreadLocal", "TransferMemoryIsolated",
"TransferMemory", "ProcessMemory",
"Inaccessible", "IpcBuffer1",
"IpcBuffer3", "KernelStack",
};
return names[ToSvcMemoryState(state)];
@@ -256,57 +256,50 @@ ResultCode VMManager::ReprotectRange(VAddr target, u64 size, VMAPermission new_p
return RESULT_SUCCESS;
}
ResultVal<VAddr> VMManager::HeapAllocate(VAddr target, u64 size, VMAPermission perms) {
if (!IsWithinHeapRegion(target, size)) {
return ERR_INVALID_ADDRESS;
ResultVal<VAddr> VMManager::SetHeapSize(u64 size) {
if (size > GetHeapRegionSize()) {
return ERR_OUT_OF_MEMORY;
}
// No need to do any additional work if the heap is already the given size.
if (size == GetCurrentHeapSize()) {
return MakeResult(heap_region_base);
}
if (heap_memory == nullptr) {
// Initialize heap
heap_memory = std::make_shared<std::vector<u8>>();
heap_start = heap_end = target;
heap_memory = std::make_shared<std::vector<u8>>(size);
heap_end = heap_region_base + size;
} else {
UnmapRange(heap_start, heap_end - heap_start);
UnmapRange(heap_region_base, GetCurrentHeapSize());
}
// If necessary, expand backing vector to cover new heap extents.
if (target < heap_start) {
heap_memory->insert(begin(*heap_memory), heap_start - target, 0);
heap_start = target;
// If necessary, expand the backing vector to cover the new heap extents
// when growing. Otherwise, shrink the backing memory if a smaller heap
// has been requested.
const u64 old_heap_size = GetCurrentHeapSize();
if (size > old_heap_size) {
const u64 alloc_size = size - old_heap_size;
heap_memory->insert(heap_memory->end(), alloc_size, 0);
RefreshMemoryBlockMappings(heap_memory.get());
} else if (size < old_heap_size) {
heap_memory->resize(size);
heap_memory->shrink_to_fit();
RefreshMemoryBlockMappings(heap_memory.get());
}
if (target + size > heap_end) {
heap_memory->insert(end(*heap_memory), (target + size) - heap_end, 0);
heap_end = target + size;
RefreshMemoryBlockMappings(heap_memory.get());
}
ASSERT(heap_end - heap_start == heap_memory->size());
CASCADE_RESULT(auto vma, MapMemoryBlock(target, heap_memory, target - heap_start, size,
MemoryState::Heap));
Reprotect(vma, perms);
heap_end = heap_region_base + size;
ASSERT(GetCurrentHeapSize() == heap_memory->size());
heap_used = size;
return MakeResult<VAddr>(heap_end - size);
}
ResultCode VMManager::HeapFree(VAddr target, u64 size) {
if (!IsWithinHeapRegion(target, size)) {
return ERR_INVALID_ADDRESS;
const auto mapping_result =
MapMemoryBlock(heap_region_base, heap_memory, 0, size, MemoryState::Heap);
if (mapping_result.Failed()) {
return mapping_result.Code();
}
if (size == 0) {
return RESULT_SUCCESS;
}
const ResultCode result = UnmapRange(target, size);
if (result.IsError()) {
return result;
}
heap_used -= size;
return RESULT_SUCCESS;
return MakeResult<VAddr>(heap_region_base);
}
MemoryInfo VMManager::QueryMemory(VAddr address) const {
@@ -598,6 +591,7 @@ void VMManager::InitializeMemoryRegionRanges(FileSys::ProgramAddressSpaceType ty
heap_region_base = map_region_end;
heap_region_end = heap_region_base + heap_region_size;
heap_end = heap_region_base;
new_map_region_base = heap_region_end;
new_map_region_end = new_map_region_base + new_map_region_size;
@@ -692,10 +686,6 @@ u64 VMManager::GetTotalMemoryUsage() const {
return 0xF8000000;
}
u64 VMManager::GetTotalHeapUsage() const {
return heap_used;
}
VAddr VMManager::GetAddressSpaceBaseAddress() const {
return address_space_base;
}
@@ -778,6 +768,10 @@ u64 VMManager::GetHeapRegionSize() const {
return heap_region_end - heap_region_base;
}
u64 VMManager::GetCurrentHeapSize() const {
return heap_end - heap_region_base;
}
bool VMManager::IsWithinHeapRegion(VAddr address, u64 size) const {
return IsInsideAddressRange(address, size, GetHeapRegionBaseAddress(),
GetHeapRegionEndAddress());

View File

@@ -165,12 +165,12 @@ enum class MemoryState : u32 {
Unmapped = 0x00,
Io = 0x01 | FlagMapped,
Normal = 0x02 | FlagMapped | FlagQueryPhysicalAddressAllowed,
CodeStatic = 0x03 | CodeFlags | FlagMapProcess,
CodeMutable = 0x04 | CodeFlags | FlagMapProcess | FlagCodeMemory,
Code = 0x03 | CodeFlags | FlagMapProcess,
CodeData = 0x04 | DataFlags | FlagMapProcess | FlagCodeMemory,
Heap = 0x05 | DataFlags | FlagCodeMemory,
Shared = 0x06 | FlagMapped | FlagMemoryPoolAllocated,
ModuleCodeStatic = 0x08 | CodeFlags | FlagModule | FlagMapProcess,
ModuleCodeMutable = 0x09 | DataFlags | FlagModule | FlagMapProcess | FlagCodeMemory,
ModuleCode = 0x08 | CodeFlags | FlagModule | FlagMapProcess,
ModuleCodeData = 0x09 | DataFlags | FlagModule | FlagMapProcess | FlagCodeMemory,
IpcBuffer0 = 0x0A | FlagMapped | FlagQueryPhysicalAddressAllowed | FlagMemoryPoolAllocated |
IPCFlags | FlagSharedDevice | FlagSharedDeviceAligned,
@@ -380,11 +380,41 @@ public:
/// Changes the permissions of a range of addresses, splitting VMAs as necessary.
ResultCode ReprotectRange(VAddr target, u64 size, VMAPermission new_perms);
ResultVal<VAddr> HeapAllocate(VAddr target, u64 size, VMAPermission perms);
ResultCode HeapFree(VAddr target, u64 size);
ResultCode MirrorMemory(VAddr dst_addr, VAddr src_addr, u64 size, MemoryState state);
/// Attempts to allocate a heap with the given size.
///
/// @param size The size of the heap to allocate in bytes.
///
/// @note If a heap is currently allocated, and this is called
/// with a size that is equal to the size of the current heap,
/// then this function will do nothing and return the current
/// heap's starting address, as there's no need to perform
/// any additional heap allocation work.
///
/// @note If a heap is currently allocated, and this is called
/// with a size less than the current heap's size, then
/// this function will attempt to shrink the heap.
///
/// @note If a heap is currently allocated, and this is called
/// with a size larger than the current heap's size, then
/// this function will attempt to extend the size of the heap.
///
/// @returns A result indicating either success or failure.
/// <p>
/// If successful, this function will return a result
/// containing the starting address to the allocated heap.
/// <p>
/// If unsuccessful, this function will return a result
/// containing an error code.
///
/// @pre The given size must lie within the allowable heap
/// memory region managed by this VMManager instance.
/// Failure to abide by this will result in ERR_OUT_OF_MEMORY
/// being returned as the result.
///
ResultVal<VAddr> SetHeapSize(u64 size);
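A minimal sketch of the grow/shrink behaviour those notes describe, using a plain vector in place of the mapped backing block (the base address and region size are made-up values, and a zero return stands in for ERR_OUT_OF_MEMORY):

#include <cassert>
#include <cstdint>
#include <memory>
#include <vector>

class HeapModel {
public:
    static constexpr std::uint64_t heap_region_base = 0x10000000;
    static constexpr std::uint64_t heap_region_size = 0x40000000;

    std::uint64_t SetHeapSize(std::uint64_t size) {
        if (size > heap_region_size) {
            return 0; // request exceeds the heap region
        }
        if (!heap_memory) {
            heap_memory = std::make_shared<std::vector<std::uint8_t>>(size);
        } else if (size > heap_memory->size()) {
            // Grow: extend the backing vector with zeroed bytes.
            heap_memory->insert(heap_memory->end(), size - heap_memory->size(), 0);
        } else if (size < heap_memory->size()) {
            // Shrink: trim the backing vector.
            heap_memory->resize(size);
            heap_memory->shrink_to_fit();
        }
        heap_end = heap_region_base + size;
        return heap_region_base; // always the region base on success
    }

    std::uint64_t CurrentHeapSize() const { return heap_end - heap_region_base; }

private:
    std::shared_ptr<std::vector<std::uint8_t>> heap_memory;
    std::uint64_t heap_end = heap_region_base;
};

int main() {
    HeapModel heap;
    assert(heap.SetHeapSize(0x2000) == HeapModel::heap_region_base); // initial allocation
    assert(heap.SetHeapSize(0x4000) == HeapModel::heap_region_base); // grow
    assert(heap.SetHeapSize(0x1000) == HeapModel::heap_region_base); // shrink
    assert(heap.CurrentHeapSize() == 0x1000);
    return 0;
}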
/// Queries the memory manager for information about the given address.
///
/// @param address The address to query the memory manager about for information.
@@ -418,9 +448,6 @@ public:
/// Gets the total memory usage, used by svcGetInfo
u64 GetTotalMemoryUsage() const;
/// Gets the total heap usage, used by svcGetInfo
u64 GetTotalHeapUsage() const;
/// Gets the address space base address
VAddr GetAddressSpaceBaseAddress() const;
@@ -469,6 +496,13 @@ public:
/// Gets the total size of the heap region in bytes.
u64 GetHeapRegionSize() const;
/// Gets the total size of the current heap in bytes.
///
/// @note This is the current allocated heap size, not the size
/// of the region it's allowed to exist within.
///
u64 GetCurrentHeapSize() const;
/// Determines whether or not the specified range is within the heap region.
bool IsWithinHeapRegion(VAddr address, u64 size) const;
@@ -625,9 +659,9 @@ private:
// This makes deallocation and reallocation of holes fast and keeps process memory contiguous
// in the emulator address space, allowing Memory::GetPointer to be reasonably safe.
std::shared_ptr<std::vector<u8>> heap_memory;
// The left/right bounds of the address space covered by heap_memory.
VAddr heap_start = 0;
// The end of the currently allocated heap. This is not an inclusive
// end of the range. This is essentially 'base_address + current_size'.
VAddr heap_end = 0;
u64 heap_used = 0;
};
} // namespace Kernel

View File

@@ -215,7 +215,21 @@ IDisplayController::IDisplayController() : ServiceFramework("IDisplayController"
IDisplayController::~IDisplayController() = default;
IDebugFunctions::IDebugFunctions() : ServiceFramework("IDebugFunctions") {}
IDebugFunctions::IDebugFunctions() : ServiceFramework{"IDebugFunctions"} {
// clang-format off
static const FunctionInfo functions[] = {
{0, nullptr, "NotifyMessageToHomeMenuForDebug"},
{1, nullptr, "OpenMainApplication"},
{10, nullptr, "EmulateButtonEvent"},
{20, nullptr, "InvalidateTransitionLayer"},
{30, nullptr, "RequestLaunchApplicationWithUserAndArgumentForDebug"},
{40, nullptr, "GetAppletResourceUsageInfo"},
};
// clang-format on
RegisterHandlers(functions);
}
IDebugFunctions::~IDebugFunctions() = default;
ISelfController::ISelfController(std::shared_ptr<NVFlinger::NVFlinger> nvflinger)

View File

@@ -25,21 +25,34 @@ Module::Interface::Interface(std::shared_ptr<Module> module, const char* name)
Module::Interface::~Interface() = default;
struct FatalInfo {
std::array<u64_le, 31> registers{}; // TODO(ogniK): See if this actually is registers or
// not (find a game which has non-zero values)
u64_le unk0{};
u64_le unk1{};
u64_le unk2{};
u64_le unk3{};
u64_le unk4{};
u64_le unk5{};
u64_le unk6{};
enum class Architecture : s32 {
AArch64,
AArch32,
};
const char* ArchAsString() const {
return arch == Architecture::AArch64 ? "AArch64" : "AArch32";
}
std::array<u64_le, 31> registers{};
u64_le sp{};
u64_le pc{};
u64_le pstate{};
u64_le afsr0{};
u64_le afsr1{};
u64_le esr{};
u64_le far{};
std::array<u64_le, 32> backtrace{};
u64_le unk7{};
u64_le unk8{};
u64_le program_entry_point{};
// Bit flags that indicate which registers have been set with values
// for this context. The service itself uses these to determine which
// registers to specifically print out.
u64_le set_flags{};
u32_le backtrace_size{};
u32_le unk9{};
Architecture arch{};
u32_le unk10{}; // TODO(ogniK): Is this even used or is it just padding?
};
static_assert(sizeof(FatalInfo) == 0x250, "FatalInfo is an invalid size");
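The 0x250 size can be checked field by field (u64 members are 8 bytes; the struct's 8-byte alignment pads the tail):

// registers (31 * 8)                     0x0F8
// sp, pc, pstate, afsr0/1, esr, far      0x038 -> 0x130
// backtrace (32 * 8)                     0x100 -> 0x230
// program_entry_point                    0x008 -> 0x238
// set_flags                              0x008 -> 0x240
// backtrace_size + arch (4 + 4)          0x008 -> 0x248
// unk10 + tail padding (4 + 4)           0x008 -> 0x250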
@@ -52,36 +65,36 @@ enum class FatalType : u32 {
static void GenerateErrorReport(ResultCode error_code, const FatalInfo& info) {
const auto title_id = Core::CurrentProcess()->GetTitleID();
std::string crash_report =
fmt::format("Yuzu {}-{} crash report\n"
"Title ID: {:016x}\n"
"Result: 0x{:X} ({:04}-{:04d})\n"
"\n",
Common::g_scm_branch, Common::g_scm_desc, title_id, error_code.raw,
2000 + static_cast<u32>(error_code.module.Value()),
static_cast<u32>(error_code.description.Value()), info.unk8, info.unk7);
std::string crash_report = fmt::format(
"Yuzu {}-{} crash report\n"
"Title ID: {:016x}\n"
"Result: 0x{:X} ({:04}-{:04d})\n"
"Set flags: 0x{:16X}\n"
"Program entry point: 0x{:16X}\n"
"\n",
Common::g_scm_branch, Common::g_scm_desc, title_id, error_code.raw,
2000 + static_cast<u32>(error_code.module.Value()),
static_cast<u32>(error_code.description.Value()), info.set_flags, info.program_entry_point);
if (info.backtrace_size != 0x0) {
crash_report += "Registers:\n";
// TODO(ogniK): This is just a guess, find a game which actually has non zero values
for (size_t i = 0; i < info.registers.size(); i++) {
crash_report +=
fmt::format(" X[{:02d}]: {:016x}\n", i, info.registers[i]);
}
crash_report += fmt::format(" Unknown 0: {:016x}\n", info.unk0);
crash_report += fmt::format(" Unknown 1: {:016x}\n", info.unk1);
crash_report += fmt::format(" Unknown 2: {:016x}\n", info.unk2);
crash_report += fmt::format(" Unknown 3: {:016x}\n", info.unk3);
crash_report += fmt::format(" Unknown 4: {:016x}\n", info.unk4);
crash_report += fmt::format(" Unknown 5: {:016x}\n", info.unk5);
crash_report += fmt::format(" Unknown 6: {:016x}\n", info.unk6);
crash_report += fmt::format(" SP: {:016x}\n", info.sp);
crash_report += fmt::format(" PC: {:016x}\n", info.pc);
crash_report += fmt::format(" PSTATE: {:016x}\n", info.pstate);
crash_report += fmt::format(" AFSR0: {:016x}\n", info.afsr0);
crash_report += fmt::format(" AFSR1: {:016x}\n", info.afsr1);
crash_report += fmt::format(" ESR: {:016x}\n", info.esr);
crash_report += fmt::format(" FAR: {:016x}\n", info.far);
crash_report += "\nBacktrace:\n";
for (size_t i = 0; i < info.backtrace_size; i++) {
crash_report +=
fmt::format(" Backtrace[{:02d}]: {:016x}\n", i, info.backtrace[i]);
}
crash_report += fmt::format("\nUnknown 7: 0x{:016x}\n", info.unk7);
crash_report += fmt::format("Unknown 8: 0x{:016x}\n", info.unk8);
crash_report += fmt::format("Unknown 9: 0x{:016x}\n", info.unk9);
crash_report += fmt::format("Architecture: {}\n", info.ArchAsString());
crash_report += fmt::format("Unknown 10: 0x{:016x}\n", info.unk10);
}
@@ -125,13 +138,13 @@ static void ThrowFatalError(ResultCode error_code, FatalType fatal_type, const F
case FatalType::ErrorReport:
GenerateErrorReport(error_code, info);
break;
};
}
}
void Module::Interface::ThrowFatal(Kernel::HLERequestContext& ctx) {
LOG_ERROR(Service_Fatal, "called");
IPC::RequestParser rp{ctx};
auto error_code = rp.Pop<ResultCode>();
const auto error_code = rp.Pop<ResultCode>();
ThrowFatalError(error_code, FatalType::ErrorScreen, {});
IPC::ResponseBuilder rb{ctx, 2};
@@ -141,8 +154,8 @@ void Module::Interface::ThrowFatal(Kernel::HLERequestContext& ctx) {
void Module::Interface::ThrowFatalWithPolicy(Kernel::HLERequestContext& ctx) {
LOG_ERROR(Service_Fatal, "called");
IPC::RequestParser rp(ctx);
auto error_code = rp.Pop<ResultCode>();
auto fatal_type = rp.PopEnum<FatalType>();
const auto error_code = rp.Pop<ResultCode>();
const auto fatal_type = rp.PopEnum<FatalType>();
ThrowFatalError(error_code, fatal_type, {}); // No info is passed with ThrowFatalWithPolicy
IPC::ResponseBuilder rb{ctx, 2};
@@ -152,9 +165,9 @@ void Module::Interface::ThrowFatalWithPolicy(Kernel::HLERequestContext& ctx) {
void Module::Interface::ThrowFatalWithCpuContext(Kernel::HLERequestContext& ctx) {
LOG_ERROR(Service_Fatal, "called");
IPC::RequestParser rp(ctx);
auto error_code = rp.Pop<ResultCode>();
auto fatal_type = rp.PopEnum<FatalType>();
auto fatal_info = ctx.ReadBuffer();
const auto error_code = rp.Pop<ResultCode>();
const auto fatal_type = rp.PopEnum<FatalType>();
const auto fatal_info = ctx.ReadBuffer();
FatalInfo info{};
ASSERT_MSG(fatal_info.size() == sizeof(FatalInfo), "Invalid fatal info buffer size!");

View File

@@ -36,9 +36,9 @@ namespace Service::HID {
// Updating period for each HID device.
// TODO(ogniK): Find actual polling rate of hid
constexpr u64 pad_update_ticks = Core::Timing::BASE_CLOCK_RATE / 66;
constexpr u64 accelerometer_update_ticks = Core::Timing::BASE_CLOCK_RATE / 100;
constexpr u64 gyroscope_update_ticks = Core::Timing::BASE_CLOCK_RATE / 100;
constexpr s64 pad_update_ticks = static_cast<s64>(Core::Timing::BASE_CLOCK_RATE / 66);
constexpr s64 accelerometer_update_ticks = static_cast<s64>(Core::Timing::BASE_CLOCK_RATE / 100);
constexpr s64 gyroscope_update_ticks = static_cast<s64>(Core::Timing::BASE_CLOCK_RATE / 100);
constexpr std::size_t SHARED_MEMORY_SIZE = 0x40000;
IAppletResource::IAppletResource() : ServiceFramework("IAppletResource") {
@@ -75,7 +75,7 @@ IAppletResource::IAppletResource() : ServiceFramework("IAppletResource") {
// Register update callbacks
auto& core_timing = Core::System::GetInstance().CoreTiming();
pad_update_event =
core_timing.RegisterEvent("HID::UpdatePadCallback", [this](u64 userdata, int cycles_late) {
core_timing.RegisterEvent("HID::UpdatePadCallback", [this](u64 userdata, s64 cycles_late) {
UpdateControllers(userdata, cycles_late);
});
@@ -106,7 +106,7 @@ void IAppletResource::GetSharedMemoryHandle(Kernel::HLERequestContext& ctx) {
rb.PushCopyObjects(shared_mem);
}
void IAppletResource::UpdateControllers(u64 userdata, int cycles_late) {
void IAppletResource::UpdateControllers(u64 userdata, s64 cycles_late) {
auto& core_timing = Core::System::GetInstance().CoreTiming();
const bool should_reload = Settings::values.is_device_reload_pending.exchange(false);

View File

@@ -4,6 +4,9 @@
#pragma once
#include "core/hle/service/hid/controllers/controller_base.h"
#include "core/hle/service/service.h"
#include "controllers/controller_base.h"
#include "core/hle/service/service.h"
@@ -62,7 +65,7 @@ private:
}
void GetSharedMemoryHandle(Kernel::HLERequestContext& ctx);
void UpdateControllers(u64 userdata, int cycles_late);
void UpdateControllers(u64 userdata, s64 cycles_late);
Kernel::SharedPtr<Kernel::SharedMemory> shared_mem;

View File

@@ -319,15 +319,14 @@ public:
}
ASSERT(vm_manager
.MirrorMemory(*map_address, nro_addr, nro_size,
Kernel::MemoryState::ModuleCodeStatic)
.MirrorMemory(*map_address, nro_addr, nro_size, Kernel::MemoryState::ModuleCode)
.IsSuccess());
ASSERT(vm_manager.UnmapRange(nro_addr, nro_size).IsSuccess());
if (bss_size > 0) {
ASSERT(vm_manager
.MirrorMemory(*map_address + nro_size, bss_addr, bss_size,
Kernel::MemoryState::ModuleCodeStatic)
Kernel::MemoryState::ModuleCode)
.IsSuccess());
ASSERT(vm_manager.UnmapRange(bss_addr, bss_size).IsSuccess());
}
@@ -388,8 +387,7 @@ public:
const auto& nro_size = iter->second.size;
ASSERT(vm_manager
.MirrorMemory(heap_addr, mapped_addr, nro_size,
Kernel::MemoryState::ModuleCodeStatic)
.MirrorMemory(heap_addr, mapped_addr, nro_size, Kernel::MemoryState::ModuleCode)
.IsSuccess());
ASSERT(vm_manager.UnmapRange(mapped_addr, nro_size).IsSuccess());

View File

@@ -150,7 +150,7 @@ private:
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.PushRaw<u8>(Settings::values.enable_nfc);
rb.PushRaw<u8>(true);
}
void GetStateOld(Kernel::HLERequestContext& ctx) {

View File

@@ -335,7 +335,7 @@ void Module::Interface::CreateUserInterface(Kernel::HLERequestContext& ctx) {
}
bool Module::Interface::LoadAmiibo(const std::vector<u8>& buffer) {
std::lock_guard<std::recursive_mutex> lock(HLE::g_hle_lock);
std::lock_guard lock{HLE::g_hle_lock};
if (buffer.size() < sizeof(AmiiboFile)) {
return false;
}

View File

@@ -89,7 +89,7 @@ u32 nvhost_as_gpu::Remap(const std::vector<u8>& input, std::vector<u8>& output)
for (const auto& entry : entries) {
LOG_WARNING(Service_NVDRV, "remap entry, offset=0x{:X} handle=0x{:X} pages=0x{:X}",
entry.offset, entry.nvmap_handle, entry.pages);
Tegra::GPUVAddr offset = static_cast<Tegra::GPUVAddr>(entry.offset) << 0x10;
GPUVAddr offset = static_cast<GPUVAddr>(entry.offset) << 0x10;
auto object = nvmap_dev->GetObject(entry.nvmap_handle);
if (!object) {
LOG_CRITICAL(Service_NVDRV, "nvmap {} is an invalid handle!", entry.nvmap_handle);
@@ -102,7 +102,7 @@ u32 nvhost_as_gpu::Remap(const std::vector<u8>& input, std::vector<u8>& output)
u64 size = static_cast<u64>(entry.pages) << 0x10;
ASSERT(size <= object->size);
Tegra::GPUVAddr returned = gpu.MemoryManager().MapBufferEx(object->addr, offset, size);
GPUVAddr returned = gpu.MemoryManager().MapBufferEx(object->addr, offset, size);
ASSERT(returned == offset);
}
std::memcpy(output.data(), entries.data(), output.size());
@@ -173,16 +173,8 @@ u32 nvhost_as_gpu::UnmapBuffer(const std::vector<u8>& input, std::vector<u8>& ou
return 0;
}
auto& system_instance = Core::System::GetInstance();
// Remove this memory region from the rasterizer cache.
auto& gpu = system_instance.GPU();
auto cpu_addr = gpu.MemoryManager().GpuToCpuAddress(params.offset);
ASSERT(cpu_addr);
gpu.FlushAndInvalidateRegion(ToCacheAddr(Memory::GetPointer(*cpu_addr)), itr->second.size);
params.offset = gpu.MemoryManager().UnmapBuffer(params.offset, itr->second.size);
params.offset = Core::System::GetInstance().GPU().MemoryManager().UnmapBuffer(params.offset,
itr->second.size);
buffer_mappings.erase(itr->second.offset);
std::memcpy(output.data(), &params, output.size());

View File

@@ -26,7 +26,7 @@
namespace Service::NVFlinger {
constexpr std::size_t SCREEN_REFRESH_RATE = 60;
constexpr u64 frame_ticks = static_cast<u64>(Core::Timing::BASE_CLOCK_RATE / SCREEN_REFRESH_RATE);
constexpr s64 frame_ticks = static_cast<s64>(Core::Timing::BASE_CLOCK_RATE / SCREEN_REFRESH_RATE);
NVFlinger::NVFlinger(Core::Timing::CoreTiming& core_timing) : core_timing{core_timing} {
displays.emplace_back(0, "Default");
@@ -37,7 +37,7 @@ NVFlinger::NVFlinger(Core::Timing::CoreTiming& core_timing) : core_timing{core_t
// Schedule the screen composition events
composition_event =
core_timing.RegisterEvent("ScreenComposition", [this](u64 userdata, int cycles_late) {
core_timing.RegisterEvent("ScreenComposition", [this](u64 userdata, s64 cycles_late) {
Compose();
this->core_timing.ScheduleEvent(frame_ticks - cycles_late, composition_event);
});
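The switch from int/u64 to s64 here and in the HID change above matters for rescheduling arithmetic like frame_ticks - cycles_late: a late callback yields a negative delta, and unsigned operands would wrap instead. A small illustration:

#include <cstdint>
#include <cstdio>

int main() {
    const std::uint64_t period_u = 100;
    const std::int64_t period_s = 100;
    const std::int64_t cycles_late = 150; // the callback fired late

    // Mixed unsigned/signed: the difference wraps to a huge delay.
    std::printf("u64: %llu\n",
                static_cast<unsigned long long>(period_u - cycles_late));
    // All-signed: a sensible negative value the scheduler can handle.
    std::printf("s64: %lld\n",
                static_cast<long long>(period_s - cycles_late));
    return 0;
}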

View File

@@ -2,13 +2,88 @@
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/assert.h"
#include "common/logging/log.h"
#include "core/file_sys/errors.h"
#include "core/file_sys/system_archive/system_version.h"
#include "core/hle/ipc_helpers.h"
#include "core/hle/kernel/client_port.h"
#include "core/hle/service/filesystem/filesystem.h"
#include "core/hle/service/set/set_sys.h"
namespace Service::Set {
namespace {
constexpr u64 SYSTEM_VERSION_FILE_MINOR_REVISION_OFFSET = 0x05;
enum class GetFirmwareVersionType {
Version1,
Version2,
};
void GetFirmwareVersionImpl(Kernel::HLERequestContext& ctx, GetFirmwareVersionType type) {
LOG_WARNING(Service_SET, "called - Using hardcoded firmware version '{}'",
FileSys::SystemArchive::GetLongDisplayVersion());
ASSERT_MSG(ctx.GetWriteBufferSize() == 0x100,
"FirmwareVersion output buffer must be 0x100 bytes in size!");
// Instead of using the normal procedure of checking for the real system archive and
// synthesizing one if it doesn't exist, we always use the synthesized archive: relying on
// the real one could lead to strange bugs when a user has a really old or really new
// SystemVersion title installed. The synthesized one ensures consistency (it currently
// reports as 5.1.0-0.0).
const auto archive = FileSys::SystemArchive::SystemVersion();
const auto early_exit_failure = [&ctx](const std::string& desc, ResultCode code) {
LOG_ERROR(Service_SET, "General failure while attempting to resolve firmware version ({}).",
desc.c_str());
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(code);
};
if (archive == nullptr) {
early_exit_failure("The system version archive couldn't be synthesized.",
FileSys::ERROR_FAILED_MOUNT_ARCHIVE);
return;
}
const auto ver_file = archive->GetFile("file");
if (ver_file == nullptr) {
early_exit_failure("The system version archive didn't contain the file 'file'.",
FileSys::ERROR_INVALID_ARGUMENT);
return;
}
auto data = ver_file->ReadAllBytes();
if (data.size() != 0x100) {
early_exit_failure("The system version file 'file' was not the correct size.",
FileSys::ERROR_OUT_OF_BOUNDS);
return;
}
// If the command is GetFirmwareVersion (as opposed to GetFirmwareVersion2), hardware will
// zero out the REVISION_MINOR field.
if (type == GetFirmwareVersionType::Version1) {
data[SYSTEM_VERSION_FILE_MINOR_REVISION_OFFSET] = 0;
}
ctx.WriteBuffer(data);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
}
} // Anonymous namespace
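A standalone sketch of the REVISION_MINOR masking performed above (the constant and function names are illustrative; the 0x100-byte blob mirrors the system version file):

#include <array>
#include <cassert>
#include <cstdint>

constexpr std::size_t kMinorRevisionOffset = 0x05;

// GetFirmwareVersion zeroes the minor revision byte; GetFirmwareVersion2
// returns the blob untouched.
void StripMinorRevision(std::array<std::uint8_t, 0x100>& version_blob) {
    version_blob[kMinorRevisionOffset] = 0;
}

int main() {
    std::array<std::uint8_t, 0x100> blob{};
    blob[kMinorRevisionOffset] = 7;
    StripMinorRevision(blob);
    assert(blob[kMinorRevisionOffset] == 0);
    return 0;
}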
void SET_SYS::GetFirmwareVersion(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_SET, "called");
GetFirmwareVersionImpl(ctx, GetFirmwareVersionType::Version1);
}
void SET_SYS::GetFirmwareVersion2(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_SET, "called");
GetFirmwareVersionImpl(ctx, GetFirmwareVersionType::Version2);
}
void SET_SYS::GetColorSetId(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_SET, "called");
@@ -33,8 +108,8 @@ SET_SYS::SET_SYS() : ServiceFramework("set:sys") {
{0, nullptr, "SetLanguageCode"},
{1, nullptr, "SetNetworkSettings"},
{2, nullptr, "GetNetworkSettings"},
{3, nullptr, "GetFirmwareVersion"},
{4, nullptr, "GetFirmwareVersion2"},
{3, &SET_SYS::GetFirmwareVersion, "GetFirmwareVersion"},
{4, &SET_SYS::GetFirmwareVersion2, "GetFirmwareVersion2"},
{5, nullptr, "GetFirmwareVersionDigest"},
{7, nullptr, "GetLockScreenFlag"},
{8, nullptr, "SetLockScreenFlag"},

View File

@@ -20,6 +20,8 @@ private:
BasicBlack = 1,
};
void GetFirmwareVersion(Kernel::HLERequestContext& ctx);
void GetFirmwareVersion2(Kernel::HLERequestContext& ctx);
void GetColorSetId(Kernel::HLERequestContext& ctx);
void SetColorSetId(Kernel::HLERequestContext& ctx);

View File

@@ -341,7 +341,7 @@ Kernel::CodeSet ElfReader::LoadInto(VAddr vaddr) {
}
codeset.entrypoint = base_addr + header->e_entry;
codeset.memory = std::make_shared<std::vector<u8>>(std::move(program_image));
codeset.memory = std::move(program_image);
LOG_DEBUG(Loader, "Done loading.");

View File

@@ -187,7 +187,7 @@ static bool LoadNroImpl(Kernel::Process& process, const std::vector<u8>& data,
program_image.resize(static_cast<u32>(program_image.size()) + bss_size);
// Load codeset for current process
codeset.memory = std::make_shared<std::vector<u8>>(std::move(program_image));
codeset.memory = std::move(program_image);
process.LoadModule(std::move(codeset), load_base);
// Register module with GDBStub

View File

@@ -7,8 +7,10 @@
#include <lz4.h>
#include "common/common_funcs.h"
#include "common/file_util.h"
#include "common/hex_util.h"
#include "common/logging/log.h"
#include "common/swap.h"
#include "core/core.h"
#include "core/file_sys/patch_manager.h"
#include "core/gdbstub/gdbstub.h"
#include "core/hle/kernel/code_set.h"
@@ -19,36 +21,8 @@
#include "core/settings.h"
namespace Loader {
struct NsoSegmentHeader {
u32_le offset;
u32_le location;
u32_le size;
union {
u32_le alignment;
u32_le bss_size;
};
};
static_assert(sizeof(NsoSegmentHeader) == 0x10, "NsoSegmentHeader has incorrect size.");
struct NsoHeader {
u32_le magic;
u32_le version;
INSERT_PADDING_WORDS(1);
u8 flags;
std::array<NsoSegmentHeader, 3> segments; // Text, RoData, Data (in that order)
std::array<u8, 0x20> build_id;
std::array<u32_le, 3> segments_compressed_size;
bool IsSegmentCompressed(size_t segment_num) const {
ASSERT_MSG(segment_num < 3, "Invalid segment {}", segment_num);
return ((flags >> segment_num) & 1);
}
};
static_assert(sizeof(NsoHeader) == 0x6c, "NsoHeader has incorrect size.");
static_assert(std::is_trivially_copyable_v<NsoHeader>, "NsoHeader isn't trivially copyable.");
struct ModHeader {
namespace {
struct MODHeader {
u32_le magic;
u32_le dynamic_offset;
u32_le bss_start_offset;
@@ -57,7 +31,32 @@ struct ModHeader {
u32_le eh_frame_hdr_end_offset;
u32_le module_offset; // Offset to runtime-generated module object. typically equal to .bss base
};
static_assert(sizeof(ModHeader) == 0x1c, "ModHeader has incorrect size.");
static_assert(sizeof(MODHeader) == 0x1c, "MODHeader has incorrect size.");
std::vector<u8> DecompressSegment(const std::vector<u8>& compressed_data,
const NSOSegmentHeader& header) {
std::vector<u8> uncompressed_data(header.size);
const int bytes_uncompressed =
LZ4_decompress_safe(reinterpret_cast<const char*>(compressed_data.data()),
reinterpret_cast<char*>(uncompressed_data.data()),
static_cast<int>(compressed_data.size()), header.size);
ASSERT_MSG(bytes_uncompressed == static_cast<int>(header.size) &&
bytes_uncompressed == static_cast<int>(uncompressed_data.size()),
"{} != {} != {}", bytes_uncompressed, header.size, uncompressed_data.size());
return uncompressed_data;
}
constexpr u32 PageAlignSize(u32 size) {
return (size + Memory::PAGE_MASK) & ~Memory::PAGE_MASK;
}
} // Anonymous namespace
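Since pages here are 4 KiB (PAGE_MASK == 0xFFF), the rounding in PageAlignSize works out as below (kPageMask and PageAlign are local stand-ins for the Memory constants):

#include <cstdint>

constexpr std::uint32_t kPageMask = 0xFFF;

constexpr std::uint32_t PageAlign(std::uint32_t size) {
    return (size + kPageMask) & ~kPageMask;
}

static_assert(PageAlign(0x0000) == 0x0000); // already aligned
static_assert(PageAlign(0x0001) == 0x1000); // rounds up to one page
static_assert(PageAlign(0x1000) == 0x1000); // exact page stays put
static_assert(PageAlign(0x1234) == 0x2000); // partial second page rounds up

int main() {
    return 0;
}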
bool NSOHeader::IsSegmentCompressed(size_t segment_num) const {
ASSERT_MSG(segment_num < 3, "Invalid segment {}", segment_num);
return ((flags >> segment_num) & 1) != 0;
}
AppLoader_NSO::AppLoader_NSO(FileSys::VirtualFile file) : AppLoader(std::move(file)) {}
@@ -74,38 +73,22 @@ FileType AppLoader_NSO::IdentifyType(const FileSys::VirtualFile& file) {
return FileType::NSO;
}
static std::vector<u8> DecompressSegment(const std::vector<u8>& compressed_data,
const NsoSegmentHeader& header) {
std::vector<u8> uncompressed_data(header.size);
const int bytes_uncompressed =
LZ4_decompress_safe(reinterpret_cast<const char*>(compressed_data.data()),
reinterpret_cast<char*>(uncompressed_data.data()),
static_cast<int>(compressed_data.size()), header.size);
ASSERT_MSG(bytes_uncompressed == static_cast<int>(header.size) &&
bytes_uncompressed == static_cast<int>(uncompressed_data.size()),
"{} != {} != {}", bytes_uncompressed, header.size, uncompressed_data.size());
return uncompressed_data;
}
static constexpr u32 PageAlignSize(u32 size) {
return (size + Memory::PAGE_MASK) & ~Memory::PAGE_MASK;
}
std::optional<VAddr> AppLoader_NSO::LoadModule(Kernel::Process& process,
const FileSys::VfsFile& file, VAddr load_base,
bool should_pass_arguments,
std::optional<FileSys::PatchManager> pm) {
if (file.GetSize() < sizeof(NsoHeader))
if (file.GetSize() < sizeof(NSOHeader)) {
return {};
}
NsoHeader nso_header{};
if (sizeof(NsoHeader) != file.ReadObject(&nso_header))
NSOHeader nso_header{};
if (sizeof(NSOHeader) != file.ReadObject(&nso_header)) {
return {};
}
if (nso_header.magic != Common::MakeMagic('N', 'S', 'O', '0'))
if (nso_header.magic != Common::MakeMagic('N', 'S', 'O', '0')) {
return {};
}
// Build program image
Kernel::CodeSet codeset;
@@ -141,10 +124,10 @@ std::optional<VAddr> AppLoader_NSO::LoadModule(Kernel::Process& process,
std::memcpy(&module_offset, program_image.data() + 4, sizeof(u32));
// Read MOD header
ModHeader mod_header{};
MODHeader mod_header{};
// Default .bss to size in segment header if MOD0 section doesn't exist
u32 bss_size{PageAlignSize(nso_header.segments[2].bss_size)};
std::memcpy(&mod_header, program_image.data() + module_offset, sizeof(ModHeader));
std::memcpy(&mod_header, program_image.data() + module_offset, sizeof(MODHeader));
const bool has_mod_header{mod_header.magic == Common::MakeMagic('M', 'O', 'D', '0')};
if (has_mod_header) {
// Resize program image to include .bss section and page align each section
@@ -156,17 +139,29 @@ std::optional<VAddr> AppLoader_NSO::LoadModule(Kernel::Process& process,
// Apply patches if necessary
if (pm && (pm->HasNSOPatch(nso_header.build_id) || Settings::values.dump_nso)) {
std::vector<u8> pi_header(program_image.size() + 0x100);
std::memcpy(pi_header.data(), &nso_header, sizeof(NsoHeader));
std::memcpy(pi_header.data() + 0x100, program_image.data(), program_image.size());
std::vector<u8> pi_header;
pi_header.reserve(sizeof(NSOHeader) + program_image.size());
pi_header.insert(pi_header.begin(), reinterpret_cast<u8*>(&nso_header),
reinterpret_cast<u8*>(&nso_header) + sizeof(NSOHeader));
pi_header.insert(pi_header.begin() + sizeof(NSOHeader), program_image.begin(),
program_image.end());
pi_header = pm->PatchNSO(pi_header);
std::memcpy(program_image.data(), pi_header.data() + 0x100, program_image.size());
std::copy(pi_header.begin() + sizeof(NSOHeader), pi_header.end(), program_image.begin());
}
// Apply cheats if they exist and the program has a valid title ID
if (pm) {
auto& system = Core::System::GetInstance();
const auto cheats = pm->CreateCheatList(system, nso_header.build_id);
if (!cheats.empty()) {
system.RegisterCheatList(cheats, Common::HexArrayToString(nso_header.build_id),
load_base, load_base + program_image.size());
}
}
// Load codeset for current process
codeset.memory = std::make_shared<std::vector<u8>>(std::move(program_image));
codeset.memory = std::move(program_image);
process.LoadModule(std::move(codeset), load_base);
// Register module with GDBStub

View File

@@ -4,7 +4,9 @@
#pragma once
#include <array>
#include <optional>
#include <type_traits>
#include "common/common_types.h"
#include "common/swap.h"
#include "core/file_sys/patch_manager.h"
@@ -16,6 +18,43 @@ class Process;
namespace Loader {
struct NSOSegmentHeader {
u32_le offset;
u32_le location;
u32_le size;
union {
u32_le alignment;
u32_le bss_size;
};
};
static_assert(sizeof(NSOSegmentHeader) == 0x10, "NSOSegmentHeader has incorrect size.");
struct NSOHeader {
using SHA256Hash = std::array<u8, 0x20>;
struct RODataRelativeExtent {
u32_le data_offset;
u32_le size;
};
u32_le magic;
u32_le version;
u32 reserved;
u32_le flags;
std::array<NSOSegmentHeader, 3> segments; // Text, RoData, Data (in that order)
std::array<u8, 0x20> build_id;
std::array<u32_le, 3> segments_compressed_size;
std::array<u8, 0x1C> padding;
RODataRelativeExtent api_info_extent;
RODataRelativeExtent dynstr_extent;
RODataRelativeExtent dynsyn_extent;
std::array<SHA256Hash, 3> segment_hashes;
bool IsSegmentCompressed(size_t segment_num) const;
};
static_assert(sizeof(NSOHeader) == 0x100, "NSOHeader has incorrect size.");
static_assert(std::is_trivially_copyable_v<NSOHeader>, "NSOHeader must be trivially copyable.");
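The 0x100 assertion can be verified by summing the fields (every member lands on its natural alignment, so no implicit padding is possible):

// magic + version + reserved + flags (4 * 4)     0x010
// segments (3 * 0x10)                            0x030 -> 0x040
// build_id                                       0x020 -> 0x060
// segments_compressed_size (3 * 4)               0x00C -> 0x06C
// padding                                        0x01C -> 0x088
// api_info/dynstr/dynsyn extents (3 * 8)         0x018 -> 0x0A0
// segment_hashes (3 * 0x20)                      0x060 -> 0x100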
constexpr u64 NSO_ARGUMENT_DATA_ALLOCATION_SIZE = 0x9000;
struct NSOArgumentHeader {

View File

@@ -48,7 +48,7 @@ static void MapPages(Common::PageTable& page_table, VAddr base, u64 size, u8* me
(base + size) * PAGE_SIZE);
// During boot, current_page_table might not be set yet, in which case we need not flush
if (current_page_table) {
if (Core::System::GetInstance().IsPoweredOn()) {
Core::System::GetInstance().GPU().FlushAndInvalidateRegion(base << PAGE_BITS,
size * PAGE_SIZE);
}

View File

@@ -6,9 +6,6 @@
#include <cstddef>
#include <string>
#include <tuple>
#include <vector>
#include <boost/icl/interval_map.hpp>
#include "common/common_types.h"
namespace Common {

View File

@@ -18,13 +18,13 @@ using std::chrono::microseconds;
namespace Core {
void PerfStats::BeginSystemFrame() {
std::lock_guard<std::mutex> lock(object_mutex);
std::lock_guard lock{object_mutex};
frame_begin = Clock::now();
}
void PerfStats::EndSystemFrame() {
std::lock_guard<std::mutex> lock(object_mutex);
std::lock_guard lock{object_mutex};
auto frame_end = Clock::now();
accumulated_frametime += frame_end - frame_begin;
@@ -35,13 +35,13 @@ void PerfStats::EndSystemFrame() {
}
void PerfStats::EndGameFrame() {
std::lock_guard<std::mutex> lock(object_mutex);
std::lock_guard lock{object_mutex};
game_frames += 1;
}
PerfStatsResults PerfStats::GetAndResetStats(microseconds current_system_time_us) {
std::lock_guard<std::mutex> lock(object_mutex);
std::lock_guard lock{object_mutex};
const auto now = Clock::now();
// Walltime elapsed since stats were reset
@@ -67,7 +67,7 @@ PerfStatsResults PerfStats::GetAndResetStats(microseconds current_system_time_us
}
double PerfStats::GetLastFrameTimeScale() {
std::lock_guard<std::mutex> lock(object_mutex);
std::lock_guard lock{object_mutex};
constexpr double FRAME_LENGTH = 1.0 / 60;
return duration_cast<DoubleSecs>(previous_frame_length).count() / FRAME_LENGTH;
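All of the lock changes in this file (and the input/common ones below) lean on the same C++17 feature: class template argument deduction lets the compiler infer the mutex type. A minimal before/after:

#include <mutex>

std::mutex object_mutex;

void before() {
    std::lock_guard<std::mutex> lock(object_mutex); // pre-C++17 spelling
}

void after() {
    std::lock_guard lock{object_mutex}; // C++17: mutex type deduced
}

int main() {
    before();
    after();
    return 0;
}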

View File

@@ -82,7 +82,6 @@ void LogSetting(const std::string& name, const T& value) {
void LogSettings() {
LOG_INFO(Config, "yuzu Configuration:");
LogSetting("System_UseDockedMode", Settings::values.use_docked_mode);
LogSetting("System_EnableNfc", Settings::values.enable_nfc);
LogSetting("System_RngSeed", Settings::values.rng_seed.value_or(0));
LogSetting("System_CurrentUser", Settings::values.current_user);
LogSetting("System_LanguageIndex", Settings::values.language_index);

View File

@@ -349,7 +349,6 @@ struct TouchscreenInput {
struct Values {
// System
bool use_docked_mode;
bool enable_nfc;
std::optional<u32> rng_seed;
// Measured in seconds since epoch
std::optional<std::chrono::seconds> custom_rtc;

View File

@@ -36,18 +36,18 @@ struct KeyButtonPair {
class KeyButtonList {
public:
void AddKeyButton(int key_code, KeyButton* key_button) {
std::lock_guard<std::mutex> guard(mutex);
std::lock_guard guard{mutex};
list.push_back(KeyButtonPair{key_code, key_button});
}
void RemoveKeyButton(const KeyButton* key_button) {
std::lock_guard<std::mutex> guard(mutex);
std::lock_guard guard{mutex};
list.remove_if(
[key_button](const KeyButtonPair& pair) { return pair.key_button == key_button; });
}
void ChangeKeyStatus(int key_code, bool pressed) {
std::lock_guard<std::mutex> guard(mutex);
std::lock_guard guard{mutex};
for (const KeyButtonPair& pair : list) {
if (pair.key_code == key_code)
pair.key_button->status.store(pressed);
@@ -55,7 +55,7 @@ public:
}
void ChangeAllKeyStatus(bool pressed) {
std::lock_guard<std::mutex> guard(mutex);
std::lock_guard guard{mutex};
for (const KeyButtonPair& pair : list) {
pair.key_button->status.store(pressed);
}

View File

@@ -39,7 +39,7 @@ public:
void Tilt(int x, int y) {
auto mouse_move = Common::MakeVec(x, y) - mouse_origin;
if (is_tilting) {
std::lock_guard<std::mutex> guard(tilt_mutex);
std::lock_guard guard{tilt_mutex};
if (mouse_move.x == 0 && mouse_move.y == 0) {
tilt_angle = 0;
} else {
@@ -51,13 +51,13 @@ public:
}
void EndTilt() {
std::lock_guard<std::mutex> guard(tilt_mutex);
std::lock_guard guard{tilt_mutex};
tilt_angle = 0;
is_tilting = false;
}
std::tuple<Common::Vec3<float>, Common::Vec3<float>> GetStatus() {
std::lock_guard<std::mutex> guard(status_mutex);
std::lock_guard guard{status_mutex};
return status;
}
@@ -93,7 +93,7 @@ private:
old_q = q;
{
std::lock_guard<std::mutex> guard(tilt_mutex);
std::lock_guard guard{tilt_mutex};
// Find the quaternion describing current 3DS tilting
q = Common::MakeQuaternion(
@@ -115,7 +115,7 @@ private:
// Update the sensor state
{
std::lock_guard<std::mutex> guard(status_mutex);
std::lock_guard guard{status_mutex};
status = std::make_tuple(gravity, angular_rate);
}
}

View File

@@ -55,22 +55,22 @@ public:
: guid{std::move(guid_)}, port{port_}, sdl_joystick{joystick, deleter} {}
void SetButton(int button, bool value) {
std::lock_guard<std::mutex> lock(mutex);
std::lock_guard lock{mutex};
state.buttons[button] = value;
}
bool GetButton(int button) const {
std::lock_guard<std::mutex> lock(mutex);
std::lock_guard lock{mutex};
return state.buttons.at(button);
}
void SetAxis(int axis, Sint16 value) {
std::lock_guard<std::mutex> lock(mutex);
std::lock_guard lock{mutex};
state.axes[axis] = value;
}
float GetAxis(int axis) const {
std::lock_guard<std::mutex> lock(mutex);
std::lock_guard lock{mutex};
return state.axes.at(axis) / 32767.0f;
}
@@ -92,12 +92,12 @@ public:
}
void SetHat(int hat, Uint8 direction) {
std::lock_guard<std::mutex> lock(mutex);
std::lock_guard lock{mutex};
state.hats[hat] = direction;
}
bool GetHatDirection(int hat, Uint8 direction) const {
std::lock_guard<std::mutex> lock(mutex);
std::lock_guard lock{mutex};
return (state.hats.at(hat) & direction) != 0;
}
/**
@@ -140,7 +140,7 @@ private:
* Get the nth joystick with the corresponding GUID
*/
std::shared_ptr<SDLJoystick> SDLState::GetSDLJoystickByGUID(const std::string& guid, int port) {
std::lock_guard<std::mutex> lock(joystick_map_mutex);
std::lock_guard lock{joystick_map_mutex};
const auto it = joystick_map.find(guid);
if (it != joystick_map.end()) {
while (it->second.size() <= port) {
@@ -161,7 +161,8 @@ std::shared_ptr<SDLJoystick> SDLState::GetSDLJoystickByGUID(const std::string& g
std::shared_ptr<SDLJoystick> SDLState::GetSDLJoystickBySDLID(SDL_JoystickID sdl_id) {
auto sdl_joystick = SDL_JoystickFromInstanceID(sdl_id);
const std::string guid = GetGUID(sdl_joystick);
std::lock_guard<std::mutex> lock(joystick_map_mutex);
std::lock_guard lock{joystick_map_mutex};
auto map_it = joystick_map.find(guid);
if (map_it != joystick_map.end()) {
auto vec_it = std::find_if(map_it->second.begin(), map_it->second.end(),
@@ -198,8 +199,9 @@ void SDLState::InitJoystick(int joystick_index) {
LOG_ERROR(Input, "failed to open joystick {}", joystick_index);
return;
}
std::string guid = GetGUID(sdl_joystick);
std::lock_guard<std::mutex> lock(joystick_map_mutex);
const std::string guid = GetGUID(sdl_joystick);
std::lock_guard lock{joystick_map_mutex};
if (joystick_map.find(guid) == joystick_map.end()) {
auto joystick = std::make_shared<SDLJoystick>(guid, 0, sdl_joystick);
joystick_map[guid].emplace_back(std::move(joystick));
@@ -221,7 +223,7 @@ void SDLState::CloseJoystick(SDL_Joystick* sdl_joystick) {
std::string guid = GetGUID(sdl_joystick);
std::shared_ptr<SDLJoystick> joystick;
{
std::lock_guard<std::mutex> lock(joystick_map_mutex);
std::lock_guard lock{joystick_map_mutex};
// This lookup by guid is safe since the joystick is guaranteed to be in the map
auto& joystick_guid_list = joystick_map[guid];
const auto joystick_it =
@@ -274,7 +276,7 @@ void SDLState::HandleGameControllerEvent(const SDL_Event& event) {
}
void SDLState::CloseJoysticks() {
std::lock_guard<std::mutex> lock(joystick_map_mutex);
std::lock_guard lock{joystick_map_mutex};
joystick_map.clear();
}


@@ -1,5 +1,7 @@
add_executable(tests
common/bit_field.cpp
common/bit_utils.cpp
common/multi_level_queue.cpp
common/param_package.cpp
common/ring_buffer.cpp
core/arm/arm_test_common.cpp


@@ -0,0 +1,23 @@
// Copyright 2017 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <catch2/catch.hpp>
#include <math.h>
#include "common/bit_util.h"
namespace Common {
TEST_CASE("BitUtils::CountTrailingZeroes", "[common]") {
REQUIRE(Common::CountTrailingZeroes32(0) == 32);
REQUIRE(Common::CountTrailingZeroes64(0) == 64);
REQUIRE(Common::CountTrailingZeroes32(9) == 0);
REQUIRE(Common::CountTrailingZeroes32(8) == 3);
REQUIRE(Common::CountTrailingZeroes32(0x801000) == 12);
REQUIRE(Common::CountTrailingZeroes64(9) == 0);
REQUIRE(Common::CountTrailingZeroes64(8) == 3);
REQUIRE(Common::CountTrailingZeroes64(0x801000) == 12);
REQUIRE(Common::CountTrailingZeroes64(0x801000000000UL) == 36);
}
} // namespace Common
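
For reference, the counting behavior the test pins down (a zero input returns the full bit width) can be implemented portably with a shift loop; the real common/bit_util.h presumably maps to compiler intrinsics instead. A minimal sketch, verified against the same cases the test uses:

#include <cstdint>

// Returns the number of trailing zero bits, or 32 for a zero input,
// matching the edge case the test above checks first.
constexpr std::uint32_t CountTrailingZeroes32(std::uint32_t value) {
    if (value == 0) {
        return 32;
    }
    std::uint32_t count = 0;
    while ((value & 1) == 0) {
        value >>= 1;
        ++count;
    }
    return count;
}

static_assert(CountTrailingZeroes32(0) == 32);
static_assert(CountTrailingZeroes32(8) == 3);
static_assert(CountTrailingZeroes32(0x801000) == 12);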


@@ -0,0 +1,55 @@
// Copyright 2019 Yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <catch2/catch.hpp>
#include <math.h>
#include "common/common_types.h"
#include "common/multi_level_queue.h"
namespace Common {
TEST_CASE("MultiLevelQueue", "[common]") {
std::array<f32, 8> values = {0.0, 5.0, 1.0, 9.0, 8.0, 2.0, 6.0, 7.0};
Common::MultiLevelQueue<f32, 64> mlq;
REQUIRE(mlq.empty());
mlq.add(values[2], 2);
mlq.add(values[7], 7);
mlq.add(values[3], 3);
mlq.add(values[4], 4);
mlq.add(values[0], 0);
mlq.add(values[5], 5);
mlq.add(values[6], 6);
mlq.add(values[1], 1);
u32 index = 0;
bool all_set = true;
for (auto& f : mlq) {
all_set &= (f == values[index]);
index++;
}
REQUIRE(all_set);
REQUIRE(!mlq.empty());
f32 v = 8.0;
mlq.add(v, 2);
v = -7.0;
mlq.add(v, 2, false);
REQUIRE(mlq.front(2) == -7.0);
mlq.yield(2);
REQUIRE(mlq.front(2) == values[2]);
REQUIRE(mlq.back(2) == -7.0);
REQUIRE(mlq.empty(8));
v = 10.0;
mlq.add(v, 8);
mlq.adjust(v, 8, 9);
REQUIRE(mlq.front(9) == v);
REQUIRE(mlq.empty(8));
REQUIRE(!mlq.empty(9));
mlq.adjust(values[0], 0, 9);
REQUIRE(mlq.highest_priority_set() == 1);
REQUIRE(mlq.lowest_priority_set() == 9);
mlq.remove(values[1], 1);
REQUIRE(mlq.highest_priority_set() == 2);
REQUIRE(mlq.empty(1));
}
} // namespace Common
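
The container under test keeps one FIFO per priority level plus a bitmask of non-empty levels, so queries such as highest_priority_set() reduce to finding the lowest set bit. A stripped-down sketch of that idea (hypothetical SimpleMultiLevelQueue, far simpler than the real Common::MultiLevelQueue):

#include <array>
#include <cstddef>
#include <cstdint>
#include <list>

// Assumes Depth <= 64 so a single u64 bitmask can track non-empty levels.
template <typename T, std::size_t Depth>
class SimpleMultiLevelQueue {
public:
    void add(const T& element, std::size_t priority, bool push_back = true) {
        if (push_back) {
            levels[priority].push_back(element);
        } else {
            levels[priority].push_front(element);
        }
        used_mask |= std::uint64_t{1} << priority;
    }

    const T& front(std::size_t priority) const {
        return levels[priority].front();
    }

    bool empty(std::size_t priority) const {
        return levels[priority].empty();
    }

    // Priority 0 is highest, so the lowest set bit wins.
    std::size_t highest_priority_set() const {
        for (std::size_t i = 0; i < Depth; ++i) {
            if (used_mask & (std::uint64_t{1} << i)) {
                return i;
            }
        }
        return Depth;
    }

private:
    std::array<std::list<T>, Depth> levels{};
    std::uint64_t used_mask = 0; // never cleared here; remove() is omitted
};

In the real implementation the linear scan is what a count-trailing-zeroes intrinsic replaces, which is why the bit_util test lands in the same pull request.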


@@ -106,8 +106,6 @@ add_library(video_core STATIC
textures/decoders.cpp
textures/decoders.h
textures/texture.h
texture_cache.cpp
texture_cache.h
video_core.cpp
video_core.h
)


@@ -10,7 +10,7 @@ namespace Tegra {
void DebugContext::DoOnEvent(Event event, void* data) {
{
std::unique_lock<std::mutex> lock(breakpoint_mutex);
std::unique_lock lock{breakpoint_mutex};
// TODO(Subv): Commit the rasterizer's caches so framebuffers, render targets, etc. will
// show on debug widgets
@@ -32,7 +32,7 @@ void DebugContext::DoOnEvent(Event event, void* data) {
void DebugContext::Resume() {
{
std::lock_guard<std::mutex> lock(breakpoint_mutex);
std::lock_guard lock{breakpoint_mutex};
// Tell all observers that we are about to resume
for (auto& breakpoint_observer : breakpoint_observers) {


@@ -40,7 +40,7 @@ public:
/// Constructs the object such that it observes events of the given DebugContext.
explicit BreakPointObserver(std::shared_ptr<DebugContext> debug_context)
: context_weak(debug_context) {
std::unique_lock<std::mutex> lock(debug_context->breakpoint_mutex);
std::unique_lock lock{debug_context->breakpoint_mutex};
debug_context->breakpoint_observers.push_back(this);
}
@@ -48,7 +48,7 @@ public:
auto context = context_weak.lock();
if (context) {
{
std::unique_lock<std::mutex> lock(context->breakpoint_mutex);
std::unique_lock lock{context->breakpoint_mutex};
context->breakpoint_observers.remove(this);
}


@@ -9,7 +9,6 @@
#include "common/bit_field.h"
#include "common/common_types.h"
#include "video_core/memory_manager.h"
namespace Tegra {


@@ -46,7 +46,7 @@ void KeplerMemory::ProcessData(u32 data) {
// contain a dirty surface that will have to be written back to memory.
const GPUVAddr address{regs.dest.Address() + state.write_offset * sizeof(u32)};
rasterizer.InvalidateRegion(ToCacheAddr(memory_manager.GetPointer(address)), sizeof(u32));
memory_manager.Write32(address, data);
memory_manager.Write<u32>(address, data);
system.GPU().Maxwell3D().dirty_flags.OnMemoryWrite();

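The Write32 call site above migrates to a single Write<u32>(), and the same change repeats for reads and writes in the files that follow. The shape of the pattern, reduced to a self-contained toy (hypothetical FlatMemory, not the real Tegra::MemoryManager, which resolves a page table before the memcpy):

#include <cstdint>
#include <cstring>

// One templated accessor pair replaces the Read8/16/32/64 and
// Write8/16/32/64 families.
class FlatMemory {
public:
    template <typename T>
    T Read(std::uint64_t addr) const {
        T value;
        std::memcpy(&value, &backing[addr], sizeof(T)); // caller keeps addr in range
        return value;
    }

    template <typename T>
    void Write(std::uint64_t addr, T data) {
        std::memcpy(&backing[addr], &data, sizeof(T));
    }

private:
    std::uint8_t backing[4096] = {};
};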

@@ -307,7 +307,7 @@ void Maxwell3D::ProcessQueryGet() {
// Write the current query sequence to the sequence address.
// TODO(Subv): Find out what happens if you use a long query type but mark it as a short
// query.
memory_manager.Write32(sequence_address, sequence);
memory_manager.Write<u32>(sequence_address, sequence);
} else {
// Write the 128-bit result structure in long mode. Note: We emulate an infinitely fast
// GPU, this command may actually take a while to complete in real hardware due to GPU
@@ -395,7 +395,7 @@ void Maxwell3D::ProcessCBData(u32 value) {
u8* ptr{memory_manager.GetPointer(address)};
rasterizer.InvalidateRegion(ToCacheAddr(ptr), sizeof(u32));
memory_manager.Write32(address, value);
memory_manager.Write<u32>(address, value);
dirty_flags.OnMemoryWrite();
@@ -447,7 +447,7 @@ std::vector<Texture::FullTextureInfo> Maxwell3D::GetStageTextures(Regs::ShaderSt
for (GPUVAddr current_texture = tex_info_buffer.address + TextureInfoOffset;
current_texture < tex_info_buffer_end; current_texture += sizeof(Texture::TextureHandle)) {
const Texture::TextureHandle tex_handle{memory_manager.Read32(current_texture)};
const Texture::TextureHandle tex_handle{memory_manager.Read<u32>(current_texture)};
Texture::FullTextureInfo tex_info{};
// TODO(Subv): Use the shader to determine which textures are actually accessed.
@@ -482,7 +482,7 @@ Texture::FullTextureInfo Maxwell3D::GetStageTexture(Regs::ShaderStage stage,
ASSERT(tex_info_address < tex_info_buffer.address + tex_info_buffer.size);
const Texture::TextureHandle tex_handle{memory_manager.Read32(tex_info_address)};
const Texture::TextureHandle tex_handle{memory_manager.Read<u32>(tex_info_address)};
Texture::FullTextureInfo tex_info{};
tex_info.index = static_cast<u32>(offset);


@@ -88,6 +88,16 @@ void MaxwellDMA::HandleCopy() {
auto source_ptr{memory_manager.GetPointer(source)};
auto dst_ptr{memory_manager.GetPointer(dest)};
if (!source_ptr) {
LOG_ERROR(HW_GPU, "source_ptr is invalid");
return;
}
if (!dst_ptr) {
LOG_ERROR(HW_GPU, "dst_ptr is invalid");
return;
}
const auto FlushAndInvalidate = [&](u32 src_size, u64 dst_size) {
// TODO(Subv): For now, manually flush the regions until we implement GPU-accelerated
// copying.


@@ -12,6 +12,7 @@
#include "video_core/engines/maxwell_3d.h"
#include "video_core/engines/maxwell_dma.h"
#include "video_core/gpu.h"
#include "video_core/memory_manager.h"
#include "video_core/renderer_base.h"
namespace Tegra {
@@ -285,9 +286,10 @@ void GPU::ProcessSemaphoreTriggerMethod() {
// TODO(Kmather73): Generate a real GPU timestamp and write it here instead of
// CoreTiming
block.timestamp = Core::System::GetInstance().CoreTiming().GetTicks();
memory_manager->WriteBlock(regs.smaphore_address.SmaphoreAddress(), &block, sizeof(block));
memory_manager->WriteBlock(regs.semaphore_address.SemaphoreAddress(), &block,
sizeof(block));
} else {
const u32 word{memory_manager->Read32(regs.smaphore_address.SmaphoreAddress())};
const u32 word{memory_manager->Read<u32>(regs.semaphore_address.SemaphoreAddress())};
if ((op == GpuSemaphoreOperation::AcquireEqual && word == regs.semaphore_sequence) ||
(op == GpuSemaphoreOperation::AcquireGequal &&
static_cast<s32>(word - regs.semaphore_sequence) > 0) ||
@@ -314,11 +316,11 @@ void GPU::ProcessSemaphoreTriggerMethod() {
}
void GPU::ProcessSemaphoreRelease() {
memory_manager->Write32(regs.smaphore_address.SmaphoreAddress(), regs.semaphore_release);
memory_manager->Write<u32>(regs.semaphore_address.SemaphoreAddress(), regs.semaphore_release);
}
void GPU::ProcessSemaphoreAcquire() {
const u32 word = memory_manager->Read32(regs.smaphore_address.SmaphoreAddress());
const u32 word = memory_manager->Read<u32>(regs.semaphore_address.SemaphoreAddress());
const auto value = regs.semaphore_acquire;
if (word != value) {
regs.acquire_active = true;


@@ -9,7 +9,6 @@
#include "common/common_types.h"
#include "core/hle/service/nvflinger/buffer_queue.h"
#include "video_core/dma_pusher.h"
#include "video_core/memory_manager.h"
using CacheAddr = std::uintptr_t;
inline CacheAddr ToCacheAddr(const void* host_ptr) {
@@ -124,6 +123,8 @@ enum class EngineID {
MAXWELL_DMA_COPY_A = 0xB0B5,
};
class MemoryManager;
class GPU {
public:
explicit GPU(Core::System& system, VideoCore::RendererBase& renderer);
@@ -176,11 +177,11 @@ public:
u32 address_high;
u32 address_low;
GPUVAddr SmaphoreAddress() const {
GPUVAddr SemaphoreAddress() const {
return static_cast<GPUVAddr>((static_cast<GPUVAddr>(address_high) << 32) |
address_low);
}
} smaphore_address;
} semaphore_address;
u32 semaphore_sequence;
u32 semaphore_trigger;
@@ -244,9 +245,8 @@ protected:
private:
std::unique_ptr<Tegra::MemoryManager> memory_manager;
/// Mapping of command subchannels to their bound engine ids.
/// Mapping of command subchannels to their bound engine ids
std::array<EngineID, 8> bound_engines = {};
/// 3D engine
std::unique_ptr<Engines::Maxwell3D> maxwell_3d;
/// 2D engine
@@ -263,7 +263,7 @@ private:
static_assert(offsetof(GPU::Regs, field_name) == position * 4, \
"Field " #field_name " has invalid position")
ASSERT_REG_POSITION(smaphore_address, 0x4);
ASSERT_REG_POSITION(semaphore_address, 0x4);
ASSERT_REG_POSITION(semaphore_sequence, 0x6);
ASSERT_REG_POSITION(semaphore_trigger, 0x7);
ASSERT_REG_POSITION(reference_count, 0x14);


@@ -52,8 +52,8 @@ static void RunThread(VideoCore::RendererBase& renderer, Tegra::DmaPusher& dma_p
}
ThreadManager::ThreadManager(VideoCore::RendererBase& renderer, Tegra::DmaPusher& dma_pusher)
: renderer{renderer}, dma_pusher{dma_pusher}, thread{RunThread, std::ref(renderer),
std::ref(dma_pusher), std::ref(state)} {}
: renderer{renderer}, thread{RunThread, std::ref(renderer), std::ref(dma_pusher),
std::ref(state)} {}
ThreadManager::~ThreadManager() {
// Notify GPU thread that a shutdown is pending


@@ -4,10 +4,8 @@
#pragma once
#include <array>
#include <atomic>
#include <condition_variable>
#include <memory>
#include <mutex>
#include <optional>
#include <thread>
@@ -97,13 +95,13 @@ struct SynchState final {
std::condition_variable frames_condition;
void IncrementFramesCounter() {
std::lock_guard<std::mutex> lock{frames_mutex};
std::lock_guard lock{frames_mutex};
++queued_frame_count;
}
void DecrementFramesCounter() {
{
std::lock_guard<std::mutex> lock{frames_mutex};
std::lock_guard lock{frames_mutex};
--queued_frame_count;
if (queued_frame_count) {
@@ -115,7 +113,7 @@ struct SynchState final {
void WaitForFrames() {
{
std::lock_guard<std::mutex> lock{frames_mutex};
std::lock_guard lock{frames_mutex};
if (!queued_frame_count) {
return;
}
@@ -123,14 +121,14 @@ struct SynchState final {
// Wait for the GPU to be idle (all commands to be executed)
{
std::unique_lock<std::mutex> lock{frames_mutex};
std::unique_lock lock{frames_mutex};
frames_condition.wait(lock, [this] { return !queued_frame_count; });
}
}
void SignalCommands() {
{
std::unique_lock<std::mutex> lock{commands_mutex};
std::unique_lock lock{commands_mutex};
if (queue.Empty()) {
return;
}
@@ -140,7 +138,7 @@ struct SynchState final {
}
void WaitForCommands() {
std::unique_lock<std::mutex> lock{commands_mutex};
std::unique_lock lock{commands_mutex};
commands_condition.wait(lock, [this] { return !queue.Empty(); });
}
@@ -177,7 +175,6 @@ private:
private:
SynchState state;
VideoCore::RendererBase& renderer;
Tegra::DmaPusher& dma_pusher;
std::thread thread;
std::thread::id thread_id;
};
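
The frame and command synchronization above follows the standard condition-variable idiom: std::lock_guard where the lock is only ever held, std::unique_lock where a condition_variable must be able to release it mid-wait. A minimal sketch of the same pattern (hypothetical WorkQueue, not the emulator's SynchState):

#include <condition_variable>
#include <mutex>
#include <queue>

class WorkQueue {
public:
    void Push(int item) {
        {
            std::lock_guard lock{mutex};
            queue.push(item);
        }
        condition.notify_one();
    }

    int Pop() {
        // unique_lock is required: wait() unlocks it while sleeping.
        std::unique_lock lock{mutex};
        condition.wait(lock, [this] { return !queue.empty(); });
        const int item = queue.front();
        queue.pop();
        return item;
    }

private:
    std::mutex mutex;
    std::condition_variable condition;
    std::queue<int> queue;
};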


@@ -5,198 +5,187 @@
#include "common/alignment.h"
#include "common/assert.h"
#include "common/logging/log.h"
#include "core/core.h"
#include "core/memory.h"
#include "video_core/gpu.h"
#include "video_core/memory_manager.h"
#include "video_core/rasterizer_interface.h"
#include "video_core/renderer_base.h"
namespace Tegra {
MemoryManager::MemoryManager() {
// Mark the first page as reserved, so that 0 is not a valid GPUVAddr. Otherwise, games might
// try to use 0 as a valid address, which is also used to mean nullptr. This fixes a bug with
// Undertale using 0 for a render target.
PageSlot(0) = static_cast<u64>(PageStatus::Reserved);
std::fill(page_table.pointers.begin(), page_table.pointers.end(), nullptr);
std::fill(page_table.attributes.begin(), page_table.attributes.end(),
Common::PageType::Unmapped);
page_table.Resize(address_space_width);
// Initialize the map with a single free region covering the entire managed space.
VirtualMemoryArea initial_vma;
initial_vma.size = address_space_end;
vma_map.emplace(initial_vma.base, initial_vma);
UpdatePageTableForVMA(initial_vma);
}
GPUVAddr MemoryManager::AllocateSpace(u64 size, u64 align) {
const std::optional<GPUVAddr> gpu_addr{FindFreeBlock(0, size, align, PageStatus::Unmapped)};
const u64 aligned_size{Common::AlignUp(size, page_size)};
const GPUVAddr gpu_addr{FindFreeRegion(address_space_base, aligned_size)};
ASSERT_MSG(gpu_addr, "unable to find available GPU memory");
AllocateMemory(gpu_addr, 0, aligned_size);
for (u64 offset{}; offset < size; offset += PAGE_SIZE) {
VAddr& slot{PageSlot(*gpu_addr + offset)};
ASSERT(slot == static_cast<u64>(PageStatus::Unmapped));
slot = static_cast<u64>(PageStatus::Allocated);
}
return *gpu_addr;
return gpu_addr;
}
GPUVAddr MemoryManager::AllocateSpace(GPUVAddr gpu_addr, u64 size, u64 align) {
for (u64 offset{}; offset < size; offset += PAGE_SIZE) {
VAddr& slot{PageSlot(gpu_addr + offset)};
const u64 aligned_size{Common::AlignUp(size, page_size)};
ASSERT(slot == static_cast<u64>(PageStatus::Unmapped));
slot = static_cast<u64>(PageStatus::Allocated);
}
AllocateMemory(gpu_addr, 0, aligned_size);
return gpu_addr;
}
GPUVAddr MemoryManager::MapBufferEx(VAddr cpu_addr, u64 size) {
const std::optional<GPUVAddr> gpu_addr{FindFreeBlock(0, size, PAGE_SIZE, PageStatus::Unmapped)};
const u64 aligned_size{Common::AlignUp(size, page_size)};
const GPUVAddr gpu_addr{FindFreeRegion(address_space_base, aligned_size)};
ASSERT_MSG(gpu_addr, "unable to find available GPU memory");
MapBackingMemory(gpu_addr, Memory::GetPointer(cpu_addr), aligned_size, cpu_addr);
for (u64 offset{}; offset < size; offset += PAGE_SIZE) {
VAddr& slot{PageSlot(*gpu_addr + offset)};
ASSERT(slot == static_cast<u64>(PageStatus::Unmapped));
slot = cpu_addr + offset;
}
const MappedRegion region{cpu_addr, *gpu_addr, size};
mapped_regions.push_back(region);
return *gpu_addr;
return gpu_addr;
}
GPUVAddr MemoryManager::MapBufferEx(VAddr cpu_addr, GPUVAddr gpu_addr, u64 size) {
ASSERT((gpu_addr & PAGE_MASK) == 0);
ASSERT((gpu_addr & page_mask) == 0);
if (PageSlot(gpu_addr) != static_cast<u64>(PageStatus::Allocated)) {
// Page has been already mapped. In this case, we must find a new area of memory to use that
// is different than the specified one. Super Mario Odyssey hits this scenario when changing
// areas, but we do not want to overwrite the old pages.
// TODO(bunnei): We need to write a hardware test to confirm this behavior.
const u64 aligned_size{Common::AlignUp(size, page_size)};
LOG_ERROR(HW_GPU, "attempting to map addr 0x{:016X}, which is not available!", gpu_addr);
const std::optional<GPUVAddr> new_gpu_addr{
FindFreeBlock(gpu_addr, size, PAGE_SIZE, PageStatus::Allocated)};
ASSERT_MSG(new_gpu_addr, "unable to find available GPU memory");
gpu_addr = *new_gpu_addr;
}
for (u64 offset{}; offset < size; offset += PAGE_SIZE) {
VAddr& slot{PageSlot(gpu_addr + offset)};
ASSERT(slot == static_cast<u64>(PageStatus::Allocated));
slot = cpu_addr + offset;
}
const MappedRegion region{cpu_addr, gpu_addr, size};
mapped_regions.push_back(region);
MapBackingMemory(gpu_addr, Memory::GetPointer(cpu_addr), aligned_size, cpu_addr);
return gpu_addr;
}
GPUVAddr MemoryManager::UnmapBuffer(GPUVAddr gpu_addr, u64 size) {
ASSERT((gpu_addr & PAGE_MASK) == 0);
ASSERT((gpu_addr & page_mask) == 0);
for (u64 offset{}; offset < size; offset += PAGE_SIZE) {
VAddr& slot{PageSlot(gpu_addr + offset)};
const u64 aligned_size{Common::AlignUp(size, page_size)};
const CacheAddr cache_addr{ToCacheAddr(GetPointer(gpu_addr))};
ASSERT(slot != static_cast<u64>(PageStatus::Allocated) &&
slot != static_cast<u64>(PageStatus::Unmapped));
Core::System::GetInstance().Renderer().Rasterizer().FlushAndInvalidateRegion(cache_addr,
aligned_size);
UnmapRange(gpu_addr, aligned_size);
slot = static_cast<u64>(PageStatus::Unmapped);
}
// Delete the region mappings that are contained within the unmapped region
mapped_regions.erase(std::remove_if(mapped_regions.begin(), mapped_regions.end(),
[&](const MappedRegion& region) {
return region.gpu_addr <= gpu_addr &&
region.gpu_addr + region.size < gpu_addr + size;
}),
mapped_regions.end());
return gpu_addr;
}
GPUVAddr MemoryManager::GetRegionEnd(GPUVAddr region_start) const {
for (const auto& region : mapped_regions) {
const GPUVAddr region_end{region.gpu_addr + region.size};
if (region_start >= region.gpu_addr && region_start < region_end) {
return region_end;
GPUVAddr MemoryManager::FindFreeRegion(GPUVAddr region_start, u64 size) {
// Find the first Free VMA.
const VMAHandle vma_handle{std::find_if(vma_map.begin(), vma_map.end(), [&](const auto& vma) {
if (vma.second.type != VirtualMemoryArea::Type::Unmapped) {
return false;
}
}
return {};
}
std::optional<GPUVAddr> MemoryManager::FindFreeBlock(GPUVAddr region_start, u64 size, u64 align,
PageStatus status) {
GPUVAddr gpu_addr{region_start};
u64 free_space{};
align = (align + PAGE_MASK) & ~PAGE_MASK;
const VAddr vma_end{vma.second.base + vma.second.size};
return vma_end > region_start && vma_end >= region_start + size;
})};
while (gpu_addr + free_space < MAX_ADDRESS) {
if (PageSlot(gpu_addr + free_space) == static_cast<u64>(status)) {
free_space += PAGE_SIZE;
if (free_space >= size) {
return gpu_addr;
}
} else {
gpu_addr += free_space + PAGE_SIZE;
free_space = 0;
gpu_addr = Common::AlignUp(gpu_addr, align);
}
}
return {};
}
std::optional<VAddr> MemoryManager::GpuToCpuAddress(GPUVAddr gpu_addr) {
const VAddr base_addr{PageSlot(gpu_addr)};
if (base_addr == static_cast<u64>(PageStatus::Allocated) ||
base_addr == static_cast<u64>(PageStatus::Unmapped) ||
base_addr == static_cast<u64>(PageStatus::Reserved)) {
if (vma_handle == vma_map.end()) {
return {};
}
return base_addr + (gpu_addr & PAGE_MASK);
return std::max(region_start, vma_handle->second.base);
}
u8 MemoryManager::Read8(GPUVAddr addr) {
return Memory::Read8(*GpuToCpuAddress(addr));
bool MemoryManager::IsAddressValid(GPUVAddr addr) const {
return (addr >> page_bits) < page_table.pointers.size();
}
u16 MemoryManager::Read16(GPUVAddr addr) {
return Memory::Read16(*GpuToCpuAddress(addr));
std::optional<VAddr> MemoryManager::GpuToCpuAddress(GPUVAddr addr) {
if (!IsAddressValid(addr)) {
return {};
}
VAddr cpu_addr{page_table.backing_addr[addr >> page_bits]};
if (cpu_addr) {
return cpu_addr + (addr & page_mask);
}
return {};
}
u32 MemoryManager::Read32(GPUVAddr addr) {
return Memory::Read32(*GpuToCpuAddress(addr));
template <typename T>
T MemoryManager::Read(GPUVAddr addr) {
if (!IsAddressValid(addr)) {
return {};
}
const u8* page_pointer{page_table.pointers[addr >> page_bits]};
if (page_pointer) {
// NOTE: Avoid adding any extra logic to this fast-path block
T value;
std::memcpy(&value, &page_pointer[addr & page_mask], sizeof(T));
return value;
}
switch (page_table.attributes[addr >> page_bits]) {
case Common::PageType::Unmapped:
LOG_ERROR(HW_GPU, "Unmapped Read{} @ 0x{:08X}", sizeof(T) * 8, addr);
return 0;
case Common::PageType::Memory:
ASSERT_MSG(false, "Mapped memory page without a pointer @ {:016X}", addr);
break;
default:
UNREACHABLE();
}
return {};
}
u64 MemoryManager::Read64(GPUVAddr addr) {
return Memory::Read64(*GpuToCpuAddress(addr));
template <typename T>
void MemoryManager::Write(GPUVAddr addr, T data) {
if (!IsAddressValid(addr)) {
return;
}
u8* page_pointer{page_table.pointers[addr >> page_bits]};
if (page_pointer) {
// NOTE: Avoid adding any extra logic to this fast-path block
std::memcpy(&page_pointer[addr & page_mask], &data, sizeof(T));
return;
}
switch (page_table.attributes[addr >> page_bits]) {
case Common::PageType::Unmapped:
LOG_ERROR(HW_GPU, "Unmapped Write{} 0x{:08X} @ 0x{:016X}", sizeof(data) * 8,
static_cast<u32>(data), addr);
return;
case Common::PageType::Memory:
ASSERT_MSG(false, "Mapped memory page without a pointer @ {:016X}", addr);
break;
default:
UNREACHABLE();
}
}
void MemoryManager::Write8(GPUVAddr addr, u8 data) {
Memory::Write8(*GpuToCpuAddress(addr), data);
}
void MemoryManager::Write16(GPUVAddr addr, u16 data) {
Memory::Write16(*GpuToCpuAddress(addr), data);
}
void MemoryManager::Write32(GPUVAddr addr, u32 data) {
Memory::Write32(*GpuToCpuAddress(addr), data);
}
void MemoryManager::Write64(GPUVAddr addr, u64 data) {
Memory::Write64(*GpuToCpuAddress(addr), data);
}
template u8 MemoryManager::Read<u8>(GPUVAddr addr);
template u16 MemoryManager::Read<u16>(GPUVAddr addr);
template u32 MemoryManager::Read<u32>(GPUVAddr addr);
template u64 MemoryManager::Read<u64>(GPUVAddr addr);
template void MemoryManager::Write<u8>(GPUVAddr addr, u8 data);
template void MemoryManager::Write<u16>(GPUVAddr addr, u16 data);
template void MemoryManager::Write<u32>(GPUVAddr addr, u32 data);
template void MemoryManager::Write<u64>(GPUVAddr addr, u64 data);
u8* MemoryManager::GetPointer(GPUVAddr addr) {
return Memory::GetPointer(*GpuToCpuAddress(addr));
if (!IsAddressValid(addr)) {
return {};
}
u8* page_pointer{page_table.pointers[addr >> page_bits]};
if (page_pointer) {
return page_pointer + (addr & page_mask);
}
LOG_ERROR(HW_GPU, "Unknown GetPointer @ 0x{:016X}", addr);
return {};
}
void MemoryManager::ReadBlock(GPUVAddr src_addr, void* dest_buffer, std::size_t size) {
@@ -210,13 +199,252 @@ void MemoryManager::CopyBlock(GPUVAddr dest_addr, GPUVAddr src_addr, std::size_t
std::memcpy(GetPointer(dest_addr), GetPointer(src_addr), size);
}
VAddr& MemoryManager::PageSlot(GPUVAddr gpu_addr) {
auto& block{page_table[(gpu_addr >> (PAGE_BITS + PAGE_TABLE_BITS)) & PAGE_TABLE_MASK]};
if (!block) {
block = std::make_unique<PageBlock>();
block->fill(static_cast<VAddr>(PageStatus::Unmapped));
void MemoryManager::MapPages(GPUVAddr base, u64 size, u8* memory, Common::PageType type,
VAddr backing_addr) {
LOG_DEBUG(HW_GPU, "Mapping {} onto {:016X}-{:016X}", fmt::ptr(memory), base * page_size,
(base + size) * page_size);
const VAddr end{base + size};
ASSERT_MSG(end <= page_table.pointers.size(), "out of range mapping at {:016X}",
base + page_table.pointers.size());
std::fill(page_table.attributes.begin() + base, page_table.attributes.begin() + end, type);
if (memory == nullptr) {
std::fill(page_table.pointers.begin() + base, page_table.pointers.begin() + end, memory);
std::fill(page_table.backing_addr.begin() + base, page_table.backing_addr.begin() + end,
backing_addr);
} else {
while (base != end) {
page_table.pointers[base] = memory;
page_table.backing_addr[base] = backing_addr;
base += 1;
memory += page_size;
backing_addr += page_size;
}
}
}
void MemoryManager::MapMemoryRegion(GPUVAddr base, u64 size, u8* target, VAddr backing_addr) {
ASSERT_MSG((size & page_mask) == 0, "non-page aligned size: {:016X}", size);
ASSERT_MSG((base & page_mask) == 0, "non-page aligned base: {:016X}", base);
MapPages(base / page_size, size / page_size, target, Common::PageType::Memory, backing_addr);
}
void MemoryManager::UnmapRegion(GPUVAddr base, u64 size) {
ASSERT_MSG((size & page_mask) == 0, "non-page aligned size: {:016X}", size);
ASSERT_MSG((base & page_mask) == 0, "non-page aligned base: {:016X}", base);
MapPages(base / page_size, size / page_size, nullptr, Common::PageType::Unmapped);
}
bool VirtualMemoryArea::CanBeMergedWith(const VirtualMemoryArea& next) const {
ASSERT(base + size == next.base);
if (type != next.type) {
return {};
}
if (type == VirtualMemoryArea::Type::Allocated && (offset + size != next.offset)) {
return {};
}
if (type == VirtualMemoryArea::Type::Mapped && backing_memory + size != next.backing_memory) {
return {};
}
return true;
}
MemoryManager::VMAHandle MemoryManager::FindVMA(GPUVAddr target) const {
if (target >= address_space_end) {
return vma_map.end();
} else {
return std::prev(vma_map.upper_bound(target));
}
}
MemoryManager::VMAIter MemoryManager::Allocate(VMAIter vma_handle) {
VirtualMemoryArea& vma{vma_handle->second};
vma.type = VirtualMemoryArea::Type::Allocated;
vma.backing_addr = 0;
vma.backing_memory = {};
UpdatePageTableForVMA(vma);
return MergeAdjacent(vma_handle);
}
MemoryManager::VMAHandle MemoryManager::AllocateMemory(GPUVAddr target, std::size_t offset,
u64 size) {
// This is the appropriately sized VMA that will turn into our allocation.
VMAIter vma_handle{CarveVMA(target, size)};
VirtualMemoryArea& vma{vma_handle->second};
ASSERT(vma.size == size);
vma.offset = offset;
return Allocate(vma_handle);
}
MemoryManager::VMAHandle MemoryManager::MapBackingMemory(GPUVAddr target, u8* memory, u64 size,
VAddr backing_addr) {
// This is the appropriately sized VMA that will turn into our allocation.
VMAIter vma_handle{CarveVMA(target, size)};
VirtualMemoryArea& vma{vma_handle->second};
ASSERT(vma.size == size);
vma.type = VirtualMemoryArea::Type::Mapped;
vma.backing_memory = memory;
vma.backing_addr = backing_addr;
UpdatePageTableForVMA(vma);
return MergeAdjacent(vma_handle);
}
void MemoryManager::UnmapRange(GPUVAddr target, u64 size) {
VMAIter vma{CarveVMARange(target, size)};
const VAddr target_end{target + size};
const VMAIter end{vma_map.end()};
// The comparison against the end of the range must be done using addresses since VMAs can be
// merged during this process, causing invalidation of the iterators.
while (vma != end && vma->second.base < target_end) {
// Unmapped ranges return to allocated state and can be reused
// This behavior is used by Super Mario Odyssey, Sonic Forces, and likely other games
vma = std::next(Allocate(vma));
}
ASSERT(FindVMA(target)->second.size >= size);
}
MemoryManager::VMAIter MemoryManager::StripIterConstness(const VMAHandle& iter) {
// This uses a neat C++ trick to convert a const_iterator to a regular iterator, given
// non-const access to its container.
return vma_map.erase(iter, iter); // Erases an empty range of elements
}
MemoryManager::VMAIter MemoryManager::CarveVMA(GPUVAddr base, u64 size) {
ASSERT_MSG((size & page_mask) == 0, "non-page aligned size: 0x{:016X}", size);
ASSERT_MSG((base & page_mask) == 0, "non-page aligned base: 0x{:016X}", base);
VMAIter vma_handle{StripIterConstness(FindVMA(base))};
if (vma_handle == vma_map.end()) {
// Target address is outside the managed range
return {};
}
const VirtualMemoryArea& vma{vma_handle->second};
if (vma.type == VirtualMemoryArea::Type::Mapped) {
// Region is already allocated
return {};
}
const VAddr start_in_vma{base - vma.base};
const VAddr end_in_vma{start_in_vma + size};
ASSERT_MSG(end_in_vma <= vma.size, "region size 0x{:016X} is less than required size 0x{:016X}",
vma.size, end_in_vma);
if (end_in_vma < vma.size) {
// Split VMA at the end of the allocated region
SplitVMA(vma_handle, end_in_vma);
}
if (start_in_vma != 0) {
// Split VMA at the start of the allocated region
vma_handle = SplitVMA(vma_handle, start_in_vma);
}
return vma_handle;
}
MemoryManager::VMAIter MemoryManager::CarveVMARange(GPUVAddr target, u64 size) {
ASSERT_MSG((size & page_mask) == 0, "non-page aligned size: 0x{:016X}", size);
ASSERT_MSG((target & page_mask) == 0, "non-page aligned base: 0x{:016X}", target);
const VAddr target_end{target + size};
ASSERT(target_end >= target);
ASSERT(size > 0);
VMAIter begin_vma{StripIterConstness(FindVMA(target))};
const VMAIter i_end{vma_map.lower_bound(target_end)};
if (std::any_of(begin_vma, i_end, [](const auto& entry) {
return entry.second.type == VirtualMemoryArea::Type::Unmapped;
})) {
return {};
}
if (target != begin_vma->second.base) {
begin_vma = SplitVMA(begin_vma, target - begin_vma->second.base);
}
VMAIter end_vma{StripIterConstness(FindVMA(target_end))};
if (end_vma != vma_map.end() && target_end != end_vma->second.base) {
end_vma = SplitVMA(end_vma, target_end - end_vma->second.base);
}
return begin_vma;
}
MemoryManager::VMAIter MemoryManager::SplitVMA(VMAIter vma_handle, u64 offset_in_vma) {
VirtualMemoryArea& old_vma{vma_handle->second};
VirtualMemoryArea new_vma{old_vma}; // Make a copy of the VMA
// For now, don't allow no-op VMA splits (trying to split at a boundary) because it's probably
// a bug. This restriction might be removed later.
ASSERT(offset_in_vma < old_vma.size);
ASSERT(offset_in_vma > 0);
old_vma.size = offset_in_vma;
new_vma.base += offset_in_vma;
new_vma.size -= offset_in_vma;
switch (new_vma.type) {
case VirtualMemoryArea::Type::Unmapped:
break;
case VirtualMemoryArea::Type::Allocated:
new_vma.offset += offset_in_vma;
break;
case VirtualMemoryArea::Type::Mapped:
new_vma.backing_memory += offset_in_vma;
break;
}
ASSERT(old_vma.CanBeMergedWith(new_vma));
return vma_map.emplace_hint(std::next(vma_handle), new_vma.base, new_vma);
}
MemoryManager::VMAIter MemoryManager::MergeAdjacent(VMAIter iter) {
const VMAIter next_vma{std::next(iter)};
if (next_vma != vma_map.end() && iter->second.CanBeMergedWith(next_vma->second)) {
iter->second.size += next_vma->second.size;
vma_map.erase(next_vma);
}
if (iter != vma_map.begin()) {
VMAIter prev_vma{std::prev(iter)};
if (prev_vma->second.CanBeMergedWith(iter->second)) {
prev_vma->second.size += iter->second.size;
vma_map.erase(iter);
iter = prev_vma;
}
}
return iter;
}
void MemoryManager::UpdatePageTableForVMA(const VirtualMemoryArea& vma) {
switch (vma.type) {
case VirtualMemoryArea::Type::Unmapped:
UnmapRegion(vma.base, vma.size);
break;
case VirtualMemoryArea::Type::Allocated:
MapMemoryRegion(vma.base, vma.size, nullptr, vma.backing_addr);
break;
case VirtualMemoryArea::Type::Mapped:
MapMemoryRegion(vma.base, vma.size, vma.backing_memory, vma.backing_addr);
break;
}
return (*block)[(gpu_addr >> PAGE_BITS) & PAGE_BLOCK_MASK];
}
} // namespace Tegra
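
The rewritten manager's FindVMA relies on a standard ordered-map interval lookup: map keys are VMA base addresses, so the VMA containing a target is the predecessor of upper_bound(target). A self-contained illustration with toy types (not the emulator's):

#include <cassert>
#include <cstdint>
#include <map>

struct Vma {
    std::uint64_t base;
    std::uint64_t size;
};

// Containing interval == predecessor of upper_bound(target), if the
// target falls inside it.
const Vma* FindVma(const std::map<std::uint64_t, Vma>& vmas, std::uint64_t target) {
    const auto it = vmas.upper_bound(target);
    if (it == vmas.begin()) {
        return nullptr;
    }
    const Vma& vma = std::prev(it)->second;
    return (target < vma.base + vma.size) ? &vma : nullptr;
}

int main() {
    std::map<std::uint64_t, Vma> vmas;
    vmas.emplace(0x0000, Vma{0x0000, 0x1000});
    vmas.emplace(0x1000, Vma{0x1000, 0x2000});
    assert(FindVma(vmas, 0x1800)->base == 0x1000);
}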


@@ -1,82 +1,148 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <array>
#include <memory>
#include <map>
#include <optional>
#include <vector>
#include "common/common_types.h"
#include "common/page_table.h"
namespace Tegra {
/// Virtual addresses in the GPU's memory map are 64 bits wide.
using GPUVAddr = u64;
/**
* Represents a VMA in an address space. A VMA is a contiguous region of virtual addressing space
* with homogeneous attributes across its extents. In this particular implementation each VMA is
* also backed by a single host memory allocation.
*/
struct VirtualMemoryArea {
enum class Type : u8 {
Unmapped,
Allocated,
Mapped,
};
/// Virtual base address of the region.
GPUVAddr base{};
/// Size of the region.
u64 size{};
/// Memory area mapping type.
Type type{Type::Unmapped};
/// CPU memory mapped address corresponding to this memory area.
VAddr backing_addr{};
/// Offset into the backing_memory the mapping starts from.
std::size_t offset{};
/// Pointer backing this VMA.
u8* backing_memory{};
/// Tests if this area can be merged to the right with `next`.
bool CanBeMergedWith(const VirtualMemoryArea& next) const;
};
class MemoryManager final {
public:
MemoryManager();
GPUVAddr AllocateSpace(u64 size, u64 align);
GPUVAddr AllocateSpace(GPUVAddr gpu_addr, u64 size, u64 align);
GPUVAddr AllocateSpace(GPUVAddr addr, u64 size, u64 align);
GPUVAddr MapBufferEx(VAddr cpu_addr, u64 size);
GPUVAddr MapBufferEx(VAddr cpu_addr, GPUVAddr gpu_addr, u64 size);
GPUVAddr UnmapBuffer(GPUVAddr gpu_addr, u64 size);
GPUVAddr GetRegionEnd(GPUVAddr region_start) const;
std::optional<VAddr> GpuToCpuAddress(GPUVAddr gpu_addr);
GPUVAddr MapBufferEx(VAddr cpu_addr, GPUVAddr addr, u64 size);
GPUVAddr UnmapBuffer(GPUVAddr addr, u64 size);
std::optional<VAddr> GpuToCpuAddress(GPUVAddr addr);
static constexpr u64 PAGE_BITS = 16;
static constexpr u64 PAGE_SIZE = 1 << PAGE_BITS;
static constexpr u64 PAGE_MASK = PAGE_SIZE - 1;
template <typename T>
T Read(GPUVAddr addr);
u8 Read8(GPUVAddr addr);
u16 Read16(GPUVAddr addr);
u32 Read32(GPUVAddr addr);
u64 Read64(GPUVAddr addr);
template <typename T>
void Write(GPUVAddr addr, T data);
void Write8(GPUVAddr addr, u8 data);
void Write16(GPUVAddr addr, u16 data);
void Write32(GPUVAddr addr, u32 data);
void Write64(GPUVAddr addr, u64 data);
u8* GetPointer(GPUVAddr vaddr);
u8* GetPointer(GPUVAddr addr);
void ReadBlock(GPUVAddr src_addr, void* dest_buffer, std::size_t size);
void WriteBlock(GPUVAddr dest_addr, const void* src_buffer, std::size_t size);
void CopyBlock(VAddr dest_addr, VAddr src_addr, std::size_t size);
void CopyBlock(GPUVAddr dest_addr, GPUVAddr src_addr, std::size_t size);
private:
enum class PageStatus : u64 {
Unmapped = 0xFFFFFFFFFFFFFFFFULL,
Allocated = 0xFFFFFFFFFFFFFFFEULL,
Reserved = 0xFFFFFFFFFFFFFFFDULL,
};
using VMAMap = std::map<GPUVAddr, VirtualMemoryArea>;
using VMAHandle = VMAMap::const_iterator;
using VMAIter = VMAMap::iterator;
std::optional<GPUVAddr> FindFreeBlock(GPUVAddr region_start, u64 size, u64 align,
PageStatus status);
VAddr& PageSlot(GPUVAddr gpu_addr);
bool IsAddressValid(GPUVAddr addr) const;
void MapPages(GPUVAddr base, u64 size, u8* memory, Common::PageType type,
VAddr backing_addr = 0);
void MapMemoryRegion(GPUVAddr base, u64 size, u8* target, VAddr backing_addr);
void UnmapRegion(GPUVAddr base, u64 size);
static constexpr u64 MAX_ADDRESS{0x10000000000ULL};
static constexpr u64 PAGE_TABLE_BITS{10};
static constexpr u64 PAGE_TABLE_SIZE{1 << PAGE_TABLE_BITS};
static constexpr u64 PAGE_TABLE_MASK{PAGE_TABLE_SIZE - 1};
static constexpr u64 PAGE_BLOCK_BITS{14};
static constexpr u64 PAGE_BLOCK_SIZE{1 << PAGE_BLOCK_BITS};
static constexpr u64 PAGE_BLOCK_MASK{PAGE_BLOCK_SIZE - 1};
/// Finds the VMA that contains the given address, or `vma_map.end()` if there is none.
VMAHandle FindVMA(GPUVAddr target) const;
using PageBlock = std::array<VAddr, PAGE_BLOCK_SIZE>;
std::array<std::unique_ptr<PageBlock>, PAGE_TABLE_SIZE> page_table{};
VMAHandle AllocateMemory(GPUVAddr target, std::size_t offset, u64 size);
struct MappedRegion {
VAddr cpu_addr;
GPUVAddr gpu_addr;
u64 size;
};
/**
* Maps an unmanaged host memory pointer at a given address.
*
* @param target The guest address to start the mapping at.
* @param memory The memory to be mapped.
* @param size Size of the mapping.
* @param state MemoryState tag to attach to the VMA.
*/
VMAHandle MapBackingMemory(GPUVAddr target, u8* memory, u64 size, VAddr backing_addr);
std::vector<MappedRegion> mapped_regions;
/// Unmaps a range of addresses, splitting VMAs as necessary.
void UnmapRange(GPUVAddr target, u64 size);
/// Converts a VMAHandle to a mutable VMAIter.
VMAIter StripIterConstness(const VMAHandle& iter);
/// Marks the specified VMA as allocated.
VMAIter Allocate(VMAIter vma);
/**
* Carves a VMA of a specific size at the specified address by splitting Free VMAs while doing
* the appropriate error checking.
*/
VMAIter CarveVMA(GPUVAddr base, u64 size);
/**
* Splits the edges of the given range of non-Free VMAs so that there is a VMA split at each
* end of the range.
*/
VMAIter CarveVMARange(GPUVAddr base, u64 size);
/**
* Splits a VMA in two, at the specified offset.
* @returns the right side of the split, with the original iterator becoming the left side.
*/
VMAIter SplitVMA(VMAIter vma, u64 offset_in_vma);
/**
* Checks for and merges the specified VMA with adjacent ones if possible.
* @returns the merged VMA or the original if no merging was possible.
*/
VMAIter MergeAdjacent(VMAIter vma);
/// Updates the pages corresponding to this VMA so they match the VMA's attributes.
void UpdatePageTableForVMA(const VirtualMemoryArea& vma);
/// Finds a free (unmapped) region of the specified size starting at the specified address.
GPUVAddr FindFreeRegion(GPUVAddr region_start, u64 size);
private:
static constexpr u64 page_bits{16};
static constexpr u64 page_size{1 << page_bits};
static constexpr u64 page_mask{page_size - 1};
/// Address space width in bits; fairly arbitrary but sufficiently large.
static constexpr u32 address_space_width{39};
/// Start address for mapping; fairly arbitrary but must be non-zero.
static constexpr GPUVAddr address_space_base{0x100000};
/// End of address space, based on address space in bits.
static constexpr GPUVAddr address_space_end{1ULL << address_space_width};
Common::PageTable page_table{page_bits};
VMAMap vma_map;
};
} // namespace Tegra
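
As a quick sanity check on the constants introduced above: 16 page bits give 64 KiB GPU pages, and a 39-bit address space spans 512 GiB. Verified with the same values:

#include <cstdint>

constexpr std::uint64_t page_bits = 16;
constexpr std::uint64_t page_size = 1ULL << page_bits;
constexpr std::uint32_t address_space_width = 39;
constexpr std::uint64_t address_space_end = 1ULL << address_space_width;

static_assert(page_size == 64 * 1024, "64 KiB GPU pages");
static_assert(address_space_end == 512ULL * 1024 * 1024 * 1024,
              "512 GiB of GPU virtual address space");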


@@ -71,8 +71,8 @@ private:
bool is_registered{}; ///< Whether the object is currently registered with the cache
bool is_dirty{}; ///< Whether the object is dirty (out of sync with guest memory)
u64 last_modified_ticks{}; ///< When the object was last modified, used for in-order flushing
CacheAddr cache_addr{}; ///< Cache address memory, unique from emulated virtual address space
const u8* host_ptr{}; ///< Pointer to the memory backing this cached region
CacheAddr cache_addr{}; ///< Cache address memory, unique from emulated virtual address space
};
template <class T>
@@ -84,7 +84,7 @@ public:
/// Write any cached resources overlapping the specified region back to memory
void FlushRegion(CacheAddr addr, std::size_t size) {
std::lock_guard<std::recursive_mutex> lock{mutex};
std::lock_guard lock{mutex};
const auto& objects{GetSortedObjectsFromRegion(addr, size)};
for (auto& object : objects) {
@@ -94,7 +94,7 @@ public:
/// Mark the specified region as being invalidated
void InvalidateRegion(CacheAddr addr, u64 size) {
std::lock_guard<std::recursive_mutex> lock{mutex};
std::lock_guard lock{mutex};
const auto& objects{GetSortedObjectsFromRegion(addr, size)};
for (auto& object : objects) {
@@ -108,7 +108,7 @@ public:
/// Invalidates everything in the cache
void InvalidateAll() {
std::lock_guard<std::recursive_mutex> lock{mutex};
std::lock_guard lock{mutex};
while (interval_cache.begin() != interval_cache.end()) {
Unregister(*interval_cache.begin()->second.begin());
@@ -132,8 +132,8 @@ protected:
}
/// Register an object into the cache
void Register(const T& object) {
std::lock_guard<std::recursive_mutex> lock{mutex};
virtual void Register(const T& object) {
std::lock_guard lock{mutex};
object->SetIsRegistered(true);
interval_cache.add({GetInterval(object), ObjectSet{object}});
@@ -142,8 +142,8 @@ protected:
}
/// Unregisters an object from the cache
void Unregister(const T& object) {
std::lock_guard<std::recursive_mutex> lock{mutex};
virtual void Unregister(const T& object) {
std::lock_guard lock{mutex};
object->SetIsRegistered(false);
rasterizer.UpdatePagesCachedCount(object->GetCpuAddr(), object->GetSizeInBytes(), -1);
@@ -153,14 +153,14 @@ protected:
/// Returns a ticks counter used for tracking when cached objects were last modified
u64 GetModifiedTicks() {
std::lock_guard<std::recursive_mutex> lock{mutex};
std::lock_guard lock{mutex};
return ++modified_ticks;
}
/// Flushes the specified object, updating appropriate cache state as needed
void FlushObject(const T& object) {
std::lock_guard<std::recursive_mutex> lock{mutex};
std::lock_guard lock{mutex};
if (!object->IsDirty()) {
return;


@@ -9,7 +9,6 @@
#include "common/common_types.h"
#include "video_core/engines/fermi_2d.h"
#include "video_core/gpu.h"
#include "video_core/memory_manager.h"
namespace VideoCore {


@@ -15,14 +15,14 @@ namespace OpenGL {
CachedBufferEntry::CachedBufferEntry(VAddr cpu_addr, std::size_t size, GLintptr offset,
std::size_t alignment, u8* host_ptr)
: cpu_addr{cpu_addr}, size{size}, offset{offset}, alignment{alignment}, RasterizerCacheObject{
host_ptr} {}
: RasterizerCacheObject{host_ptr}, cpu_addr{cpu_addr}, size{size}, offset{offset},
alignment{alignment} {}
OGLBufferCache::OGLBufferCache(RasterizerOpenGL& rasterizer, std::size_t size)
: RasterizerCache{rasterizer}, stream_buffer(size, true) {}
GLintptr OGLBufferCache::UploadMemory(Tegra::GPUVAddr gpu_addr, std::size_t size,
std::size_t alignment, bool cache) {
GLintptr OGLBufferCache::UploadMemory(GPUVAddr gpu_addr, std::size_t size, std::size_t alignment,
bool cache) {
auto& memory_manager = Core::System::GetInstance().GPU().MemoryManager();
// Cache management is a big overhead, so only cache entries with a given size.


@@ -58,7 +58,7 @@ public:
/// Uploads data from a guest GPU address. Returns host's buffer offset where it's been
/// allocated.
GLintptr UploadMemory(Tegra::GPUVAddr gpu_addr, std::size_t size, std::size_t alignment = 4,
GLintptr UploadMemory(GPUVAddr gpu_addr, std::size_t size, std::size_t alignment = 4,
bool cache = true);
/// Uploads from a host memory. Returns host's buffer offset where it's been allocated.


@@ -15,7 +15,7 @@
namespace OpenGL {
CachedGlobalRegion::CachedGlobalRegion(VAddr cpu_addr, u32 size, u8* host_ptr)
: cpu_addr{cpu_addr}, size{size}, RasterizerCacheObject{host_ptr} {
: RasterizerCacheObject{host_ptr}, cpu_addr{cpu_addr}, size{size} {
buffer.Create();
// Bind and unbind the buffer so it gets allocated by the driver
glBindBuffer(GL_SHADER_STORAGE_BUFFER, buffer.handle);
@@ -46,7 +46,7 @@ GlobalRegion GlobalRegionCacheOpenGL::TryGetReservedGlobalRegion(CacheAddr addr,
return search->second;
}
GlobalRegion GlobalRegionCacheOpenGL::GetUncachedGlobalRegion(Tegra::GPUVAddr addr, u32 size,
GlobalRegion GlobalRegionCacheOpenGL::GetUncachedGlobalRegion(GPUVAddr addr, u32 size,
u8* host_ptr) {
GlobalRegion region{TryGetReservedGlobalRegion(ToCacheAddr(host_ptr), size)};
if (!region) {
@@ -76,8 +76,8 @@ GlobalRegion GlobalRegionCacheOpenGL::GetGlobalRegion(
const auto cbufs{gpu.Maxwell3D().state.shader_stages[static_cast<u64>(stage)]};
const auto addr{cbufs.const_buffers[global_region.GetCbufIndex()].address +
global_region.GetCbufOffset()};
const auto actual_addr{memory_manager.Read64(addr)};
const auto size{memory_manager.Read32(addr + 8)};
const auto actual_addr{memory_manager.Read<u64>(addr)};
const auto size{memory_manager.Read<u32>(addr + 8)};
// Look up global region in the cache based on address
const auto& host_ptr{memory_manager.GetPointer(actual_addr)};


@@ -66,7 +66,7 @@ public:
private:
GlobalRegion TryGetReservedGlobalRegion(CacheAddr addr, u32 size) const;
GlobalRegion GetUncachedGlobalRegion(Tegra::GPUVAddr addr, u32 size, u8* host_ptr);
GlobalRegion GetUncachedGlobalRegion(GPUVAddr addr, u32 size, u8* host_ptr);
void ReserveGlobalRegion(GlobalRegion region);
std::unordered_map<CacheAddr, GlobalRegion> reserve;


@@ -40,8 +40,7 @@ GLintptr PrimitiveAssembler::MakeQuadArray(u32 first, u32 count) {
return index_offset;
}
GLintptr PrimitiveAssembler::MakeQuadIndexed(Tegra::GPUVAddr gpu_addr, std::size_t index_size,
u32 count) {
GLintptr PrimitiveAssembler::MakeQuadIndexed(GPUVAddr gpu_addr, std::size_t index_size, u32 count) {
const std::size_t map_size{CalculateQuadSize(count)};
auto [dst_pointer, index_offset] = buffer_cache.ReserveMemory(map_size);


@@ -24,7 +24,7 @@ public:
GLintptr MakeQuadArray(u32 first, u32 count);
GLintptr MakeQuadIndexed(Tegra::GPUVAddr gpu_addr, std::size_t index_size, u32 count);
GLintptr MakeQuadIndexed(GPUVAddr gpu_addr, std::size_t index_size, u32 count);
private:
OGLBufferCache& buffer_cache;


@@ -100,11 +100,9 @@ struct FramebufferCacheKey {
}
};
RasterizerOpenGL::RasterizerOpenGL(Core::Frontend::EmuWindow& window, Core::System& system,
ScreenInfo& info)
: res_cache{*this}, shader_cache{*this, system}, global_cache{*this},
emu_window{window}, system{system}, screen_info{info},
buffer_cache(*this, STREAM_BUFFER_SIZE) {
RasterizerOpenGL::RasterizerOpenGL(Core::System& system, ScreenInfo& info)
: res_cache{*this}, shader_cache{*this, system}, global_cache{*this}, system{system},
screen_info{info}, buffer_cache(*this, STREAM_BUFFER_SIZE) {
// Create sampler objects
for (std::size_t i = 0; i < texture_samplers.size(); ++i) {
texture_samplers[i].Create();
@@ -225,8 +223,8 @@ void RasterizerOpenGL::SetupVertexBuffer(GLuint vao) {
if (!vertex_array.IsEnabled())
continue;
const Tegra::GPUVAddr start = vertex_array.StartAddress();
const Tegra::GPUVAddr end = regs.vertex_array_limit[index].LimitAddress();
const GPUVAddr start = vertex_array.StartAddress();
const GPUVAddr end = regs.vertex_array_limit[index].LimitAddress();
ASSERT(end > start);
const u64 size = end - start + 1;
@@ -421,8 +419,8 @@ std::size_t RasterizerOpenGL::CalculateVertexArraysSize() const {
if (!regs.vertex_array[index].IsEnabled())
continue;
const Tegra::GPUVAddr start = regs.vertex_array[index].StartAddress();
const Tegra::GPUVAddr end = regs.vertex_array_limit[index].LimitAddress();
const GPUVAddr start = regs.vertex_array[index].StartAddress();
const GPUVAddr end = regs.vertex_array_limit[index].LimitAddress();
ASSERT(end > start);
size += end - start + 1;


@@ -50,8 +50,7 @@ struct FramebufferCacheKey;
class RasterizerOpenGL : public VideoCore::RasterizerInterface {
public:
explicit RasterizerOpenGL(Core::Frontend::EmuWindow& window, Core::System& system,
ScreenInfo& info);
explicit RasterizerOpenGL(Core::System& system, ScreenInfo& info);
~RasterizerOpenGL() override;
void DrawArrays() override;
@@ -214,7 +213,6 @@ private:
ShaderCacheOpenGL shader_cache;
GlobalRegionCacheOpenGL global_cache;
Core::Frontend::EmuWindow& emu_window;
Core::System& system;
ScreenInfo& screen_info;
