Compare commits


156 Commits

Author SHA1 Message Date
Lioncash
9ac176d5a3 hle/service/audio/audout_u: Correct lack of return in failure case of AppendAudioOutBufferImpl()
Previously we were overwriting the error case with a success code
further down (which is definitely not what we should be doing here).
2019-03-06 11:44:32 -05:00
bunnei
234f00bdd4 Merge pull request #2194 from lioncash/mem
svc: Move memory range checking functions to the VMManager class
2019-03-06 11:43:07 -05:00
bunnei
5a57b1a09b Merge pull request #2200 from lioncash/audio
hle/service/audio: Extract audio error codes to a header
2019-03-06 10:52:45 -05:00
bunnei
22f105c06d Merge pull request #2203 from lioncash/engines-include
video_core/engines: Remove unnecessary includes
2019-03-06 10:51:27 -05:00
bunnei
10f08ab9ec Merge pull request #2198 from lioncash/todo
{kernel/thread, video_core/surface}: Remove obsolete TODOs
2019-03-06 10:51:03 -05:00
Lioncash
f9ee0dc7ee video_core/engines: Remove unnecessary includes
Removes a few unnecessary dependencies on core-related machinery, such
as core.h and memory.h, which reduces the amount of rebuilding
necessary if those files change.

This also uncovered some indirect dependencies within other source
files; those are fixed here as well.
2019-03-05 20:35:32 -05:00
Lioncash
ad9dbeb44b hle/service/audio: Extract audio error codes to a header
Places all error codes in an easily includable header.

This also corrects the unsupported error code (I accidentally used the
hex value when I meant to use the decimal one).
2019-03-05 16:51:37 -05:00
Lioncash
42085ff110 video_core/surface: Remove obsolete TODO in PixelFormatFromRenderTargetFormat()
This isn't needed anymore, according to Hexagon.
2019-03-05 10:15:06 -05:00
Lioncash
79f970e6de kernel/thread: Remove obsolete TODO in Create()
This is a TODO carried over from Citra that doesn't apply here.
2019-03-05 10:05:49 -05:00
bunnei
cc92c054ec Merge pull request #2185 from FearlessTobi/port-4630
Port citra-emu/citra#4630: "Memory: don't lock hle mutex in memory read/write"
2019-03-04 18:44:53 -05:00
Lioncash
40de7f6fe8 vm_manager: Use range helpers in HeapAlloc() and HeapFree()
Significantly tidies up two guard conditionals.
2019-03-04 17:16:52 -05:00
Lioncash
6c42a23550 vm_manager: Provide address range checking functions for other memory regions
Makes the interface uniform when it comes to checking various memory
regions.
2019-03-04 17:08:55 -05:00
Lioncash
0be8fffc99 svc: Migrate address range checking functions to VMManager
Provides a bit of a more proper interface for these functions.
2019-03-04 16:32:03 -05:00
bunnei
07e13d6728 Merge pull request #2165 from ReinUsesLisp/unbind-tex
gl_rasterizer: Unbind textures but don't apply the gl_state
2019-03-04 13:51:59 -05:00
bunnei
6ad66acce2 Merge pull request #2188 from lioncash/log-static
logging/backend: Move CreateEntry into the Impl class. Relocate local static to a class variable
2019-03-04 13:46:01 -05:00
bunnei
be6bf37224 Merge pull request #2189 from lioncash/web
web_service: Remove unnecessary inclusions
2019-03-03 22:56:49 -05:00
Lioncash
aa30fd75cd web_service: Remove unnecessary inclusions
Reduces the potential amount of rebuilding necessary if any headers
change. In particular, we were including a header from the core library
when we don't even link the core library to the web_service library, so
this also gets rid of an indirect dependency.
2019-03-02 14:58:49 -05:00
Mat M
169d19f7b9 Merge pull request #2154 from FearlessTobi/port-4647
Port citra-emu/citra#4647: "citra_qt/main: make SPEED_LIMIT_STEP static constexpr"
2019-03-02 14:46:04 -05:00
Lioncash
f8f1ff0b4f logging/backend: Make time_origin a class variable instead of a local static
Moves local global state into the Impl class itself and initializes it
at the creation of the instance instead of in the function.

This makes it nicer for weakly-ordered architectures, given that
CreateEntry() won't need to execute atomic loads (for the function-local
static's initialization guard) on each individual call.
2019-03-02 14:44:24 -05:00
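A minimal sketch of the difference, using hypothetical names rather than the actual yuzu code: a function-local static carries a thread-safe initialization guard that is consulted on every call, while a member initialized at construction pays that cost once.

#include <chrono>

// Before: the magic static's init guard is checked on each call.
std::chrono::microseconds TimestampWithLocalStatic() {
    static const auto time_origin = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(
        std::chrono::steady_clock::now() - time_origin);
}

// After: the origin is a data member, initialized once with the instance.
class Impl {
public:
    std::chrono::microseconds Timestamp() const {
        return std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - time_origin);
    }

private:
    std::chrono::steady_clock::time_point time_origin{std::chrono::steady_clock::now()};
};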
Lioncash
43c1092031 logging/backend: Move CreateEntry into the Impl class
This function is only ever used within this source file, and moving it
makes it easier to remove static state in the following change.
2019-03-02 14:44:24 -05:00
Mat M
a461e266ea Merge pull request #2183 from ReinUsesLisp/vk-buffer-cache-clang
vk_buffer_cache: Fix clang-format
2019-03-02 14:43:15 -05:00
James Rowe
2e2f6aa71a Merge pull request #2186 from honzapatCZ/patch-1
Yuzu can render 3D.
2019-03-02 10:10:01 -07:00
fearlessTobi
71c30a0a89 citra_qt/main: make SPEED_LIMIT_STEP static constexpr
MSVC does not seem to like using constexpr values in a lambda when they were declared outside of it.
Previously, on MSVC builds, the hotkeys to increase/decrease the speed limit were not working correctly because SPEED_LIMIT_STEP had garbage values inside the lambda.
After googling around a bit I found https://github.com/codeplaysoftware/computecpp-sdk/issues/95, which seems to be a similar issue.
Trying the suggested fix of making the variable static constexpr also fixes the bug here.
2019-03-02 17:43:19 +01:00
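A reduced sketch of the issue (illustrative values; the real constant lives in citra_qt/main):

// A constexpr int read inside a lambda is not odr-used, so no capture is
// required; affected MSVC versions nevertheless produced garbage for it:
//     constexpr int SPEED_LIMIT_STEP = 5;
//     auto dec = [](int limit) { return limit - SPEED_LIMIT_STEP; };
//
// The workaround applied by this commit: give the constant static storage
// duration, so the lambda reads a well-defined object.
int Decrease(int limit) {
    static constexpr int SPEED_LIMIT_STEP = 5;
    auto dec = [](int l) { return l - SPEED_LIMIT_STEP; };
    return dec(limit);
}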
Nejcraft
90fd257b47 Yuzu can render 3D.
Yuzu can now render 3D graphics to some degree.
2019-03-02 17:23:05 +01:00
Weiyi Wang
5159f4eee8 Memory: don't lock hle mutex in memory read/write
The comment already invalidates itself: neither MMIO nor the rasterizer cache belongs to HLE kernel state. This mutex has too large a scope if MMIO or the cache is included, which is prone to deadlock when multiple threads acquire these resources at the same time. If necessary, each MMIO component or the rasterizer should have its own lock.
2019-03-02 15:20:05 +01:00
bunnei
3c39b39bbc Merge pull request #2182 from bunnei/my-wasted-friday
fuck git for ruining my day, I will learn but I will not forgive
2019-03-02 00:57:15 -05:00
ReinUsesLisp
8e84e81e74 vk_buffer_cache: Fix clang-format 2019-03-02 02:16:45 -03:00
bunnei
e22670fbc3 Merge pull request #2178 from ReinUsesLisp/vk-buffer-cache
vk_buffer_cache: Implement a buffer cache
2019-03-02 00:13:33 -05:00
bunnei
ab70c2583d fuck git for ruining my day, I will learn but I will not forgive 2019-03-02 00:01:34 -05:00
ReinUsesLisp
35c105a108 vk_buffer_cache: Implement a buffer cache
This buffer cache is just like OpenGL's buffer cache with some minor
style changes. It uses VKStreamBuffer.
2019-03-01 17:33:36 -03:00
bunnei
1da8a0c2a8 Merge pull request #2173 from lioncash/capture
yuzu/compatdb: Remove unused lambda capture
2019-03-01 09:55:35 -05:00
bunnei
12e74fe801 Merge pull request #2180 from lioncash/audren
service/audio: Provide an implementation of ExecuteAudioRendererRendering
2019-03-01 09:50:14 -05:00
bunnei
115fc6120c Merge pull request #2181 from lioncash/audren2
service/audio/audren_u: Implement OpenAudioRendererAuto
2019-03-01 09:49:23 -05:00
Lioncash
84aff56644 service/audio/audren_u: Implement OpenAudioRendererAuto
This currently has the same behavior as the regular
OpenAudioRenderer API function, so we can just move the code within
OpenAudioRenderer to an internal function that both service functions
call.
2019-03-01 05:40:29 -05:00
Lioncash
42dc73157c service/audio: Provide an implementation of ExecuteAudioRendererRendering
This service function appears to do nothing noteworthy on the Switch.
All it does at the moment is either return an error code or abort the
system. Given we obviously don't want to kill the system, we just opt
for always returning the error code.
2019-03-01 03:37:00 -05:00
ReinUsesLisp
e85066dac7 gl_rasterizer: Remove texture unbinding after dispatching a draw call
Unbinding was required when OpenGL delete operations didn't unbind a
resource if it was bound. This is no longer needed and can be removed.
2019-02-28 00:17:50 -03:00
ReinUsesLisp
bb3ab7d66c gl_state: Fixup multibind bug 2019-02-28 00:17:03 -03:00
bunnei
49c6d21b31 Merge pull request #2174 from lioncash/fwd
service/hid: Amend forward declaration of ServiceManager
2019-02-27 21:20:06 -05:00
bunnei
1b13859af8 Merge pull request #2152 from ReinUsesLisp/vk-stream-buffer
vk_stream_buffer: Implement a stream buffer
2019-02-27 21:19:15 -05:00
bunnei
1f5d6a8fed Merge pull request #2121 from FernandoS27/texception2
Improve the Accuracy of the Rasterizer Cache through a Texception Pass
2019-02-27 21:17:55 -05:00
bunnei
66f4fd4c81 Merge pull request #2172 from lioncash/reorder
gl_rasterizer/vk_memory_manager: Silence -Wreorder warnings
2019-02-27 21:14:20 -05:00
Fernando Sahmkow
7ea097e5c2 Devirtualize Register/Unregister and use a wrapper instead. 2019-02-27 21:58:50 -04:00
Fernando Sahmkow
5a9204dbd7 Corrections and redesign. 2019-02-27 21:58:49 -04:00
Fernando Sahmkow
d6b9b51606 Fix linux compile error. 2019-02-27 21:58:48 -04:00
Fernando Sahmkow
e64fa4d2ea Remove NotifyFrameBuffer as we are doing a texception pass every drawcall. 2019-02-27 21:58:47 -04:00
Fernando Sahmkow
3558c88442 Remove certain optimizations that caused texception to fail in certain scenarios. 2019-02-27 21:58:45 -04:00
Fernando Sahmkow
e9d84ef22c Bug fixes and formatting 2019-02-27 21:58:44 -04:00
Fernando Sahmkow
5bc82d124c rasterizer_cache_gl: Implement Texception Pass 2019-02-27 21:58:43 -04:00
Fernando Sahmkow
8932001610 rasterizer_cache_gl: Implement Partial Reinterpretation of Surfaces. 2019-02-27 21:58:40 -04:00
Fernando Sahmkow
44ea2810e4 rasterizer_cache: mark reinterpreted surfaces and add ability to reload marked surfaces on next use. 2019-02-27 21:58:39 -04:00
Fernando Sahmkow
d583fc1e97 rasterizer_cache_gl: Notify on framebuffer change 2019-02-27 21:58:37 -04:00
Fernando Sahmkow
45b6d2d349 rasterizer_cache: Expose FlushObject to Child classes and allow redefining of Register and Unregister 2019-02-27 21:57:33 -04:00
bunnei
f15e2dd881 Merge pull request #2163 from ReinUsesLisp/bitset-dirty
maxwell_3d: Use std::bitset to manage dirty flags
2019-02-27 20:50:08 -05:00
Annomatg
ef84c70d22 Speed up memory page mapping (#2141)
- Memory::MapPages total sample count was reduced from 4.6% to 1.06%.
- From the main menu into the game: from 1.03% to 0.35%.
2019-02-27 17:22:47 -05:00
bunnei
532dda0499 Merge pull request #2176 from lioncash/com
audio_core/cubeb_sink: Ensure COM is initialized on Windows prior to calling cubeb_init
2019-02-27 17:12:06 -05:00
Lioncash
1068c1b06f audio_core/cubeb_sink: Ensure COM is initialized on Windows prior to calling cubeb_init
cubeb now requires that COM explicitly be initialized on the thread
prior to calling cubeb_init.
2019-02-27 16:14:53 -05:00
Lioncash
6335bf136f service/hid: Amend forward declaration of ServiceManager
The SM namespace is within the Service namespace, so this was forward
declaring a type that didn't exist.
2019-02-27 11:36:48 -05:00
Lioncash
456c7043bd yuzu/compatdb: Remove unused lambda capture
Silences a compiler warning with clang.
2019-02-27 11:30:36 -05:00
bunnei
42f7c11021 Merge pull request #2169 from lioncash/naming
audio_core/audio_renderer: Provide names for some parameters of AudioRendererParameter
2019-02-27 11:26:54 -05:00
bunnei
14430f7df9 Merge pull request #2170 from lioncash/emu-window
core/frontend/emu_window: Make ClipToTouchScreen a const member function
2019-02-27 11:26:24 -05:00
bunnei
eb5a3dd1c7 Merge pull request #2161 from lioncash/handle-table
kernel/handle_table: Allow process capabilities to limit the handle table size
2019-02-27 11:22:26 -05:00
bunnei
be1a1584fc Merge pull request #2168 from lioncash/cubeb
externals: Update cubeb to the master version
2019-02-27 11:20:14 -05:00
bunnei
66e023fba2 Merge pull request #2167 from lioncash/namespace
common: Move Quaternion, Rectangle, Vec2, Vec3, and Vec4 into the Common namespace
2019-02-27 11:19:53 -05:00
bunnei
b27e6ad912 Merge pull request #2171 from lioncash/pragma
gl_shader_disk_cache: Remove #pragma once from cpp file
2019-02-27 11:19:17 -05:00
Lioncash
16ea93c11e vk_memory_manager: Reorder constructor initializer list in terms of member declaration order
Reorders members in the order that they would actually be initialized
in. Silences a -Wreorder warning.
2019-02-27 11:08:19 -05:00
Lioncash
a6a783b3dc gl_rasterizer: Reorder constructor initializer list in terms of member declaration order
Orders the members in the order they would actually be initialized in.
Silences a -Wreorder warning.
2019-02-27 11:08:19 -05:00
Lioncash
e7eff72e83 gl_shader_disk_cache: Remove #pragma once from cpp file
This is only necessary in headers. Silences a warning with clang.
2019-02-27 11:02:49 -05:00
Lioncash
46b3209abb core/frontend/emu_window: Make ClipToTouchScreen a const member function
This member function doesn't modify instance state, so it can have the
const specifier applied to it.
2019-02-27 08:54:42 -05:00
Lioncash
0e1b5acc6a audio_core/audio_renderer: Name previously unknown parameters of AudioRendererParameter
Provides names for previously unknown entries (aside from the two u8
that appear to be padding bytes, and a single word that also appears
to be reserved or padding).

This will be useful in subsequent changes when unstubbing behavior related
to the audio renderer services.
2019-02-27 06:09:07 -05:00
Lioncash
b9238edd0d common/math_util: Move contents into the Common namespace
These types are within the common library, so they should be within the
Common namespace.
2019-02-27 03:38:39 -05:00
Lioncash
e56f32a071 externals: Update cubeb to 6f2420de8f155b10330cf973900ac7bdbfee589d
Keeps the audio library we use up to date.
2019-02-27 01:21:51 -05:00
Lioncash
1b855efd5e common/vector_math: Move Vec[x] types into the Common namespace
These types are within the common library, so they should be using the
Common namespace.
2019-02-26 22:38:36 -05:00
Lioncash
a1574aabd5 common/quaternion: Move Quaternion into the Common namespace
Quaternion is within the common library, so it should be using the
Common namespace.
2019-02-26 22:31:17 -05:00
bunnei
10d1d58390 Merge pull request #2164 from ReinUsesLisp/configure-blit
renderer_opengl: Update pixel format tracking
2019-02-26 12:12:10 -05:00
ReinUsesLisp
d91e35a50a renderer_opengl: Update pixel format tracking 2019-02-26 03:47:16 -03:00
ReinUsesLisp
5219edd715 maxwell_3d: Use std::bitset to manage dirty flags 2019-02-26 03:01:48 -03:00
ReinUsesLisp
730eb1dad7 vk_stream_buffer: Remove copy code path 2019-02-26 02:09:43 -03:00
bunnei
c3471bf618 Merge pull request #2156 from FreddyFunk/patch-1
file_sys/vfs_vector: Fix ignored offset on Write
2019-02-25 18:28:58 -05:00
bunnei
da1b45de34 Merge pull request #2158 from lioncash/table
service/vi: Update IManagerDisplayService's function table
2019-02-25 18:27:43 -05:00
bunnei
1cffd3848b Merge pull request #2160 from lioncash/audio-warn
audio_core: Resolve compilation warnings
2019-02-25 18:25:36 -05:00
bunnei
93c1630570 Merge pull request #2159 from lioncash/warn
shader/track: Resolve variable shadowing warnings
2019-02-25 13:26:00 -05:00
Lioncash
d29f9e9709 kernel/handle_table: Make local variables as const where applicable
Makes immutable state explicit.
2019-02-25 11:12:38 -05:00
Lioncash
5167d1577d kernel/handle_table: Allow process capabilities to limit the handle table size
The kernel allows restricting the total size of the handle table through
the process capability descriptors. Until now, this functionality wasn't
hooked up. With this, the process handle tables become properly restricted.

In the case of metadata-less executables, the handle table will assume
the maximum size is requested, preserving the behavior that existed
before these changes.
2019-02-25 11:12:32 -05:00
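Conceptually, the hookup amounts to clamping a requested table size at creation time. A rough sketch with hypothetical names (the actual HandleTable interface differs; 1024 is the kernel's absolute cap):

#include <algorithm>
#include <cstddef>

class HandleTable {
public:
    static constexpr std::size_t MAX_COUNT = 1024; // absolute kernel maximum

    // Limit taken from the process capability descriptors; executables
    // without metadata pass MAX_COUNT, preserving the previous behavior.
    void SetSize(std::size_t requested) {
        table_size = std::min(requested, MAX_COUNT);
    }

private:
    std::size_t table_size = MAX_COUNT;
};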
Lioncash
4f8cd74061 kernel/handle-table: In-class initialize data members
Directly initializes members where applicable.
2019-02-25 10:14:05 -05:00
Lioncash
0220862ba5 kernel/handle_table: Resolve truncation warnings
Avoids implicit truncation warnings from u32 -> u16 (the truncation is
desirable behavior here).
2019-02-25 09:53:21 -05:00
Lioncash
04d7b7e09d audio_core/cubeb_sink: Initialize CubebSinkStream's last_frame data member
Ensures that all member variables are initialized in a deterministic
manner across the board.
2019-02-25 09:40:37 -05:00
Lioncash
8250f9bb1c audio_core/cubeb_sink: Add override specifier to destructor
CubebSinkStream inherits from a base class with a virtual destructor, so
override can be appended to CubebSinkStream's destructor.
2019-02-25 09:38:27 -05:00
Lioncash
7cdeec20ec audio_core/cubeb_sink: Resolve variable shadowing warnings in SamplesInQueue
The name of the parameter was shadowing the member variable of the same
name. Instead, alter the name of the parameter to prevent said
shadowing.
2019-02-25 09:28:51 -05:00
Lioncash
a12f4efa2f audio_core/codec: Resolve truncation warnings within DecodeADPCM
The assignments here were performing an implicit truncation from int to
s16. Make it explicit that this is desired behavior.
2019-02-25 09:24:39 -05:00
Lioncash
c1b2e35625 shader/track: Resolve variable shadowing warnings 2019-02-25 09:10:59 -05:00
Lioncash
be7dad5e7e service/vi: Update IManagerDisplayService's function table
Amends it to add the 7.0.0+ CreateStrayLayer function.
2019-02-25 08:09:00 -05:00
bunnei
c07987dfab Merge pull request #2118 from FernandoS27/ipa-improve
shader_decompiler: Improve Accuracy of Attribute Interpolation.
2019-02-24 23:04:22 -05:00
bunnei
c4243c07cc Merge pull request #2119 from FernandoS27/fix-copy
rasterizer_cache_gl: Only do fast layered copy on the same format.
2019-02-24 23:03:52 -05:00
bunnei
c6170565b5 Merge pull request #2155 from FearlessTobi/port-4655
Port citra-emu/citra#4655: "Remove GCC version checks"
2019-02-24 23:03:13 -05:00
bunnei
57985fb16a Merge pull request #2144 from lioncash/factor
service/vi: Convert Display and Layer structs into classes
2019-02-24 23:02:50 -05:00
Frederic L
517933adcb file_sys/vfs_vector: Fix ignored offset on Write 2019-02-25 00:27:49 +01:00
tgsm
030814b1cb Remove GCC version checks
Citra can't be compiled using GCC <7 because of required C++17 support, so these version checks don't need to exist anymore.
2019-02-24 15:24:06 +01:00
bunnei
90c780e6f3 Merge pull request #2139 from degasus/dma_pusher
video_core/dma_pusher: The full list of headers at once.
2019-02-24 04:15:49 -05:00
ReinUsesLisp
33a0597603 vk_stream_buffer: Implement a stream buffer
This manages two kinds of streaming buffers: one for unified memory
models and one for dedicated GPUs. The first one skips the copy from the
staging buffer to the real buffer, since it creates a unified buffer.

This implementation waits for all fences to finish their operation
before "invalidating". This is suboptimal since it should allocate
another buffer or start searching from the beginning. There is room for
improvement here.

This could also handle AMD's "pinned" memory (a heap with 256 MiB) that
seems to be designed for buffer streaming.
2019-02-24 04:27:51 -03:00
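A rough sketch of the two paths, following the behavior described above (stub types and names, not the actual VKStreamBuffer interface):

#include <cstdint>

struct StreamBuffer {
    std::uint64_t offset = 0;
    std::uint64_t buffer_size = 0;
    bool unified_memory = false; // host-visible buffer doubles as the real one

    std::uint64_t Reserve(std::uint64_t bytes) {
        if (offset + bytes > buffer_size) {
            // The suboptimal "invalidation" noted above: wait on every fence
            // and restart from the beginning of the buffer.
            WaitForAllFences();
            offset = 0;
        }
        const std::uint64_t start = offset;
        offset += bytes;
        return start;
    }

    void Commit(std::uint64_t start, std::uint64_t bytes) {
        if (unified_memory) {
            return; // unified model: no staging copy is needed
        }
        CopyStagingToDevice(start, bytes); // dedicated GPU model
    }

    void WaitForAllFences() {}                                 // stubbed
    void CopyStagingToDevice(std::uint64_t, std::uint64_t) {}  // stubbed
};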
ReinUsesLisp
281a8bf259 vk_resource_manager: Minor VKFenceWatch changes 2019-02-24 04:19:04 -03:00
bunnei
f7090bacc5 Merge pull request #2146 from ReinUsesLisp/vulkan-scheduler
vk_scheduler: Implement a scheduler
2019-02-23 23:32:43 -05:00
bunnei
d062991643 Merge pull request #2150 from ReinUsesLisp/fixup-layer-swizzle
gl_rasterizer_cache: Fixup parameter order in layered swizzle
2019-02-23 23:31:34 -05:00
bunnei
4ab978d670 Merge pull request #2151 from ReinUsesLisp/fixup-vk-memory-manager
vk_memory_manager: Fixup commit interval allocation
2019-02-23 23:29:53 -05:00
ReinUsesLisp
92050c4d86 vk_memory_manager: Fixup commit interval allocation
VKMemoryCommitImpl was using "begin + end" as the end of its interval,
which ended up wasting memory.
2019-02-24 01:04:41 -03:00
ReinUsesLisp
abef11a540 gl_rasterizer_cache: Fixup parameter order in layered swizzle 2019-02-23 23:27:30 -03:00
ReinUsesLisp
f546fb35ed vk_scheduler: Implement a scheduler
The scheduler abstracts command buffer and fence management with an
interface that's able to do OpenGL-like operations on Vulkan command
buffers.

It returns by value a command buffer and a fence that have to be used for
subsequent operations until Flush or Finish is executed; after that, the
current execution context (the pair of command buffer and fence) gets
invalidated and a new one must be fetched. Thankfully, validation layers
will quickly detect if this is skipped, throwing an error due to
modifications to an already-submitted command buffer.
2019-02-22 01:33:32 -03:00
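The flow described above, sketched with stub types (the actual VKScheduler API differs):

struct CommandBuffer {};
struct Fence {};

struct ExecutionContext {
    CommandBuffer* cmdbuf;
    Fence* fence;
};

struct Scheduler {
    ExecutionContext GetExecutionContext() { return {&cmdbuf, &fence}; }
    ExecutionContext Flush() {
        // Submits the current command buffer and hands back a fresh pair;
        // the previous context must not be touched after this point.
        return GetExecutionContext();
    }
    CommandBuffer cmdbuf;
    Fence fence;
};

void Example(Scheduler& scheduler) {
    auto exctx = scheduler.GetExecutionContext();
    // ... record commands into *exctx.cmdbuf, protect resources with *exctx.fence ...
    exctx = scheduler.Flush(); // old pair is invalid; continue with the new one
}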
bunnei
94b27bb8a5 Merge pull request #2138 from ReinUsesLisp/vulkan-memory-manager
vk_memory_manager: Implement memory manager
2019-02-21 22:26:54 -05:00
Lioncash
90528f1326 service/nvflinger: Store BufferQueue instances as regular data members
The NVFlinger service is already passed into services that need to
guarantee its lifetime, so the BufferQueue instances will already live
as long as they're needed. Making them std::shared_ptr instances in this
case is unnecessary.
2019-02-21 22:09:46 -05:00
Lioncash
fd15730767 service/vi/vi_layer: Convert Layer struct into a class
Like the previous changes made to the Display struct, this prepares the
Layer struct for changes to its interface. Given Layer will be given
more invariants in the future, we convert it into a class to better
signify that.
2019-02-21 12:13:09 -05:00
Lioncash
fa4dc2cf42 service/nvflinger: Move display specifics over to vi_display
With the display and layer structures relocated to the vi service, we
can begin giving these a proper interface before beginning to properly
support the display types.

This converts the display struct into a class and provides it with the
necessary functions to preserve behavior within the NVFlinger class.
2019-02-21 12:13:04 -05:00
bunnei
9539c4203b Merge pull request #2125 from ReinUsesLisp/fixup-glstate
gl_state: Synchronize gl_state even when state is disabled
2019-02-20 21:47:46 -05:00
bunnei
ae437320c8 Merge pull request #2130 from lioncash/system_engine
video_core: Remove usages of System::GetInstance() within the engines
2019-02-20 21:24:56 -05:00
Jungy
3273f93cd5 Fixes Unicode Key File Directories (#2120)
* Fixes Unicode Key File Directories

Adds code so that when loading a file it converts to UTF16 first, to
ensure the files can be opened. Code borrowed from FileUtil::Exists.

* Update src/core/crypto/key_manager.cpp

Co-Authored-By: Jungorend <Jungorend@users.noreply.github.com>

* Update src/core/crypto/key_manager.cpp

Co-Authored-By: Jungorend <Jungorend@users.noreply.github.com>

* Using FileUtil instead to be cleaner.

* Update src/core/crypto/key_manager.cpp

Co-Authored-By: Jungorend <Jungorend@users.noreply.github.com>
2019-02-20 21:24:25 -05:00
bunnei
ef559f5741 Merge pull request #2142 from lioncash/relocate
service/nvflinger: Relocate definitions of Layer and Display to the vi service
2019-02-20 21:21:55 -05:00
Lioncash
8d5d369b54 service/nvflinger: Relocate definitions of Layer and Display to the vi service
These are more closely related to the vi service as opposed to the
intermediary nvflinger.

This also places them in their relevant subfolder, as future changes to
these will likely result in subclassing to represent various displays
and services, as they're done within the service itself on hardware.

The reasoning for prefixing the display and layer source files is to
avoid potential clashing if two files with the same name are compiled
(e.g. if 'display.cpp/.h' or 'layer.cpp/.h' is added to another service
at any point), which MSVC will actually warn against. This prevents that
case from occurring.

This also presently converts the std::array introduced within
f45c25aaba back to a std::vector to allow
the forward declaration of the Display type. Forward declaring a type
within a std::vector is allowed since the introduction of N4510
(http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4510.html) by
Zhihao Yuan.
2019-02-19 18:27:16 -05:00
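For reference, the relaxation N4510 provides is what makes the following pattern legal, sketched here with the names from this change:

#include <vector>

class Display; // incomplete type at this point

class NVFlinger {
    // OK since N4510: std::vector may be instantiated with an incomplete
    // element type, as long as Display is complete wherever the vector's
    // members are actually used.
    std::vector<Display> displays;
};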
Markus Wick
6dd40976d0 video_core/dma_pusher: Simplify Step() logic.
As fetching the command list header and the list of command headers is now a fixed 1:1 relation, they can be implemented within a single call.
This cleans up the Step() logic quite a bit.
2019-02-19 10:28:42 +01:00
Markus Wick
717394c980 video_core/dma_pusher: The full list of headers at once.
Fetching every u32 from memory individually leads to big overhead, so let's fetch all of them as one block where possible.
This reduces the Memory::* calls by the dma_pusher by a factor of 10.
2019-02-19 09:58:38 +01:00
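A sketch of the change, assuming a block-read helper in the style of yuzu's Memory interface (no-op stand-in here):

#include <cstddef>
#include <cstdint>
#include <vector>

namespace Memory {
inline void ReadBlock(std::uint64_t, void*, std::size_t) {} // stand-in
}

// Before: one Memory call per header word.
// for (std::size_t i = 0; i < count; ++i)
//     headers[i] = Memory::Read32(addr + i * sizeof(std::uint32_t));

// After: the whole command list is fetched in a single block read.
std::vector<std::uint32_t> FetchCommandHeaders(std::uint64_t addr, std::size_t count) {
    std::vector<std::uint32_t> headers(count);
    Memory::ReadBlock(addr, headers.data(), headers.size() * sizeof(std::uint32_t));
    return headers;
}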
ReinUsesLisp
b675c97cdd vk_memory_manager: Implement memory manager
A memory manager object handles the memory allocations for a device. It
allocates chunks of Vulkan memory objects and then suballocates.
2019-02-19 03:42:28 -03:00
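The allocate-then-suballocate scheme, reduced to a sketch (illustrative types; the real manager also deals with memory types and alignment):

#include <cstdint>
#include <memory>
#include <vector>

struct Chunk { // one large device-memory allocation
    explicit Chunk(std::uint64_t size_) : size{size_} {}
    std::uint64_t size;
    std::uint64_t used = 0;

    // Returns the offset of the commit inside this chunk, or UINT64_MAX if full.
    std::uint64_t Commit(std::uint64_t bytes) {
        if (used + bytes > size) {
            return UINT64_MAX;
        }
        const std::uint64_t offset = used;
        used += bytes;
        return offset;
    }
};

struct MemoryManager {
    std::uint64_t Commit(std::uint64_t bytes) {
        for (auto& chunk : chunks) {
            const std::uint64_t offset = chunk->Commit(bytes);
            if (offset != UINT64_MAX) {
                return offset;
            }
        }
        // No chunk had room: allocate a bigger one and suballocate from it.
        chunks.push_back(std::make_unique<Chunk>(bytes * 2));
        return chunks.back()->Commit(bytes);
    }
    std::vector<std::unique_ptr<Chunk>> chunks;
};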
bunnei
4bce08d497 Merge pull request #2122 from ReinUsesLisp/vulkan-resource-manager
vk_resource_manager: Implement fence and command buffer allocator
2019-02-18 21:05:28 -05:00
bunnei
2bb02a0b78 Merge pull request #2134 from lioncash/naming
audio_core/buffer: Make const and non-const getter for samples consistent
2019-02-17 11:26:33 -05:00
bunnei
e869c5ef1a Merge pull request #2133 from lioncash/arbiter
address_arbiter: Use nested namespaces where applicable
2019-02-16 15:37:21 -05:00
bunnei
4699fdca8f Merge pull request #2127 from FearlessTobi/fix-screenshot-srgb
renderer_opengl: respect the sRGB colorspace for the screenshot feature
2019-02-16 15:36:00 -05:00
bunnei
cd7e1183e2 Merge pull request #2128 from FearlessTobi/port-4197
Port citra-emu/citra#4197: "threadsafe_queue: Add PopWait and use it where possible "
2019-02-16 15:34:49 -05:00
Lioncash
b009bda67a audio_core/buffer: Make const and non-const getter for samples consistent
This way proper const/non-const selection can occur.
2019-02-16 15:21:35 -05:00
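The resulting pattern is the standard const-overload pair (abridged; the diff further down shows the rename to GetSamples()):

#include <cstdint>
#include <vector>

class Buffer {
public:
    // With both overloads sharing one name, the compiler selects the right
    // one based on the constness of the Buffer being accessed.
    std::vector<std::int16_t>& GetSamples() {
        return samples;
    }
    const std::vector<std::int16_t>& GetSamples() const {
        return samples;
    }

private:
    std::vector<std::int16_t> samples;
};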
Lioncash
0113c36300 address_arbiter: Use nested namespaces where applicable
A fairly trivial change. Other sections of the codebase use nested
namespaces instead of separate namespaces here. This one must have just
been overlooked.
2019-02-16 12:41:30 -05:00
Lioncash
a8fa5019b5 video_core: Remove usages of System::GetInstance() within the engines
Avoids the use of the global accessor in favor of explicitly making the
system a dependency within the interface.
2019-02-15 22:06:23 -05:00
James Rowe
99da6362c4 Merge pull request #2123 from lioncash/coretiming-global
core_timing: De-globalize core_timing facilities
2019-02-15 19:52:11 -07:00
Lioncash
bd983414f6 core_timing: Convert core timing into a class
Gets rid of the largest set of mutable global state within the core.
This also paves a way for eliminating usages of GetInstance() on the
System class as a follow-up.

Note that no behavioral changes have been made, and this simply extracts
the functionality into a class. This also has the benefit of making
dependencies on the core timing functionality explicit within the
relevant interfaces.
2019-02-15 21:50:25 -05:00
B3n30
2195f10d15 Addressed review comments 2019-02-15 22:14:54 +01:00
B3n30
4154936568 threadsafe_queue: Add WaitIfEmpty and use it in logging 2019-02-15 22:12:54 +01:00
fearlessTobi
9a56b99fa4 renderer_opengl: respect the sRGB colorspace for the screenshot feature
Previously, we were completely ignoring whether the game uses RGB or sRGB when taking screenshots.
This resulted in screenshot colors that looked off for some titles.
2019-02-15 21:27:29 +01:00
ReinUsesLisp
8dfc81239f gl_state: Synchronize gl_state even when state is disabled
There are some potential edge cases where gl_state may fail to track
state if a related piece of state changes while the toggle is disabled,
or if it didn't change at all. This addresses that.
2019-02-15 01:30:14 -03:00
bunnei
4327f430f1 Merge pull request #2112 from lioncash/shadowing
gl_rasterizer_cache: Get rid of variable shadowing
2019-02-14 21:45:20 -05:00
bunnei
a8fc5d6edd Merge pull request #2111 from ReinUsesLisp/fetch-fix
gl_shader_decompiler: Re-implement TLDS lod
2019-02-14 21:42:34 -05:00
ReinUsesLisp
ae6c052ed9 vk_resource_manager: Implement a command buffer pool with VKFencedPool 2019-02-14 18:44:26 -03:00
ReinUsesLisp
a2b6de7e9f vk_resource_manager: Add VKFencedPool interface
Handles a pool of resources protected by fences. Manages resource
overflow by allocating more resources.

This class is intended to be used through inheritance.
2019-02-14 18:44:26 -03:00
ReinUsesLisp
0ffdd0a683 vk_resource_manager: Implement VKResourceManager and fence allocator
CommitFence iterates the pool of fences until a free one is found. If
all fences are in use at the same time, more are allocated.
2019-02-14 18:44:26 -03:00
ReinUsesLisp
aa0b6babda vk_resource_manager: Implement VKFenceWatch
A fence watch is used to keep track of the usage of a fence and protect
a resource or set of resources without having to inherit from their
handlers.
2019-02-14 18:44:26 -03:00
ReinUsesLisp
25c2fe1c6b vk_resource_manager: Implement VKFence
Fences take ownership of objects, protecting them from GPU-side or
driver-side concurrent access. They must be committed from the resource
manager. Their usage flow is: commit the fence from the resource
manager, protect resources with it and use them, send the fence to an
execution queue, Wait for it if needed, and then call Release. Used
resources will automatically be signaled when they are free to be
reused.
2019-02-14 18:44:26 -03:00
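The usage flow spelled out as code, with stub types standing in for the real API:

struct VKFence {
    void Wait() {}    // block until the GPU has signaled the fence
    void Release() {} // protected resources get signaled once free to reuse
};

struct VKResourceManager {
    VKFence& CommitFence() { return fence; } // the real one searches a pool
    VKFence fence;
};

void Example(VKResourceManager& resource_manager) {
    VKFence& fence = resource_manager.CommitFence(); // 1. commit from the manager
    // 2. protect resources with the fence and use them...
    // 3. send the fence along with the work to an execution queue...
    fence.Wait();    // 4. wait for it if needed
    fence.Release(); // 5. release it back to the manager
}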
ReinUsesLisp
33a4cebc22 vk_resource_manager: Add VKResource interface
VKResource is an interface that gets signaled by a fence when it is free
to be reused.
2019-02-14 18:36:15 -03:00
bunnei
fcc3aa0bbf Merge pull request #2113 from ReinUsesLisp/vulkan-base
vulkan: Add dependencies and device abstraction
2019-02-14 10:06:48 -05:00
Fernando Sahmkow
10682ad7e0 shader_decompiler: Improve Accuracy of Attribute Interpolation. 2019-02-14 03:25:07 -04:00
bunnei
8490e7746a Merge pull request #2115 from lioncash/local
core_timing: Make EmptyTimedCallback a local variable
2019-02-13 21:42:04 -05:00
bunnei
f0c4ac9abd Merge pull request #2116 from lioncash/size
threadsafe_queue: Remove NeedSize template parameter
2019-02-13 21:41:25 -05:00
Fernando Sahmkow
bb41683394 rasterizer_cache_gl: Only do fast layered copy on the same format.
glCopyImageSubData does not support different formats.
2019-02-13 16:55:00 -04:00
Lioncash
0829ef97ca threadsafe_queue: Use std::size_t for representing size
Makes it consistent with the regular standard containers in terms of
size representation. This also gets rid of dependence on our own
type aliases, removing the need for an include.
2019-02-12 22:39:53 -05:00
Lioncash
f0bfb24c61 threadsafe_queue: Remove NeedSize template parameter
The necessity of this parameter is dubious at best, and in 2019 probably
offers completely negligible savings as opposed to just leaving this
enabled. This removes it and simplifies the overall interface.
2019-02-12 22:09:51 -05:00
Lioncash
83ba3515ec core_timing: Make EmptyTimedCallback a local variable
Given this is only used in one place, it can be moved closest to its
usage site.
2019-02-12 21:47:18 -05:00
ReinUsesLisp
8beca060d1 vk_device: Abstract device handling into a class
VKDevice contains all the data required to manage and initialize a
physical device. Its intention is to be passed across Vulkan objects to
query device-specific data (for example the logical device and the
dispatch loader).
2019-02-12 21:43:02 -03:00
Lioncash
054e39647c gl_rasterizer_cache: Remove unnecessary newline 2019-02-12 16:56:19 -05:00
Lioncash
e25c464c02 gl_rasterizer_cache: Get rid of variable shadowing
Avoids shadowing the members of the struct itself, which results in a
-Wshadow warning.
2019-02-12 16:46:39 -05:00
ReinUsesLisp
18fe910957 renderer_vulkan: Add declarations file
This file is intended to be included instead of vulkan/vulkan.hpp. It
includes declarations of unique handles using a dynamic dispatcher
instead of a static one (which would require linking to a Vulkan
library).
2019-02-12 18:33:02 -03:00
ReinUsesLisp
b12ab4d805 logging: Add Vulkan backend logging class type 2019-02-12 18:33:02 -03:00
ReinUsesLisp
cc94a6d101 cmake: Add Vulkan option 2019-02-12 18:33:02 -03:00
ReinUsesLisp
afb8af9853 gitmodules: Add Vulkan headers dependency 2019-02-12 18:33:02 -03:00
ReinUsesLisp
e60d4d70bc gl_shader_decompiler: Re-implement TLDS lod 2019-02-12 17:03:07 -03:00
152 changed files with 3530 additions and 1086 deletions

.gitmodules (vendored)

@@ -37,3 +37,6 @@
[submodule "discord-rpc"]
path = externals/discord-rpc
url = https://github.com/discordapp/discord-rpc.git
[submodule "Vulkan-Headers"]
path = externals/Vulkan-Headers
url = https://github.com/KhronosGroup/Vulkan-Headers.git


@@ -23,6 +23,8 @@ option(YUZU_USE_QT_WEB_ENGINE "Use QtWebEngine for web applet implementation" OF
option(ENABLE_CUBEB "Enables the cubeb audio backend" ON)
option(ENABLE_VULKAN "Enables Vulkan backend" ON)
option(USE_DISCORD_PRESENCE "Enables Discord Rich Presence" OFF)
if(NOT EXISTS ${PROJECT_SOURCE_DIR}/.git/hooks/pre-commit)


@@ -7,7 +7,7 @@ yuzu is an experimental open-source emulator for the Nintendo Switch from the cr
It is written in C++ with portability in mind, with builds actively maintained for Windows, Linux and macOS. The emulator is currently only useful for homebrew development and research purposes.
yuzu only emulates a subset of Switch hardware and therefore is generally only useful for running/debugging homebrew applications. At this time, yuzu cannot play any commercial games without major problems. yuzu can boot some games, to varying degrees of success, but does not implement any of the necessary GPU features to render 3D graphics.
yuzu only emulates a subset of Switch hardware and therefore is generally only useful for running/debugging homebrew applications. At this time, yuzu cannot play any commercial games without major problems. yuzu can boot some games, to varying degrees of success.
yuzu is licensed under the GPLv2 (or any later version). Refer to the license.txt file included.

externals/Vulkan-Headers (vendored submodule)


@@ -26,14 +26,15 @@ static Stream::Format ChannelsToStreamFormat(u32 num_channels) {
return {};
}
StreamPtr AudioOut::OpenStream(u32 sample_rate, u32 num_channels, std::string&& name,
StreamPtr AudioOut::OpenStream(Core::Timing::CoreTiming& core_timing, u32 sample_rate,
u32 num_channels, std::string&& name,
Stream::ReleaseCallback&& release_callback) {
if (!sink) {
sink = CreateSinkFromID(Settings::values.sink_id, Settings::values.audio_device_id);
}
return std::make_shared<Stream>(
sample_rate, ChannelsToStreamFormat(num_channels), std::move(release_callback),
core_timing, sample_rate, ChannelsToStreamFormat(num_channels), std::move(release_callback),
sink->AcquireSinkStream(sample_rate, num_channels, name), std::move(name));
}


@@ -13,6 +13,10 @@
#include "audio_core/stream.h"
#include "common/common_types.h"
namespace Core::Timing {
class CoreTiming;
}
namespace AudioCore {
/**
@@ -21,8 +25,8 @@ namespace AudioCore {
class AudioOut {
public:
/// Opens a new audio stream
StreamPtr OpenStream(u32 sample_rate, u32 num_channels, std::string&& name,
Stream::ReleaseCallback&& release_callback);
StreamPtr OpenStream(Core::Timing::CoreTiming& core_timing, u32 sample_rate, u32 num_channels,
std::string&& name, Stream::ReleaseCallback&& release_callback);
/// Returns a vector of recently released buffers specified by tag for the specified stream
std::vector<Buffer::Tag> GetTagsAndReleaseBuffers(StreamPtr stream, std::size_t max_count);


@@ -8,6 +8,7 @@
#include "audio_core/codec.h"
#include "common/assert.h"
#include "common/logging/log.h"
#include "core/core.h"
#include "core/hle/kernel/writable_event.h"
#include "core/memory.h"
@@ -71,14 +72,14 @@ private:
EffectOutStatus out_status{};
EffectInStatus info{};
};
AudioRenderer::AudioRenderer(AudioRendererParameter params,
AudioRenderer::AudioRenderer(Core::Timing::CoreTiming& core_timing, AudioRendererParameter params,
Kernel::SharedPtr<Kernel::WritableEvent> buffer_event)
: worker_params{params}, buffer_event{buffer_event}, voices(params.voice_count),
effects(params.effect_count) {
audio_out = std::make_unique<AudioCore::AudioOut>();
stream = audio_out->OpenStream(STREAM_SAMPLE_RATE, STREAM_NUM_CHANNELS, "AudioRenderer",
[=]() { buffer_event->Signal(); });
stream = audio_out->OpenStream(core_timing, STREAM_SAMPLE_RATE, STREAM_NUM_CHANNELS,
"AudioRenderer", [=]() { buffer_event->Signal(); });
audio_out->StartStream(stream);
QueueMixedBuffer(0);


@@ -14,6 +14,10 @@
#include "common/swap.h"
#include "core/hle/kernel/object.h"
namespace Core::Timing {
class CoreTiming;
}
namespace Kernel {
class WritableEvent;
}
@@ -42,16 +46,18 @@ struct AudioRendererParameter {
u32_le sample_rate;
u32_le sample_count;
u32_le mix_buffer_count;
u32_le unknown_c;
u32_le submix_count;
u32_le voice_count;
u32_le sink_count;
u32_le effect_count;
u32_le unknown_1c;
u8 unknown_20;
INSERT_PADDING_BYTES(3);
u32_le performance_frame_count;
u8 is_voice_drop_enabled;
u8 unknown_21;
u8 unknown_22;
u8 execution_mode;
u32_le splitter_count;
u32_le unknown_2c;
INSERT_PADDING_WORDS(1);
u32_le num_splitter_send_channels;
u32_le unknown_30;
u32_le revision;
};
static_assert(sizeof(AudioRendererParameter) == 52, "AudioRendererParameter is an invalid size");
@@ -208,7 +214,7 @@ static_assert(sizeof(UpdateDataHeader) == 0x40, "UpdateDataHeader has wrong size
class AudioRenderer {
public:
AudioRenderer(AudioRendererParameter params,
AudioRenderer(Core::Timing::CoreTiming& core_timing, AudioRendererParameter params,
Kernel::SharedPtr<Kernel::WritableEvent> buffer_event);
~AudioRenderer();


@@ -21,7 +21,7 @@ public:
Buffer(Tag tag, std::vector<s16>&& samples) : tag{tag}, samples{std::move(samples)} {}
/// Returns the raw audio data for the buffer
std::vector<s16>& Samples() {
std::vector<s16>& GetSamples() {
return samples;
}


@@ -68,8 +68,8 @@ std::vector<s16> DecodeADPCM(const u8* const data, std::size_t size, const ADPCM
}
}
state.yn1 = yn1;
state.yn2 = yn2;
state.yn1 = static_cast<s16>(yn1);
state.yn2 = static_cast<s16>(yn2);
return ret;
}


@@ -12,6 +12,10 @@
#include "common/ring_buffer.h"
#include "core/settings.h"
#ifdef _MSC_VER
#include <objbase.h>
#endif
namespace AudioCore {
class CubebSinkStream final : public SinkStream {
@@ -46,7 +50,7 @@ public:
}
}
~CubebSinkStream() {
~CubebSinkStream() override {
if (!ctx) {
return;
}
@@ -75,11 +79,11 @@ public:
queue.Push(samples);
}
std::size_t SamplesInQueue(u32 num_channels) const override {
std::size_t SamplesInQueue(u32 channel_count) const override {
if (!ctx)
return 0;
return queue.Size() / num_channels;
return queue.Size() / channel_count;
}
void Flush() override {
@@ -98,7 +102,7 @@ private:
u32 num_channels{};
Common::RingBuffer<s16, 0x10000> queue;
std::array<s16, 2> last_frame;
std::array<s16, 2> last_frame{};
std::atomic<bool> should_flush{};
TimeStretcher time_stretch;
@@ -108,6 +112,11 @@ private:
};
CubebSink::CubebSink(std::string_view target_device_name) {
// Cubeb requires COM to be initialized on the thread calling cubeb_init on Windows
#ifdef _MSC_VER
com_init_result = CoInitializeEx(nullptr, COINIT_MULTITHREADED);
#endif
if (cubeb_init(&ctx, "yuzu", nullptr) != CUBEB_OK) {
LOG_CRITICAL(Audio_Sink, "cubeb_init failed");
return;
@@ -142,6 +151,12 @@ CubebSink::~CubebSink() {
}
cubeb_destroy(ctx);
#ifdef _MSC_VER
if (SUCCEEDED(com_init_result)) {
CoUninitialize();
}
#endif
}
SinkStream& CubebSink::AcquireSinkStream(u32 sample_rate, u32 num_channels,


@@ -25,6 +25,10 @@ private:
cubeb* ctx{};
cubeb_devid output_device{};
std::vector<SinkStreamPtr> sink_streams;
#ifdef _MSC_VER
u32 com_init_result = 0;
#endif
};
std::vector<std::string> ListCubebSinkDevices();


@@ -32,12 +32,12 @@ u32 Stream::GetNumChannels() const {
return {};
}
Stream::Stream(u32 sample_rate, Format format, ReleaseCallback&& release_callback,
SinkStream& sink_stream, std::string&& name_)
Stream::Stream(Core::Timing::CoreTiming& core_timing, u32 sample_rate, Format format,
ReleaseCallback&& release_callback, SinkStream& sink_stream, std::string&& name_)
: sample_rate{sample_rate}, format{format}, release_callback{std::move(release_callback)},
sink_stream{sink_stream}, name{std::move(name_)} {
sink_stream{sink_stream}, core_timing{core_timing}, name{std::move(name_)} {
release_event = Core::Timing::RegisterEvent(
release_event = core_timing.RegisterEvent(
name, [this](u64 userdata, int cycles_late) { ReleaseActiveBuffer(); });
}
@@ -95,12 +95,11 @@ void Stream::PlayNextBuffer() {
active_buffer = queued_buffers.front();
queued_buffers.pop();
VolumeAdjustSamples(active_buffer->Samples());
VolumeAdjustSamples(active_buffer->GetSamples());
sink_stream.EnqueueSamples(GetNumChannels(), active_buffer->GetSamples());
Core::Timing::ScheduleEventThreadsafe(GetBufferReleaseCycles(*active_buffer), release_event,
{});
core_timing.ScheduleEventThreadsafe(GetBufferReleaseCycles(*active_buffer), release_event, {});
}
void Stream::ReleaseActiveBuffer() {


@@ -14,8 +14,9 @@
#include "common/common_types.h"
namespace Core::Timing {
class CoreTiming;
struct EventType;
}
} // namespace Core::Timing
namespace AudioCore {
@@ -42,8 +43,8 @@ public:
/// Callback function type, used to change guest state on a buffer being released
using ReleaseCallback = std::function<void()>;
Stream(u32 sample_rate, Format format, ReleaseCallback&& release_callback,
SinkStream& sink_stream, std::string&& name_);
Stream(Core::Timing::CoreTiming& core_timing, u32 sample_rate, Format format,
ReleaseCallback&& release_callback, SinkStream& sink_stream, std::string&& name_);
/// Plays the audio stream
void Play();
@@ -100,6 +101,7 @@ private:
std::queue<BufferPtr> queued_buffers; ///< Buffers queued to be played in the stream
std::queue<BufferPtr> released_buffers; ///< Buffers recently released from the stream
SinkStream& sink_stream; ///< Output sink for the stream
Core::Timing::CoreTiming& core_timing; ///< Core timing instance.
std::string name; ///< Name of the stream, must be unique
};


@@ -55,36 +55,36 @@ constexpr u8 Convert8To6(u8 value) {
/**
* Decode a color stored in RGBA8 format
* @param bytes Pointer to encoded source color
* @return Result color decoded as Math::Vec4<u8>
* @return Result color decoded as Common::Vec4<u8>
*/
inline Math::Vec4<u8> DecodeRGBA8(const u8* bytes) {
inline Common::Vec4<u8> DecodeRGBA8(const u8* bytes) {
return {bytes[3], bytes[2], bytes[1], bytes[0]};
}
/**
* Decode a color stored in RGB8 format
* @param bytes Pointer to encoded source color
* @return Result color decoded as Math::Vec4<u8>
* @return Result color decoded as Common::Vec4<u8>
*/
inline Math::Vec4<u8> DecodeRGB8(const u8* bytes) {
inline Common::Vec4<u8> DecodeRGB8(const u8* bytes) {
return {bytes[2], bytes[1], bytes[0], 255};
}
/**
* Decode a color stored in RG8 (aka HILO8) format
* @param bytes Pointer to encoded source color
* @return Result color decoded as Math::Vec4<u8>
* @return Result color decoded as Common::Vec4<u8>
*/
inline Math::Vec4<u8> DecodeRG8(const u8* bytes) {
inline Common::Vec4<u8> DecodeRG8(const u8* bytes) {
return {bytes[1], bytes[0], 0, 255};
}
/**
* Decode a color stored in RGB565 format
* @param bytes Pointer to encoded source color
* @return Result color decoded as Math::Vec4<u8>
* @return Result color decoded as Common::Vec4<u8>
*/
inline Math::Vec4<u8> DecodeRGB565(const u8* bytes) {
inline Common::Vec4<u8> DecodeRGB565(const u8* bytes) {
u16_le pixel;
std::memcpy(&pixel, bytes, sizeof(pixel));
return {Convert5To8((pixel >> 11) & 0x1F), Convert6To8((pixel >> 5) & 0x3F),
@@ -94,9 +94,9 @@ inline Math::Vec4<u8> DecodeRGB565(const u8* bytes) {
/**
* Decode a color stored in RGB5A1 format
* @param bytes Pointer to encoded source color
* @return Result color decoded as Math::Vec4<u8>
* @return Result color decoded as Common::Vec4<u8>
*/
inline Math::Vec4<u8> DecodeRGB5A1(const u8* bytes) {
inline Common::Vec4<u8> DecodeRGB5A1(const u8* bytes) {
u16_le pixel;
std::memcpy(&pixel, bytes, sizeof(pixel));
return {Convert5To8((pixel >> 11) & 0x1F), Convert5To8((pixel >> 6) & 0x1F),
@@ -106,9 +106,9 @@ inline Math::Vec4<u8> DecodeRGB5A1(const u8* bytes) {
/**
* Decode a color stored in RGBA4 format
* @param bytes Pointer to encoded source color
* @return Result color decoded as Math::Vec4<u8>
* @return Result color decoded as Common::Vec4<u8>
*/
inline Math::Vec4<u8> DecodeRGBA4(const u8* bytes) {
inline Common::Vec4<u8> DecodeRGBA4(const u8* bytes) {
u16_le pixel;
std::memcpy(&pixel, bytes, sizeof(pixel));
return {Convert4To8((pixel >> 12) & 0xF), Convert4To8((pixel >> 8) & 0xF),
@@ -138,9 +138,9 @@ inline u32 DecodeD24(const u8* bytes) {
/**
* Decode a depth value and a stencil value stored in D24S8 format
* @param bytes Pointer to encoded source values
* @return Resulting values stored as a Math::Vec2
* @return Resulting values stored as a Common::Vec2
*/
inline Math::Vec2<u32> DecodeD24S8(const u8* bytes) {
inline Common::Vec2<u32> DecodeD24S8(const u8* bytes) {
return {static_cast<u32>((bytes[2] << 16) | (bytes[1] << 8) | bytes[0]), bytes[3]};
}
@@ -149,7 +149,7 @@ inline Math::Vec2<u32> DecodeD24S8(const u8* bytes) {
* @param color Source color to encode
* @param bytes Destination pointer to store encoded color
*/
inline void EncodeRGBA8(const Math::Vec4<u8>& color, u8* bytes) {
inline void EncodeRGBA8(const Common::Vec4<u8>& color, u8* bytes) {
bytes[3] = color.r();
bytes[2] = color.g();
bytes[1] = color.b();
@@ -161,7 +161,7 @@ inline void EncodeRGBA8(const Math::Vec4<u8>& color, u8* bytes) {
* @param color Source color to encode
* @param bytes Destination pointer to store encoded color
*/
inline void EncodeRGB8(const Math::Vec4<u8>& color, u8* bytes) {
inline void EncodeRGB8(const Common::Vec4<u8>& color, u8* bytes) {
bytes[2] = color.r();
bytes[1] = color.g();
bytes[0] = color.b();
@@ -172,7 +172,7 @@ inline void EncodeRGB8(const Math::Vec4<u8>& color, u8* bytes) {
* @param color Source color to encode
* @param bytes Destination pointer to store encoded color
*/
inline void EncodeRG8(const Math::Vec4<u8>& color, u8* bytes) {
inline void EncodeRG8(const Common::Vec4<u8>& color, u8* bytes) {
bytes[1] = color.r();
bytes[0] = color.g();
}
@@ -181,7 +181,7 @@ inline void EncodeRG8(const Math::Vec4<u8>& color, u8* bytes) {
* @param color Source color to encode
* @param bytes Destination pointer to store encoded color
*/
inline void EncodeRGB565(const Math::Vec4<u8>& color, u8* bytes) {
inline void EncodeRGB565(const Common::Vec4<u8>& color, u8* bytes) {
const u16_le data =
(Convert8To5(color.r()) << 11) | (Convert8To6(color.g()) << 5) | Convert8To5(color.b());
@@ -193,7 +193,7 @@ inline void EncodeRGB565(const Math::Vec4<u8>& color, u8* bytes) {
* @param color Source color to encode
* @param bytes Destination pointer to store encoded color
*/
inline void EncodeRGB5A1(const Math::Vec4<u8>& color, u8* bytes) {
inline void EncodeRGB5A1(const Common::Vec4<u8>& color, u8* bytes) {
const u16_le data = (Convert8To5(color.r()) << 11) | (Convert8To5(color.g()) << 6) |
(Convert8To5(color.b()) << 1) | Convert8To1(color.a());
@@ -205,7 +205,7 @@ inline void EncodeRGB5A1(const Math::Vec4<u8>& color, u8* bytes) {
* @param color Source color to encode
* @param bytes Destination pointer to store encoded color
*/
inline void EncodeRGBA4(const Math::Vec4<u8>& color, u8* bytes) {
inline void EncodeRGBA4(const Common::Vec4<u8>& color, u8* bytes) {
const u16 data = (Convert8To4(color.r()) << 12) | (Convert8To4(color.g()) << 8) |
(Convert8To4(color.b()) << 4) | Convert8To4(color.a());


@@ -39,10 +39,10 @@ public:
Impl(Impl const&) = delete;
const Impl& operator=(Impl const&) = delete;
void PushEntry(Entry e) {
std::lock_guard<std::mutex> lock(message_mutex);
message_queue.Push(std::move(e));
message_cv.notify_one();
void PushEntry(Class log_class, Level log_level, const char* filename, unsigned int line_num,
const char* function, std::string message) {
message_queue.Push(
CreateEntry(log_class, log_level, filename, line_num, function, std::move(message)));
}
void AddBackend(std::unique_ptr<Backend> backend) {
@@ -86,15 +86,13 @@ private:
}
};
while (true) {
{
std::unique_lock<std::mutex> lock(message_mutex);
message_cv.wait(lock, [&] { return !running || message_queue.Pop(entry); });
}
if (!running) {
entry = message_queue.PopWait();
if (entry.final_entry) {
break;
}
write_logs(entry);
}
// Drain the logging queue. Only writes out up to MAX_LOGS_TO_WRITE to prevent a case
// where a system is repeatedly spamming logs even on close.
const int MAX_LOGS_TO_WRITE = filter.IsDebug() ? INT_MAX : 100;
@@ -106,18 +104,36 @@ private:
}
~Impl() {
running = false;
message_cv.notify_one();
Entry entry;
entry.final_entry = true;
message_queue.Push(entry);
backend_thread.join();
}
std::atomic_bool running{true};
std::mutex message_mutex, writing_mutex;
std::condition_variable message_cv;
Entry CreateEntry(Class log_class, Level log_level, const char* filename, unsigned int line_nr,
const char* function, std::string message) const {
using std::chrono::duration_cast;
using std::chrono::steady_clock;
Entry entry;
entry.timestamp =
duration_cast<std::chrono::microseconds>(steady_clock::now() - time_origin);
entry.log_class = log_class;
entry.log_level = log_level;
entry.filename = Common::TrimSourcePath(filename);
entry.line_num = line_nr;
entry.function = function;
entry.message = std::move(message);
return entry;
}
std::mutex writing_mutex;
std::thread backend_thread;
std::vector<std::unique_ptr<Backend>> backends;
Common::MPSCQueue<Log::Entry> message_queue;
Filter filter;
std::chrono::steady_clock::time_point time_origin{std::chrono::steady_clock::now()};
};
void ConsoleBackend::Write(const Entry& entry) {
@@ -232,6 +248,7 @@ void DebuggerBackend::Write(const Entry& entry) {
CLS(Render) \
SUB(Render, Software) \
SUB(Render, OpenGL) \
SUB(Render, Vulkan) \
CLS(Audio) \
SUB(Audio, DSP) \
SUB(Audio, Sink) \
@@ -275,25 +292,6 @@ const char* GetLevelName(Level log_level) {
#undef LVL
}
Entry CreateEntry(Class log_class, Level log_level, const char* filename, unsigned int line_nr,
const char* function, std::string message) {
using std::chrono::duration_cast;
using std::chrono::steady_clock;
static steady_clock::time_point time_origin = steady_clock::now();
Entry entry;
entry.timestamp = duration_cast<std::chrono::microseconds>(steady_clock::now() - time_origin);
entry.log_class = log_class;
entry.log_level = log_level;
entry.filename = Common::TrimSourcePath(filename);
entry.line_num = line_nr;
entry.function = function;
entry.message = std::move(message);
return entry;
}
void SetGlobalFilter(const Filter& filter) {
Impl::Instance().SetGlobalFilter(filter);
}
@@ -318,9 +316,7 @@ void FmtLogMessageImpl(Class log_class, Level log_level, const char* filename,
if (!filter.CheckMessage(log_class, log_level))
return;
Entry entry =
CreateEntry(log_class, log_level, filename, line_num, function, fmt::vformat(format, args));
instance.PushEntry(std::move(entry));
instance.PushEntry(log_class, log_level, filename, line_num, function,
fmt::vformat(format, args));
}
} // namespace Log


@@ -27,6 +27,7 @@ struct Entry {
unsigned int line_num;
std::string function;
std::string message;
bool final_entry = false;
Entry() = default;
Entry(Entry&& o) = default;
@@ -134,10 +135,6 @@ const char* GetLogClassName(Class log_class);
*/
const char* GetLevelName(Level log_level);
/// Creates a log entry by formatting the given source location, and message.
Entry CreateEntry(Class log_class, Level log_level, const char* filename, unsigned int line_nr,
const char* function, std::string message);
/**
* The global filter will prevent any messages from even being processed if they are filtered. Each
* backend can have a filter, but if the level is lower than the global filter, the backend will


@@ -112,6 +112,7 @@ enum class Class : ClassType {
Render, ///< Emulator video output and hardware acceleration
Render_Software, ///< Software renderer backend
Render_OpenGL, ///< OpenGL backend
Render_Vulkan, ///< Vulkan backend
Audio, ///< Audio emulation
Audio_DSP, ///< The HLE implementation of the DSP
Audio_Sink, ///< Emulator audio output backend


@@ -7,7 +7,7 @@
#include <cstdlib>
#include <type_traits>
namespace MathUtil {
namespace Common {
constexpr float PI = 3.14159265f;
@@ -41,4 +41,4 @@ struct Rectangle {
}
};
} // namespace MathUtil
} // namespace Common


@@ -6,12 +6,12 @@
#include "common/vector_math.h"
namespace Math {
namespace Common {
template <typename T>
class Quaternion {
public:
Math::Vec3<T> xyz;
Vec3<T> xyz;
T w{};
Quaternion<decltype(-T{})> Inverse() const {
@@ -38,12 +38,12 @@ public:
};
template <typename T>
auto QuaternionRotate(const Quaternion<T>& q, const Math::Vec3<T>& v) {
auto QuaternionRotate(const Quaternion<T>& q, const Vec3<T>& v) {
return v + 2 * Cross(q.xyz, Cross(q.xyz, v) + v * q.w);
}
inline Quaternion<float> MakeQuaternion(const Math::Vec3<float>& axis, float angle) {
inline Quaternion<float> MakeQuaternion(const Vec3<float>& axis, float angle) {
return {axis * std::sin(angle / 2), std::cos(angle / 2)};
}
} // namespace Math
} // namespace Common


@@ -28,8 +28,8 @@
#include <cstring>
#include "common/common_types.h"
// GCC 4.6+
#if __GNUC__ >= 5 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)
// GCC
#ifdef __GNUC__
#if __BYTE_ORDER__ && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__) && !defined(COMMON_LITTLE_ENDIAN)
#define COMMON_LITTLE_ENDIAN 1
@@ -38,7 +38,7 @@
#endif
// LLVM/clang
#elif __clang__
#elif defined(__clang__)
#if __LITTLE_ENDIAN__ && !defined(COMMON_LITTLE_ENDIAN)
#define COMMON_LITTLE_ENDIAN 1


@@ -7,17 +7,17 @@
// a simple lockless thread-safe,
// single reader, single writer queue
#include <algorithm>
#include <atomic>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include "common/common_types.h"
#include <utility>
namespace Common {
template <typename T, bool NeedSize = true>
template <typename T>
class SPSCQueue {
public:
SPSCQueue() : size(0) {
SPSCQueue() {
write_ptr = read_ptr = new ElementPtr();
}
~SPSCQueue() {
@@ -25,13 +25,12 @@ public:
delete read_ptr;
}
u32 Size() const {
static_assert(NeedSize, "using Size() on FifoQueue without NeedSize");
std::size_t Size() const {
return size.load();
}
bool Empty() const {
return !read_ptr->next.load();
return Size() == 0;
}
T& Front() const {
@@ -47,13 +46,14 @@ public:
ElementPtr* new_ptr = new ElementPtr();
write_ptr->next.store(new_ptr, std::memory_order_release);
write_ptr = new_ptr;
if (NeedSize)
size++;
cv.notify_one();
++size;
}
void Pop() {
if (NeedSize)
size--;
--size;
ElementPtr* tmpptr = read_ptr;
// advance the read pointer
read_ptr = tmpptr->next.load();
@@ -66,8 +66,7 @@ public:
if (Empty())
return false;
if (NeedSize)
size--;
--size;
ElementPtr* tmpptr = read_ptr;
read_ptr = tmpptr->next.load(std::memory_order_acquire);
@@ -77,6 +76,16 @@ public:
return true;
}
T PopWait() {
if (Empty()) {
std::unique_lock<std::mutex> lock(cv_mutex);
cv.wait(lock, [this]() { return !Empty(); });
}
T t;
Pop(t);
return t;
}
// not thread-safe
void Clear() {
size.store(0);
@@ -89,7 +98,7 @@ private:
// and a pointer to the next ElementPtr
class ElementPtr {
public:
ElementPtr() : next(nullptr) {}
ElementPtr() {}
~ElementPtr() {
ElementPtr* next_ptr = next.load();
@@ -98,21 +107,23 @@ private:
}
T current;
std::atomic<ElementPtr*> next;
std::atomic<ElementPtr*> next{nullptr};
};
ElementPtr* write_ptr;
ElementPtr* read_ptr;
std::atomic<u32> size;
std::atomic_size_t size{0};
std::mutex cv_mutex;
std::condition_variable cv;
};
// a simple thread-safe,
// single reader, multiple writer queue
template <typename T, bool NeedSize = true>
template <typename T>
class MPSCQueue {
public:
u32 Size() const {
std::size_t Size() const {
return spsc_queue.Size();
}
@@ -138,13 +149,17 @@ public:
return spsc_queue.Pop(t);
}
T PopWait() {
return spsc_queue.PopWait();
}
// not thread-safe
void Clear() {
spsc_queue.Clear();
}
private:
SPSCQueue<T, NeedSize> spsc_queue;
SPSCQueue<T> spsc_queue;
std::mutex write_lock;
};
} // namespace Common


@@ -33,7 +33,7 @@
#include <cmath>
#include <type_traits>
namespace Math {
namespace Common {
template <typename T>
class Vec2;
@@ -690,4 +690,4 @@ constexpr Vec4<T> MakeVec(const T& x, const Vec3<T>& yzw) {
return MakeVec(x, yzw[0], yzw[1], yzw[2]);
}
} // namespace Math
} // namespace Common


@@ -217,6 +217,7 @@ add_library(core STATIC
hle/service/audio/audren_u.h
hle/service/audio/codecctl.cpp
hle/service/audio/codecctl.h
hle/service/audio/errors.h
hle/service/audio/hwopus.cpp
hle/service/audio/hwopus.h
hle/service/bcat/bcat.cpp
@@ -400,6 +401,10 @@ add_library(core STATIC
hle/service/time/time.h
hle/service/usb/usb.cpp
hle/service/usb/usb.h
hle/service/vi/display/vi_display.cpp
hle/service/vi/display/vi_display.h
hle/service/vi/layer/vi_layer.cpp
hle/service/vi/layer/vi_layer.h
hle/service/vi/vi.cpp
hle/service/vi/vi.h
hle/service/vi/vi_m.cpp


@@ -112,14 +112,14 @@ public:
// Always execute at least one tick.
amortized_ticks = std::max<u64>(amortized_ticks, 1);
Timing::AddTicks(amortized_ticks);
parent.core_timing.AddTicks(amortized_ticks);
num_interpreted_instructions = 0;
}
u64 GetTicksRemaining() override {
return std::max(Timing::GetDowncount(), 0);
return std::max(parent.core_timing.GetDowncount(), 0);
}
u64 GetCNTPCT() override {
return Timing::GetTicks();
return parent.core_timing.GetTicks();
}
ARM_Dynarmic& parent;
@@ -172,8 +172,10 @@ void ARM_Dynarmic::Step() {
cb->InterpreterFallback(jit->GetPC(), 1);
}
ARM_Dynarmic::ARM_Dynarmic(ExclusiveMonitor& exclusive_monitor, std::size_t core_index)
: cb(std::make_unique<ARM_Dynarmic_Callbacks>(*this)), core_index{core_index},
ARM_Dynarmic::ARM_Dynarmic(Timing::CoreTiming& core_timing, ExclusiveMonitor& exclusive_monitor,
std::size_t core_index)
: cb(std::make_unique<ARM_Dynarmic_Callbacks>(*this)), inner_unicorn{core_timing},
core_index{core_index}, core_timing{core_timing},
exclusive_monitor{dynamic_cast<DynarmicExclusiveMonitor&>(exclusive_monitor)} {
ThreadContext ctx{};
inner_unicorn.SaveContext(ctx);

View File

@@ -16,6 +16,10 @@ namespace Memory {
struct PageTable;
}
namespace Core::Timing {
class CoreTiming;
}
namespace Core {
class ARM_Dynarmic_Callbacks;
@@ -23,7 +27,8 @@ class DynarmicExclusiveMonitor;
class ARM_Dynarmic final : public ARM_Interface {
public:
ARM_Dynarmic(ExclusiveMonitor& exclusive_monitor, std::size_t core_index);
ARM_Dynarmic(Timing::CoreTiming& core_timing, ExclusiveMonitor& exclusive_monitor,
std::size_t core_index);
~ARM_Dynarmic();
void MapBackingMemory(VAddr address, std::size_t size, u8* memory,
@@ -62,6 +67,7 @@ private:
ARM_Unicorn inner_unicorn;
std::size_t core_index;
Timing::CoreTiming& core_timing;
DynarmicExclusiveMonitor& exclusive_monitor;
Memory::PageTable* current_page_table = nullptr;

View File

@@ -72,7 +72,7 @@ static bool UnmappedMemoryHook(uc_engine* uc, uc_mem_type type, u64 addr, int si
return {};
}
ARM_Unicorn::ARM_Unicorn() {
ARM_Unicorn::ARM_Unicorn(Timing::CoreTiming& core_timing) : core_timing{core_timing} {
CHECKED(uc_open(UC_ARCH_ARM64, UC_MODE_ARM, &uc));
auto fpv = 3 << 20;
@@ -177,7 +177,7 @@ void ARM_Unicorn::Run() {
if (GDBStub::IsServerEnabled()) {
ExecuteInstructions(std::max(4000000, 0));
} else {
ExecuteInstructions(std::max(Timing::GetDowncount(), 0));
ExecuteInstructions(std::max(core_timing.GetDowncount(), 0));
}
}
@@ -190,7 +190,7 @@ MICROPROFILE_DEFINE(ARM_Jit_Unicorn, "ARM JIT", "Unicorn", MP_RGB(255, 64, 64));
void ARM_Unicorn::ExecuteInstructions(int num_instructions) {
MICROPROFILE_SCOPE(ARM_Jit_Unicorn);
CHECKED(uc_emu_start(uc, GetPC(), 1ULL << 63, 0, num_instructions));
Timing::AddTicks(num_instructions);
core_timing.AddTicks(num_instructions);
if (GDBStub::IsServerEnabled()) {
if (last_bkpt_hit) {
uc_reg_write(uc, UC_ARM64_REG_PC, &last_bkpt.address);

View File

@@ -9,12 +9,17 @@
#include "core/arm/arm_interface.h"
#include "core/gdbstub/gdbstub.h"
namespace Core::Timing {
class CoreTiming;
}
namespace Core {
class ARM_Unicorn final : public ARM_Interface {
public:
ARM_Unicorn();
explicit ARM_Unicorn(Timing::CoreTiming& core_timing);
~ARM_Unicorn();
void MapBackingMemory(VAddr address, std::size_t size, u8* memory,
Kernel::VMAPermission perms) override;
void UnmapMemory(VAddr address, std::size_t size) override;
@@ -43,6 +48,7 @@ public:
private:
uc_engine* uc{};
Timing::CoreTiming& core_timing;
GDBStub::BreakpointAddress last_bkpt{};
bool last_bkpt_hit;
};

View File

@@ -94,8 +94,8 @@ struct System::Impl {
ResultStatus Init(System& system, Frontend::EmuWindow& emu_window) {
LOG_DEBUG(HW_Memory, "initialized OK");
Timing::Init();
kernel.Initialize();
core_timing.Initialize();
kernel.Initialize(core_timing);
const auto current_time = std::chrono::duration_cast<std::chrono::seconds>(
std::chrono::system_clock::now().time_since_epoch());
@@ -120,7 +120,7 @@ struct System::Impl {
telemetry_session = std::make_unique<Core::TelemetrySession>();
service_manager = std::make_shared<Service::SM::ServiceManager>();
Service::Init(service_manager, *virtual_filesystem);
Service::Init(service_manager, system, *virtual_filesystem);
GDBStub::Init();
renderer = VideoCore::CreateRenderer(emu_window, system);
@@ -128,7 +128,7 @@ struct System::Impl {
return ResultStatus::ErrorVideoCore;
}
gpu_core = std::make_unique<Tegra::GPU>(renderer->Rasterizer());
gpu_core = std::make_unique<Tegra::GPU>(system, renderer->Rasterizer());
cpu_core_manager.Initialize(system);
is_powered_on = true;
@@ -205,7 +205,7 @@ struct System::Impl {
// Shutdown kernel and core timing
kernel.Shutdown();
Timing::Shutdown();
core_timing.Shutdown();
// Close app loader
app_loader.reset();
@@ -232,9 +232,10 @@ struct System::Impl {
}
PerfStatsResults GetAndResetPerfStats() {
return perf_stats.GetAndResetStats(Timing::GetGlobalTimeUs());
return perf_stats.GetAndResetStats(core_timing.GetGlobalTimeUs());
}
Timing::CoreTiming core_timing;
Kernel::KernelCore kernel;
/// RealVfsFilesystem instance
FileSys::VirtualFilesystem virtual_filesystem;
@@ -396,6 +397,14 @@ const Kernel::KernelCore& System::Kernel() const {
return impl->kernel;
}
Timing::CoreTiming& System::CoreTiming() {
return impl->core_timing;
}
const Timing::CoreTiming& System::CoreTiming() const {
return impl->core_timing;
}
Core::PerfStats& System::GetPerfStats() {
return impl->perf_stats;
}
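With these accessors, callers obtain the timing instance through the System object instead of the old Core::Timing free functions, e.g.:

auto& core_timing = Core::System::GetInstance().CoreTiming();
const u64 now = core_timing.GetTicks();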

View File

@@ -47,6 +47,10 @@ namespace VideoCore {
class RendererBase;
} // namespace VideoCore
namespace Core::Timing {
class CoreTiming;
}
namespace Core {
class ARM_Interface;
@@ -205,6 +209,12 @@ public:
/// Provides a constant pointer to the current process.
const Kernel::Process* CurrentProcess() const;
/// Provides a reference to the core timing instance.
Timing::CoreTiming& CoreTiming();
/// Provides a constant reference to the core timing instance.
const Timing::CoreTiming& CoreTiming() const;
/// Provides a reference to the kernel instance.
Kernel::KernelCore& Kernel();

View File

@@ -49,17 +49,18 @@ bool CpuBarrier::Rendezvous() {
return false;
}
Cpu::Cpu(ExclusiveMonitor& exclusive_monitor, CpuBarrier& cpu_barrier, std::size_t core_index)
: cpu_barrier{cpu_barrier}, core_index{core_index} {
Cpu::Cpu(Timing::CoreTiming& core_timing, ExclusiveMonitor& exclusive_monitor,
CpuBarrier& cpu_barrier, std::size_t core_index)
: cpu_barrier{cpu_barrier}, core_timing{core_timing}, core_index{core_index} {
if (Settings::values.use_cpu_jit) {
#ifdef ARCHITECTURE_x86_64
arm_interface = std::make_unique<ARM_Dynarmic>(exclusive_monitor, core_index);
arm_interface = std::make_unique<ARM_Dynarmic>(core_timing, exclusive_monitor, core_index);
#else
arm_interface = std::make_unique<ARM_Unicorn>();
LOG_WARNING(Core, "CPU JIT requested, but Dynarmic not available");
#endif
} else {
arm_interface = std::make_unique<ARM_Unicorn>();
arm_interface = std::make_unique<ARM_Unicorn>(core_timing);
}
scheduler = std::make_unique<Kernel::Scheduler>(*arm_interface);
@@ -93,14 +94,14 @@ void Cpu::RunLoop(bool tight_loop) {
if (IsMainCore()) {
// TODO(Subv): Only let CoreTiming idle if all 4 cores are idling.
Timing::Idle();
Timing::Advance();
core_timing.Idle();
core_timing.Advance();
}
PrepareReschedule();
} else {
if (IsMainCore()) {
Timing::Advance();
core_timing.Advance();
}
if (tight_loop) {

View File

@@ -15,6 +15,10 @@ namespace Kernel {
class Scheduler;
}
namespace Core::Timing {
class CoreTiming;
}
namespace Core {
class ARM_Interface;
@@ -41,7 +45,8 @@ private:
class Cpu {
public:
Cpu(ExclusiveMonitor& exclusive_monitor, CpuBarrier& cpu_barrier, std::size_t core_index);
Cpu(Timing::CoreTiming& core_timing, ExclusiveMonitor& exclusive_monitor,
CpuBarrier& cpu_barrier, std::size_t core_index);
~Cpu();
void RunLoop(bool tight_loop = true);
@@ -82,6 +87,7 @@ private:
std::unique_ptr<ARM_Interface> arm_interface;
CpuBarrier& cpu_barrier;
std::unique_ptr<Kernel::Scheduler> scheduler;
Timing::CoreTiming& core_timing;
std::atomic<bool> reschedule_pending = false;
std::size_t core_index;

View File

@@ -8,71 +8,60 @@
#include <mutex>
#include <string>
#include <tuple>
#include <unordered_map>
#include <vector>
#include "common/assert.h"
#include "common/thread.h"
#include "common/threadsafe_queue.h"
#include "core/core_timing_util.h"
namespace Core::Timing {
static s64 global_timer;
static int slice_length;
static int downcount;
constexpr int MAX_SLICE_LENGTH = 20000;
struct EventType {
TimedCallback callback;
const std::string* name;
};
struct Event {
struct CoreTiming::Event {
s64 time;
u64 fifo_order;
u64 userdata;
const EventType* type;
// Sort by time, unless the times are the same, in which case sort by
// the order added to the queue
friend bool operator>(const Event& left, const Event& right) {
return std::tie(left.time, left.fifo_order) > std::tie(right.time, right.fifo_order);
}
friend bool operator<(const Event& left, const Event& right) {
return std::tie(left.time, left.fifo_order) < std::tie(right.time, right.fifo_order);
}
};
// Sort by time, unless the times are the same, in which case sort by the order added to the queue
static bool operator>(const Event& left, const Event& right) {
return std::tie(left.time, left.fifo_order) > std::tie(right.time, right.fifo_order);
CoreTiming::CoreTiming() = default;
CoreTiming::~CoreTiming() = default;
void CoreTiming::Initialize() {
downcount = MAX_SLICE_LENGTH;
slice_length = MAX_SLICE_LENGTH;
global_timer = 0;
idled_cycles = 0;
// The time between CoreTiming being initialized and the first call to Advance() is considered
// the slice boundary between slice -1 and slice 0. Dispatcher loops must call Advance() before
// executing the first cycle of each slice to prepare the slice length and downcount for
// that slice.
is_global_timer_sane = true;
event_fifo_id = 0;
const auto empty_timed_callback = [](u64, s64) {};
ev_lost = RegisterEvent("_lost_event", empty_timed_callback);
}
static bool operator<(const Event& left, const Event& right) {
return std::tie(left.time, left.fifo_order) < std::tie(right.time, right.fifo_order);
void CoreTiming::Shutdown() {
MoveEvents();
ClearPendingEvents();
UnregisterAllEvents();
}
// unordered_map stores each element separately as a linked list node so pointers to elements
// remain stable regardless of rehashes/resizing.
static std::unordered_map<std::string, EventType> event_types;
// The queue is a min-heap using std::make_heap/push_heap/pop_heap.
// We don't use std::priority_queue because we need to be able to serialize, unserialize and
// erase arbitrary events (RemoveEvent()) regardless of the queue order. These aren't accommodated
// by the standard adaptor class.
static std::vector<Event> event_queue;
static u64 event_fifo_id;
// the queue for thread-safely storing events from other threads until they are added
// to the event_queue by the emu thread
static Common::MPSCQueue<Event, false> ts_queue;
// the queue for thread-safely unscheduling events from other threads
static Common::MPSCQueue<std::pair<const EventType*, u64>, false> unschedule_queue;
constexpr int MAX_SLICE_LENGTH = 20000;
static s64 idled_cycles;
// Are we in a function that has been called from Advance()
// If events are scheduled from a function that gets called from Advance(),
// don't change slice_length and downcount.
static bool is_global_timer_sane;
static EventType* ev_lost = nullptr;
static void EmptyTimedCallback(u64 userdata, s64 cyclesLate) {}
EventType* RegisterEvent(const std::string& name, TimedCallback callback) {
EventType* CoreTiming::RegisterEvent(const std::string& name, TimedCallback callback) {
// check for existing type with same name.
// we want event type names to remain unique so that we can use them for serialization.
ASSERT_MSG(event_types.find(name) == event_types.end(),
@@ -86,71 +75,31 @@ EventType* RegisterEvent(const std::string& name, TimedCallback callback) {
return event_type;
}
void UnregisterAllEvents() {
void CoreTiming::UnregisterAllEvents() {
ASSERT_MSG(event_queue.empty(), "Cannot unregister events with events pending");
event_types.clear();
}
void Init() {
downcount = MAX_SLICE_LENGTH;
slice_length = MAX_SLICE_LENGTH;
global_timer = 0;
idled_cycles = 0;
// The time between CoreTiming being initialized and the first call to Advance() is considered
// the slice boundary between slice -1 and slice 0. Dispatcher loops must call Advance() before
// executing the first cycle of each slice to prepare the slice length and downcount for
// that slice.
is_global_timer_sane = true;
event_fifo_id = 0;
ev_lost = RegisterEvent("_lost_event", &EmptyTimedCallback);
}
void Shutdown() {
MoveEvents();
ClearPendingEvents();
UnregisterAllEvents();
}
// This should only be called from the CPU thread. If you are calling
// it from any other thread, you are doing something evil
u64 GetTicks() {
u64 ticks = static_cast<u64>(global_timer);
if (!is_global_timer_sane) {
ticks += slice_length - downcount;
}
return ticks;
}
void AddTicks(u64 ticks) {
downcount -= static_cast<int>(ticks);
}
u64 GetIdleTicks() {
return static_cast<u64>(idled_cycles);
}
void ClearPendingEvents() {
event_queue.clear();
}
void ScheduleEvent(s64 cycles_into_future, const EventType* event_type, u64 userdata) {
void CoreTiming::ScheduleEvent(s64 cycles_into_future, const EventType* event_type, u64 userdata) {
ASSERT(event_type != nullptr);
s64 timeout = GetTicks() + cycles_into_future;
const s64 timeout = GetTicks() + cycles_into_future;
// If this event needs to be scheduled before the next advance(), force one early
if (!is_global_timer_sane)
if (!is_global_timer_sane) {
ForceExceptionCheck(cycles_into_future);
}
event_queue.emplace_back(Event{timeout, event_fifo_id++, userdata, event_type});
std::push_heap(event_queue.begin(), event_queue.end(), std::greater<>());
}
void ScheduleEventThreadsafe(s64 cycles_into_future, const EventType* event_type, u64 userdata) {
void CoreTiming::ScheduleEventThreadsafe(s64 cycles_into_future, const EventType* event_type,
u64 userdata) {
ts_queue.Push(Event{global_timer + cycles_into_future, 0, userdata, event_type});
}
void UnscheduleEvent(const EventType* event_type, u64 userdata) {
auto itr = std::remove_if(event_queue.begin(), event_queue.end(), [&](const Event& e) {
void CoreTiming::UnscheduleEvent(const EventType* event_type, u64 userdata) {
const auto itr = std::remove_if(event_queue.begin(), event_queue.end(), [&](const Event& e) {
return e.type == event_type && e.userdata == userdata;
});
@@ -161,13 +110,33 @@ void UnscheduleEvent(const EventType* event_type, u64 userdata) {
}
}
void UnscheduleEventThreadsafe(const EventType* event_type, u64 userdata) {
void CoreTiming::UnscheduleEventThreadsafe(const EventType* event_type, u64 userdata) {
unschedule_queue.Push(std::make_pair(event_type, userdata));
}
void RemoveEvent(const EventType* event_type) {
auto itr = std::remove_if(event_queue.begin(), event_queue.end(),
[&](const Event& e) { return e.type == event_type; });
u64 CoreTiming::GetTicks() const {
u64 ticks = static_cast<u64>(global_timer);
if (!is_global_timer_sane) {
ticks += slice_length - downcount;
}
return ticks;
}
u64 CoreTiming::GetIdleTicks() const {
return static_cast<u64>(idled_cycles);
}
void CoreTiming::AddTicks(u64 ticks) {
downcount -= static_cast<int>(ticks);
}
void CoreTiming::ClearPendingEvents() {
event_queue.clear();
}
void CoreTiming::RemoveEvent(const EventType* event_type) {
const auto itr = std::remove_if(event_queue.begin(), event_queue.end(),
[&](const Event& e) { return e.type == event_type; });
// Removing random items breaks the invariant so we have to re-establish it.
if (itr != event_queue.end()) {
@@ -176,22 +145,24 @@ void RemoveEvent(const EventType* event_type) {
}
}
void RemoveNormalAndThreadsafeEvent(const EventType* event_type) {
void CoreTiming::RemoveNormalAndThreadsafeEvent(const EventType* event_type) {
MoveEvents();
RemoveEvent(event_type);
}
void ForceExceptionCheck(s64 cycles) {
void CoreTiming::ForceExceptionCheck(s64 cycles) {
cycles = std::max<s64>(0, cycles);
if (downcount > cycles) {
// downcount is always (much) smaller than MAX_INT so we can safely cast cycles to an int
// here. Account for cycles already executed by adjusting the g.slice_length
slice_length -= downcount - static_cast<int>(cycles);
downcount = static_cast<int>(cycles);
if (downcount <= cycles) {
return;
}
// downcount is always (much) smaller than MAX_INT so we can safely cast cycles to an int
// here. Account for cycles already executed by adjusting the g.slice_length
slice_length -= downcount - static_cast<int>(cycles);
downcount = static_cast<int>(cycles);
}
void MoveEvents() {
void CoreTiming::MoveEvents() {
for (Event ev; ts_queue.Pop(ev);) {
ev.fifo_order = event_fifo_id++;
event_queue.emplace_back(std::move(ev));
@@ -199,13 +170,13 @@ void MoveEvents() {
}
}
void Advance() {
void CoreTiming::Advance() {
MoveEvents();
for (std::pair<const EventType*, u64> ev; unschedule_queue.Pop(ev);) {
UnscheduleEvent(ev.first, ev.second);
}
int cycles_executed = slice_length - downcount;
const int cycles_executed = slice_length - downcount;
global_timer += cycles_executed;
slice_length = MAX_SLICE_LENGTH;
@@ -229,16 +200,16 @@ void Advance() {
downcount = slice_length;
}
void Idle() {
void CoreTiming::Idle() {
idled_cycles += downcount;
downcount = 0;
}
std::chrono::microseconds GetGlobalTimeUs() {
std::chrono::microseconds CoreTiming::GetGlobalTimeUs() const {
return std::chrono::microseconds{GetTicks() * 1000000 / BASE_CLOCK_RATE};
}
int GetDowncount() {
int CoreTiming::GetDowncount() const {
return downcount;
}
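To make the ForceExceptionCheck() arithmetic concrete, a worked sketch with illustrative numbers:

int slice_length = 20000; // MAX_SLICE_LENGTH
int downcount = 5000;     // cycles left in the current slice
const s64 cycles = 1000;  // an event is due 1000 cycles from now
if (downcount > cycles) {
    // Shrink the slice by the cycles being skipped past, so the next
    // Advance() happens soon enough to dispatch the event.
    slice_length -= downcount - static_cast<int>(cycles); // 20000 - 4000 = 16000
    downcount = static_cast<int>(cycles);                 // 1000
}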

View File

@@ -4,6 +4,27 @@
#pragma once
#include <chrono>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>
#include "common/common_types.h"
#include "common/threadsafe_queue.h"
namespace Core::Timing {
/// A callback that may be scheduled for a particular core timing event.
using TimedCallback = std::function<void(u64 userdata, int cycles_late)>;
/// Contains the characteristics of a particular event.
struct EventType {
/// The event's callback function.
TimedCallback callback;
/// A pointer to the name of the event.
const std::string* name;
};
/**
* This is a system to schedule events into the emulated machine's future. Time is measured
* in main CPU clock cycles.
@@ -16,80 +37,120 @@
* inside callback:
* ScheduleEvent(periodInCycles - cyclesLate, callback, "whatever")
*/
class CoreTiming {
public:
CoreTiming();
~CoreTiming();
#include <chrono>
#include <functional>
#include <string>
#include "common/common_types.h"
CoreTiming(const CoreTiming&) = delete;
CoreTiming(CoreTiming&&) = delete;
namespace Core::Timing {
CoreTiming& operator=(const CoreTiming&) = delete;
CoreTiming& operator=(CoreTiming&&) = delete;
struct EventType;
/// CoreTiming begins at the boundary of timing slice -1. An initial call to Advance() is
/// required to end slice -1 and start slice 0 before the first cycle of code is executed.
void Initialize();
using TimedCallback = std::function<void(u64 userdata, int cycles_late)>;
/// Tears down all timing related functionality.
void Shutdown();
/**
* CoreTiming begins at the boundary of timing slice -1. An initial call to Advance() is
* required to end slice -1 and start slice 0 before the first cycle of code is executed.
*/
void Init();
void Shutdown();
/// Registers a core timing event with the given name and callback.
///
/// @param name The name of the core timing event to register.
/// @param callback The callback to execute for the event.
///
/// @returns An EventType instance representing the registered event.
///
/// @pre The name of the event being registered must be unique among all
/// registered events.
///
EventType* RegisterEvent(const std::string& name, TimedCallback callback);
/**
* This should only be called from the emu thread; if you are calling it from any other thread,
* you are doing something evil
*/
u64 GetTicks();
u64 GetIdleTicks();
void AddTicks(u64 ticks);
/// Unregisters all registered events thus far.
void UnregisterAllEvents();
/**
* Returns the event_type identifier. If the name is not unique, it will assert.
*/
EventType* RegisterEvent(const std::string& name, TimedCallback callback);
void UnregisterAllEvents();
/// After the first Advance, the slice lengths and the downcount will be reduced whenever an
/// event is scheduled earlier than the current values.
///
/// Scheduling from a callback will not update the downcount until the Advance() completes.
void ScheduleEvent(s64 cycles_into_future, const EventType* event_type, u64 userdata = 0);
/**
* After the first Advance, the slice lengths and the downcount will be reduced whenever an event
* is scheduled earlier than the current values.
* Scheduling from a callback will not update the downcount until the Advance() completes.
*/
void ScheduleEvent(s64 cycles_into_future, const EventType* event_type, u64 userdata = 0);
/// This is to be called when something outside of the HLE threads, such as the graphics thread,
/// wants to schedule things to be executed on the main thread.
///
/// @note This doesn't change slice_length and thus events scheduled by this might be
/// called with a delay of up to MAX_SLICE_LENGTH
void ScheduleEventThreadsafe(s64 cycles_into_future, const EventType* event_type,
u64 userdata = 0);
/**
* This is to be called when something outside of the HLE threads, such as the graphics thread,
* wants to schedule things to be executed on the main thread.
* Note that this doesn't change slice_length and thus events scheduled by this might be called
* with a delay of up to MAX_SLICE_LENGTH
*/
void ScheduleEventThreadsafe(s64 cycles_into_future, const EventType* event_type, u64 userdata);
void UnscheduleEvent(const EventType* event_type, u64 userdata);
void UnscheduleEventThreadsafe(const EventType* event_type, u64 userdata);
void UnscheduleEvent(const EventType* event_type, u64 userdata);
void UnscheduleEventThreadsafe(const EventType* event_type, u64 userdata);
/// We only permit one event of each type in the queue at a time.
void RemoveEvent(const EventType* event_type);
void RemoveNormalAndThreadsafeEvent(const EventType* event_type);
/// We only permit one event of each type in the queue at a time.
void RemoveEvent(const EventType* event_type);
void RemoveNormalAndThreadsafeEvent(const EventType* event_type);
void ForceExceptionCheck(s64 cycles);
/** Advance must be called at the beginning of dispatcher loops, not the end. Advance() ends
* the previous timing slice and begins the next one; you must Advance from the previous
* slice to the current one before executing any cycles. CoreTiming starts in slice -1 so an
* Advance() is required to initialize the slice length before the first cycle of emulated
* instructions is executed.
*/
void Advance();
void MoveEvents();
/// This should only be called from the emu thread; if you are calling it from any other thread,
/// you are doing something evil
u64 GetTicks() const;
/// Pretend that the main CPU has executed enough cycles to reach the next event.
void Idle();
u64 GetIdleTicks() const;
/// Clear all pending events. This should ONLY be done on exit.
void ClearPendingEvents();
void AddTicks(u64 ticks);
void ForceExceptionCheck(s64 cycles);
/// Advance must be called at the beginning of dispatcher loops, not the end. Advance() ends
/// the previous timing slice and begins the next one; you must Advance from the previous
/// slice to the current one before executing any cycles. CoreTiming starts in slice -1 so an
/// Advance() is required to initialize the slice length before the first cycle of emulated
/// instructions is executed.
void Advance();
std::chrono::microseconds GetGlobalTimeUs();
/// Pretend that the main CPU has executed enough cycles to reach the next event.
void Idle();
int GetDowncount();
std::chrono::microseconds GetGlobalTimeUs() const;
int GetDowncount() const;
private:
struct Event;
/// Clear all pending events. This should ONLY be done on exit.
void ClearPendingEvents();
void MoveEvents();
s64 global_timer = 0;
s64 idled_cycles = 0;
int slice_length = 0;
int downcount = 0;
// Are we in a function that has been called from Advance()
// If events are scheduled from a function that gets called from Advance(),
// don't change slice_length and downcount.
bool is_global_timer_sane = false;
// The queue is a min-heap using std::make_heap/push_heap/pop_heap.
// We don't use std::priority_queue because we need to be able to serialize, unserialize and
// erase arbitrary events (RemoveEvent()) regardless of the queue order. These aren't
// accommodated by the standard adaptor class.
std::vector<Event> event_queue;
u64 event_fifo_id = 0;
// Stores each element separately as a linked list node so pointers to elements
// remain stable regardless of rehashes/resizing.
std::unordered_map<std::string, EventType> event_types;
// The queue for thread-safely storing events from other threads until they are added
// to the event_queue by the emu thread
Common::MPSCQueue<Event> ts_queue;
// The queue for thread-safely unscheduling events from other threads
Common::MPSCQueue<std::pair<const EventType*, u64>> unschedule_queue;
EventType* ev_lost = nullptr;
};
} // namespace Core::Timing
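A minimal sketch of the recurring-event pattern described in the class comment above (the event name and period here are hypothetical; s64/u64 come from common/common_types.h):

Core::Timing::CoreTiming core_timing;
core_timing.Initialize();

constexpr s64 period_in_cycles = 10000;
Core::Timing::EventType* event = nullptr;
event = core_timing.RegisterEvent("ExampleEvent", [&](u64 userdata, int cycles_late) {
    // Reschedule relative to the ideal deadline, compensating for lateness.
    core_timing.ScheduleEvent(period_in_cycles - cycles_late, event, userdata);
});

core_timing.ScheduleEvent(period_in_cycles, event);
core_timing.Advance(); // dispatcher loops call this at the start of each slice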

View File

@@ -27,7 +27,8 @@ void CpuCoreManager::Initialize(System& system) {
exclusive_monitor = Cpu::MakeExclusiveMonitor(cores.size());
for (std::size_t index = 0; index < cores.size(); ++index) {
cores[index] = std::make_unique<Cpu>(*exclusive_monitor, *barrier, index);
cores[index] =
std::make_unique<Cpu>(system.CoreTiming(), *exclusive_monitor, *barrier, index);
}
// Create threads for CPU cores 1-3, and build thread_to_cpu map

View File

@@ -398,7 +398,8 @@ static bool ValidCryptoRevisionString(std::string_view base, size_t begin, size_
}
void KeyManager::LoadFromFile(const std::string& filename, bool is_title_keys) {
std::ifstream file(filename);
std::ifstream file;
OpenFStream(file, filename, std::ios_base::in);
if (!file.is_open())
return;

View File

@@ -47,7 +47,7 @@ std::size_t VectorVfsFile::Write(const u8* data_, std::size_t length, std::size_
if (offset + length > data.size())
data.resize(offset + length);
const auto write = std::min(length, data.size() - offset);
std::memcpy(data.data(), data_, write);
std::memcpy(data.data() + offset, data_, write);
return write;
}
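The added "+ offset" matters: the old code always copied to the start of the backing vector, clobbering bytes 0..length-1 regardless of the requested position. Illustrative sketch:

std::vector<u8> data(8, 0);
const u8 src[2] = {0xAA, 0xBB};
std::memcpy(data.data() + 4, src, sizeof(src)); // fixed: writes bytes 4..5
// std::memcpy(data.data(), src, sizeof(src));  // old behaviour: wrote bytes 0..1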

View File

@@ -67,7 +67,7 @@ static bool IsWithinTouchscreen(const Layout::FramebufferLayout& layout, unsigne
framebuffer_x >= layout.screen.left && framebuffer_x < layout.screen.right);
}
std::tuple<unsigned, unsigned> EmuWindow::ClipToTouchScreen(unsigned new_x, unsigned new_y) {
std::tuple<unsigned, unsigned> EmuWindow::ClipToTouchScreen(unsigned new_x, unsigned new_y) const {
new_x = std::max(new_x, framebuffer_layout.screen.left);
new_x = std::min(new_x, framebuffer_layout.screen.right - 1);

View File

@@ -166,7 +166,7 @@ private:
/**
* Clip the provided coordinates to be inside the touchscreen area.
*/
std::tuple<unsigned, unsigned> ClipToTouchScreen(unsigned new_x, unsigned new_y);
std::tuple<unsigned, unsigned> ClipToTouchScreen(unsigned new_x, unsigned new_y) const;
};
} // namespace Core::Frontend

View File

@@ -12,12 +12,12 @@ namespace Layout {
// Finds the largest subrectangle contained in the window area that is confined to the aspect ratio
template <class T>
static MathUtil::Rectangle<T> maxRectangle(MathUtil::Rectangle<T> window_area,
float screen_aspect_ratio) {
static Common::Rectangle<T> MaxRectangle(Common::Rectangle<T> window_area,
float screen_aspect_ratio) {
float scale = std::min(static_cast<float>(window_area.GetWidth()),
window_area.GetHeight() / screen_aspect_ratio);
return MathUtil::Rectangle<T>{0, 0, static_cast<T>(std::round(scale)),
static_cast<T>(std::round(scale * screen_aspect_ratio))};
return Common::Rectangle<T>{0, 0, static_cast<T>(std::round(scale)),
static_cast<T>(std::round(scale * screen_aspect_ratio))};
}
FramebufferLayout DefaultFrameLayout(unsigned width, unsigned height) {
@@ -29,8 +29,8 @@ FramebufferLayout DefaultFrameLayout(unsigned width, unsigned height) {
const float emulation_aspect_ratio{static_cast<float>(ScreenUndocked::Height) /
ScreenUndocked::Width};
MathUtil::Rectangle<unsigned> screen_window_area{0, 0, width, height};
MathUtil::Rectangle<unsigned> screen = maxRectangle(screen_window_area, emulation_aspect_ratio);
Common::Rectangle<unsigned> screen_window_area{0, 0, width, height};
Common::Rectangle<unsigned> screen = MaxRectangle(screen_window_area, emulation_aspect_ratio);
float window_aspect_ratio = static_cast<float>(height) / width;
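A numeric sketch of MaxRectangle() with illustrative values:

const float aspect = 720.0f / 1280.0f;                   // ScreenUndocked height / width
const float scale = std::min(1920.0f, 1080.0f / aspect); // min(1920, 1920) = 1920
// Resulting subrectangle: round(scale) x round(scale * aspect) = 1920 x 1080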

View File

@@ -16,7 +16,7 @@ struct FramebufferLayout {
unsigned width{ScreenUndocked::Width};
unsigned height{ScreenUndocked::Height};
MathUtil::Rectangle<unsigned> screen;
Common::Rectangle<unsigned> screen;
/**
* Returns the ratio of pixel size of the screen, compared to the native size of the undocked

View File

@@ -124,7 +124,7 @@ using AnalogDevice = InputDevice<std::tuple<float, float>>;
* Orientation is determined by right-hand rule.
* Units: deg/sec
*/
using MotionDevice = InputDevice<std::tuple<Math::Vec3<float>, Math::Vec3<float>>>;
using MotionDevice = InputDevice<std::tuple<Common::Vec3<float>, Common::Vec3<float>>>;
/**
* A touch device is an input device that returns a tuple of two floats and a bool. The floats are

View File

@@ -17,8 +17,7 @@
#include "core/hle/result.h"
#include "core/memory.h"
namespace Kernel {
namespace AddressArbiter {
namespace Kernel::AddressArbiter {
// Performs actual address waiting logic.
static ResultCode WaitForAddress(VAddr address, s64 timeout) {
@@ -176,5 +175,4 @@ ResultCode WaitForAddressIfEqual(VAddr address, s32 value, s64 timeout) {
return WaitForAddress(address, timeout);
}
} // namespace AddressArbiter
} // namespace Kernel
} // namespace Kernel::AddressArbiter

View File

@@ -8,9 +8,8 @@
union ResultCode;
namespace Kernel {
namespace Kernel::AddressArbiter {
namespace AddressArbiter {
enum class ArbitrationType {
WaitIfLessThan = 0,
DecrementAndWaitIfLessThan = 1,
@@ -29,6 +28,5 @@ ResultCode ModifyByWaitingCountAndSignalToAddressIfEqual(VAddr address, s32 valu
ResultCode WaitForAddressIfLessThan(VAddr address, s32 value, s64 timeout, bool should_decrement);
ResultCode WaitForAddressIfEqual(VAddr address, s32 value, s64 timeout);
} // namespace AddressArbiter
} // namespace Kernel
} // namespace Kernel::AddressArbiter

View File

@@ -14,6 +14,7 @@ constexpr ResultCode ERR_MAX_CONNECTIONS_REACHED{ErrorModule::Kernel, 7};
constexpr ResultCode ERR_INVALID_CAPABILITY_DESCRIPTOR{ErrorModule::Kernel, 14};
constexpr ResultCode ERR_INVALID_SIZE{ErrorModule::Kernel, 101};
constexpr ResultCode ERR_INVALID_ADDRESS{ErrorModule::Kernel, 102};
constexpr ResultCode ERR_OUT_OF_MEMORY{ErrorModule::Kernel, 104};
constexpr ResultCode ERR_HANDLE_TABLE_FULL{ErrorModule::Kernel, 105};
constexpr ResultCode ERR_INVALID_ADDRESS_STATE{ErrorModule::Kernel, 106};
constexpr ResultCode ERR_INVALID_MEMORY_PERMISSIONS{ErrorModule::Kernel, 108};

View File

@@ -14,32 +14,47 @@
namespace Kernel {
namespace {
constexpr u16 GetSlot(Handle handle) {
return handle >> 15;
return static_cast<u16>(handle >> 15);
}
constexpr u16 GetGeneration(Handle handle) {
return handle & 0x7FFF;
return static_cast<u16>(handle & 0x7FFF);
}
} // Anonymous namespace
HandleTable::HandleTable() {
next_generation = 1;
Clear();
}
HandleTable::~HandleTable() = default;
ResultCode HandleTable::SetSize(s32 handle_table_size) {
if (static_cast<u32>(handle_table_size) > MAX_COUNT) {
return ERR_OUT_OF_MEMORY;
}
// Values less than or equal to zero indicate to use the maximum allowable
// size for the handle table in the actual kernel, so we ignore the given
// value in that case, since we assume this by default unless this function
// is called.
if (handle_table_size > 0) {
table_size = static_cast<u16>(handle_table_size);
}
return RESULT_SUCCESS;
}
ResultVal<Handle> HandleTable::Create(SharedPtr<Object> obj) {
DEBUG_ASSERT(obj != nullptr);
u16 slot = next_free_slot;
if (slot >= generations.size()) {
const u16 slot = next_free_slot;
if (slot >= table_size) {
LOG_ERROR(Kernel, "Unable to allocate Handle, too many slots in use.");
return ERR_HANDLE_TABLE_FULL;
}
next_free_slot = generations[slot];
u16 generation = next_generation++;
const u16 generation = next_generation++;
// Overflow count so it fits in the 15 bits dedicated to the generation in the handle.
// Horizon OS uses zero to represent an invalid handle, so skip to 1.
@@ -64,10 +79,11 @@ ResultVal<Handle> HandleTable::Duplicate(Handle handle) {
}
ResultCode HandleTable::Close(Handle handle) {
if (!IsValid(handle))
if (!IsValid(handle)) {
return ERR_INVALID_HANDLE;
}
u16 slot = GetSlot(handle);
const u16 slot = GetSlot(handle);
objects[slot] = nullptr;
@@ -77,10 +93,10 @@ ResultCode HandleTable::Close(Handle handle) {
}
bool HandleTable::IsValid(Handle handle) const {
std::size_t slot = GetSlot(handle);
u16 generation = GetGeneration(handle);
const std::size_t slot = GetSlot(handle);
const u16 generation = GetGeneration(handle);
return slot < MAX_COUNT && objects[slot] != nullptr && generations[slot] == generation;
return slot < table_size && objects[slot] != nullptr && generations[slot] == generation;
}
SharedPtr<Object> HandleTable::GetGeneric(Handle handle) const {
@@ -97,7 +113,7 @@ SharedPtr<Object> HandleTable::GetGeneric(Handle handle) const {
}
void HandleTable::Clear() {
for (u16 i = 0; i < MAX_COUNT; ++i) {
for (u16 i = 0; i < table_size; ++i) {
generations[i] = i + 1;
objects[i] = nullptr;
}

View File

@@ -49,6 +49,20 @@ public:
HandleTable();
~HandleTable();
/**
* Sets the number of handles that may be in use at one time
* for this handle table.
*
* @param handle_table_size The desired size to limit the handle table to.
*
* @returns an error code indicating if initialization was successful.
* If initialization was not successful, then ERR_OUT_OF_MEMORY
* will be returned.
*
* @pre handle_table_size must be within the range [0, 1024]
*/
ResultCode SetSize(s32 handle_table_size);
/**
* Allocates a handle for the given object.
* @return The created Handle or one of the following errors:
@@ -103,14 +117,21 @@ private:
*/
std::array<u16, MAX_COUNT> generations;
/**
* The limited size of the handle table. This can be specified by process
* capabilities in order to restrict the overall number of handles that
* can be created in a process instance
*/
u16 table_size = static_cast<u16>(MAX_COUNT);
/**
* Global counter of the number of created handles. Stored in `generations` when a handle is
* created, and wraps around to 1 when it hits 0x8000.
*/
u16 next_generation;
u16 next_generation = 1;
/// Head of the free slots linked list.
u16 next_free_slot;
u16 next_free_slot = 0;
};
} // namespace Kernel
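As a sketch of the packing that GetSlot() and GetGeneration() decode (the MakeHandle() helper is hypothetical; the low 15 bits hold the generation, zero is reserved as the invalid handle, and u16/u32 come from common/common_types.h):

using Handle = u32;

constexpr Handle MakeHandle(u16 slot, u16 generation) {
    return (static_cast<Handle>(slot) << 15) | (generation & 0x7FFF);
}

static_assert(MakeHandle(3, 42) >> 15 == 3);
static_assert((MakeHandle(3, 42) & 0x7FFF) == 42);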

View File

@@ -86,11 +86,11 @@ static void ThreadWakeupCallback(u64 thread_handle, [[maybe_unused]] int cycles_
}
struct KernelCore::Impl {
void Initialize(KernelCore& kernel) {
void Initialize(KernelCore& kernel, Core::Timing::CoreTiming& core_timing) {
Shutdown();
InitializeSystemResourceLimit(kernel);
InitializeThreads();
InitializeThreads(core_timing);
}
void Shutdown() {
@@ -122,9 +122,9 @@ struct KernelCore::Impl {
ASSERT(system_resource_limit->SetLimitValue(ResourceType::Sessions, 900).IsSuccess());
}
void InitializeThreads() {
void InitializeThreads(Core::Timing::CoreTiming& core_timing) {
thread_wakeup_event_type =
Core::Timing::RegisterEvent("ThreadWakeupCallback", ThreadWakeupCallback);
core_timing.RegisterEvent("ThreadWakeupCallback", ThreadWakeupCallback);
}
std::atomic<u32> next_object_id{0};
@@ -152,8 +152,8 @@ KernelCore::~KernelCore() {
Shutdown();
}
void KernelCore::Initialize() {
impl->Initialize(*this);
void KernelCore::Initialize(Core::Timing::CoreTiming& core_timing) {
impl->Initialize(*this, core_timing);
}
void KernelCore::Shutdown() {

View File

@@ -12,8 +12,9 @@ template <typename T>
class ResultVal;
namespace Core::Timing {
class CoreTiming;
struct EventType;
}
} // namespace Core::Timing
namespace Kernel {
@@ -39,7 +40,11 @@ public:
KernelCore& operator=(KernelCore&&) = delete;
/// Resets the kernel to a clean slate for use.
void Initialize();
///
/// @param core_timing CoreTiming instance used to create any necessary
/// kernel-specific callback events.
///
void Initialize(Core::Timing::CoreTiming& core_timing);
/// Clears all resources in use by the kernel instance.
void Shutdown();

View File

@@ -99,7 +99,13 @@ ResultCode Process::LoadFromMetadata(const FileSys::ProgramMetadata& metadata) {
vm_manager.Reset(metadata.GetAddressSpaceType());
const auto& caps = metadata.GetKernelCapabilities();
return capabilities.InitializeForUserProcess(caps.data(), caps.size(), vm_manager);
const auto capability_init_result =
capabilities.InitializeForUserProcess(caps.data(), caps.size(), vm_manager);
if (capability_init_result.IsError()) {
return capability_init_result;
}
return handle_table.SetSize(capabilities.GetHandleTableSize());
}
void Process::Run(VAddr entry_point, s32 main_thread_priority, u32 stack_size) {

View File

@@ -96,7 +96,7 @@ void ProcessCapabilities::InitializeForMetadatalessProcess() {
interrupt_capabilities.set();
// Allow using the maximum possible amount of handles
handle_table_size = static_cast<u32>(HandleTable::MAX_COUNT);
handle_table_size = static_cast<s32>(HandleTable::MAX_COUNT);
// Allow all debugging capabilities.
is_debuggable = true;
@@ -337,7 +337,7 @@ ResultCode ProcessCapabilities::HandleHandleTableFlags(u32 flags) {
return ERR_RESERVED_VALUE;
}
handle_table_size = (flags >> 16) & 0x3FF;
handle_table_size = static_cast<s32>((flags >> 16) & 0x3FF);
return RESULT_SUCCESS;
}

View File

@@ -156,7 +156,7 @@ public:
}
/// Gets the number of total allowable handles for the process' handle table.
u32 GetHandleTableSize() const {
s32 GetHandleTableSize() const {
return handle_table_size;
}
@@ -252,7 +252,7 @@ private:
u64 core_mask = 0;
u64 priority_mask = 0;
u32 handle_table_size = 0;
s32 handle_table_size = 0;
u32 kernel_version = 0;
ProgramType program_type = ProgramType::SysModule;

View File

@@ -111,7 +111,7 @@ void Scheduler::SwitchContext(Thread* new_thread) {
void Scheduler::UpdateLastContextSwitchTime(Thread* thread, Process* process) {
const u64 prev_switch_ticks = last_context_switch_time;
const u64 most_recent_switch_ticks = Core::Timing::GetTicks();
const u64 most_recent_switch_ticks = Core::System::GetInstance().CoreTiming().GetTicks();
const u64 update_ticks = most_recent_switch_ticks - prev_switch_ticks;
if (thread != nullptr) {

View File

@@ -47,23 +47,6 @@ constexpr bool IsValidAddressRange(VAddr address, u64 size) {
return address + size > address;
}
// Checks if a given address range lies within a larger address range.
constexpr bool IsInsideAddressRange(VAddr address, u64 size, VAddr address_range_begin,
VAddr address_range_end) {
const VAddr end_address = address + size - 1;
return address_range_begin <= address && end_address <= address_range_end - 1;
}
bool IsInsideAddressSpace(const VMManager& vm, VAddr address, u64 size) {
return IsInsideAddressRange(address, size, vm.GetAddressSpaceBaseAddress(),
vm.GetAddressSpaceEndAddress());
}
bool IsInsideNewMapRegion(const VMManager& vm, VAddr address, u64 size) {
return IsInsideAddressRange(address, size, vm.GetNewMapRegionBaseAddress(),
vm.GetNewMapRegionEndAddress());
}
// 8 GiB
constexpr u64 MAIN_MEMORY_SIZE = 0x200000000;
@@ -105,14 +88,14 @@ ResultCode MapUnmapMemorySanityChecks(const VMManager& vm_manager, VAddr dst_add
return ERR_INVALID_ADDRESS_STATE;
}
if (!IsInsideAddressSpace(vm_manager, src_addr, size)) {
if (!vm_manager.IsWithinAddressSpace(src_addr, size)) {
LOG_ERROR(Kernel_SVC,
"Source is not within the address space, addr=0x{:016X}, size=0x{:016X}",
src_addr, size);
return ERR_INVALID_ADDRESS_STATE;
}
if (!IsInsideNewMapRegion(vm_manager, dst_addr, size)) {
if (!vm_manager.IsWithinNewMapRegion(dst_addr, size)) {
LOG_ERROR(Kernel_SVC,
"Destination is not within the new map region, addr=0x{:016X}, size=0x{:016X}",
dst_addr, size);
@@ -238,7 +221,7 @@ static ResultCode SetMemoryPermission(VAddr addr, u64 size, u32 prot) {
auto* const current_process = Core::CurrentProcess();
auto& vm_manager = current_process->VMManager();
if (!IsInsideAddressSpace(vm_manager, addr, size)) {
if (!vm_manager.IsWithinAddressSpace(addr, size)) {
LOG_ERROR(Kernel_SVC,
"Source is not within the address space, addr=0x{:016X}, size=0x{:016X}", addr,
size);
@@ -299,7 +282,7 @@ static ResultCode SetMemoryAttribute(VAddr address, u64 size, u32 mask, u32 attr
}
auto& vm_manager = Core::CurrentProcess()->VMManager();
if (!IsInsideAddressSpace(vm_manager, address, size)) {
if (!vm_manager.IsWithinAddressSpace(address, size)) {
LOG_ERROR(Kernel_SVC,
"Given address (0x{:016X}) is outside the bounds of the address space.", address);
return ERR_INVALID_ADDRESS_STATE;
@@ -918,6 +901,7 @@ static ResultCode GetInfo(u64* result, u64 info_id, u64 handle, u64 info_sub_id)
}
const auto& system = Core::System::GetInstance();
const auto& core_timing = system.CoreTiming();
const auto& scheduler = system.CurrentScheduler();
const auto* const current_thread = scheduler.GetCurrentThread();
const bool same_thread = current_thread == thread;
@@ -927,9 +911,9 @@ static ResultCode GetInfo(u64* result, u64 info_id, u64 handle, u64 info_sub_id)
if (same_thread && info_sub_id == 0xFFFFFFFFFFFFFFFF) {
const u64 thread_ticks = current_thread->GetTotalCPUTimeTicks();
out_ticks = thread_ticks + (Core::Timing::GetTicks() - prev_ctx_ticks);
out_ticks = thread_ticks + (core_timing.GetTicks() - prev_ctx_ticks);
} else if (same_thread && info_sub_id == system.CurrentCoreIndex()) {
out_ticks = Core::Timing::GetTicks() - prev_ctx_ticks;
out_ticks = core_timing.GetTicks() - prev_ctx_ticks;
}
*result = out_ticks;
@@ -1546,10 +1530,11 @@ static ResultCode SignalToAddress(VAddr address, u32 type, s32 value, s32 num_to
static u64 GetSystemTick() {
LOG_TRACE(Kernel_SVC, "called");
const u64 result{Core::Timing::GetTicks()};
auto& core_timing = Core::System::GetInstance().CoreTiming();
const u64 result{core_timing.GetTicks()};
// Advance time to defeat dumb games that busy-wait for the frame to end.
Core::Timing::AddTicks(400);
core_timing.AddTicks(400);
return result;
}

View File

@@ -43,7 +43,8 @@ Thread::~Thread() = default;
void Thread::Stop() {
// Cancel any outstanding wakeup events for this thread
Core::Timing::UnscheduleEvent(kernel.ThreadWakeupCallbackEventType(), callback_handle);
Core::System::GetInstance().CoreTiming().UnscheduleEvent(kernel.ThreadWakeupCallbackEventType(),
callback_handle);
kernel.ThreadWakeupCallbackHandleTable().Close(callback_handle);
callback_handle = 0;
@@ -85,13 +86,14 @@ void Thread::WakeAfterDelay(s64 nanoseconds) {
// This function might be called from any thread so we have to be cautious and use the
// thread-safe version of ScheduleEvent.
Core::Timing::ScheduleEventThreadsafe(Core::Timing::nsToCycles(nanoseconds),
kernel.ThreadWakeupCallbackEventType(), callback_handle);
Core::System::GetInstance().CoreTiming().ScheduleEventThreadsafe(
Core::Timing::nsToCycles(nanoseconds), kernel.ThreadWakeupCallbackEventType(),
callback_handle);
}
void Thread::CancelWakeupTimer() {
Core::Timing::UnscheduleEventThreadsafe(kernel.ThreadWakeupCallbackEventType(),
callback_handle);
Core::System::GetInstance().CoreTiming().UnscheduleEventThreadsafe(
kernel.ThreadWakeupCallbackEventType(), callback_handle);
}
static std::optional<s32> GetNextProcessorId(u64 mask) {
@@ -182,14 +184,13 @@ ResultVal<SharedPtr<Thread>> Thread::Create(KernelCore& kernel, std::string name
return ERR_INVALID_PROCESSOR_ID;
}
// TODO(yuriks): Other checks, returning 0xD9001BEA
if (!Memory::IsValidVirtualAddress(owner_process, entry_point)) {
LOG_ERROR(Kernel_SVC, "(name={}): invalid entry {:016X}", name, entry_point);
// TODO (bunnei): Find the correct error code to use here
return ResultCode(-1);
}
auto& system = Core::System::GetInstance();
SharedPtr<Thread> thread(new Thread(kernel));
thread->thread_id = kernel.CreateNewThreadID();
@@ -198,7 +199,7 @@ ResultVal<SharedPtr<Thread>> Thread::Create(KernelCore& kernel, std::string name
thread->stack_top = stack_top;
thread->tpidr_el0 = 0;
thread->nominal_priority = thread->current_priority = priority;
thread->last_running_ticks = Core::Timing::GetTicks();
thread->last_running_ticks = system.CoreTiming().GetTicks();
thread->processor_id = processor_id;
thread->ideal_core = processor_id;
thread->affinity_mask = 1ULL << processor_id;
@@ -209,7 +210,7 @@ ResultVal<SharedPtr<Thread>> Thread::Create(KernelCore& kernel, std::string name
thread->name = std::move(name);
thread->callback_handle = kernel.ThreadWakeupCallbackHandleTable().Create(thread).Unwrap();
thread->owner_process = &owner_process;
thread->scheduler = &Core::System::GetInstance().Scheduler(processor_id);
thread->scheduler = &system.Scheduler(processor_id);
thread->scheduler->AddThread(thread, priority);
thread->tls_address = thread->owner_process->MarkNextAvailableTLSSlotAsUsed(*thread);
@@ -258,7 +259,7 @@ void Thread::SetStatus(ThreadStatus new_status) {
}
if (status == ThreadStatus::Running) {
last_running_ticks = Core::Timing::GetTicks();
last_running_ticks = Core::System::GetInstance().CoreTiming().GetTicks();
}
status = new_status;

View File

@@ -17,8 +17,8 @@
#include "core/memory_setup.h"
namespace Kernel {
static const char* GetMemoryStateName(MemoryState state) {
namespace {
const char* GetMemoryStateName(MemoryState state) {
static constexpr const char* names[] = {
"Unmapped", "Io",
"Normal", "CodeStatic",
@@ -35,6 +35,14 @@ static const char* GetMemoryStateName(MemoryState state) {
return names[ToSvcMemoryState(state)];
}
// Checks if a given address range lies within a larger address range.
constexpr bool IsInsideAddressRange(VAddr address, u64 size, VAddr address_range_begin,
VAddr address_range_end) {
const VAddr end_address = address + size - 1;
return address_range_begin <= address && end_address <= address_range_end - 1;
}
} // Anonymous namespace
bool VirtualMemoryArea::CanBeMergedWith(const VirtualMemoryArea& next) const {
ASSERT(base + size == next.base);
if (permissions != next.permissions || state != next.state || attribute != next.attribute ||
@@ -249,8 +257,7 @@ ResultCode VMManager::ReprotectRange(VAddr target, u64 size, VMAPermission new_p
}
ResultVal<VAddr> VMManager::HeapAllocate(VAddr target, u64 size, VMAPermission perms) {
if (target < GetHeapRegionBaseAddress() || target + size > GetHeapRegionEndAddress() ||
target + size < target) {
if (!IsWithinHeapRegion(target, size)) {
return ERR_INVALID_ADDRESS;
}
@@ -285,8 +292,7 @@ ResultVal<VAddr> VMManager::HeapAllocate(VAddr target, u64 size, VMAPermission p
}
ResultCode VMManager::HeapFree(VAddr target, u64 size) {
if (target < GetHeapRegionBaseAddress() || target + size > GetHeapRegionEndAddress() ||
target + size < target) {
if (!IsWithinHeapRegion(target, size)) {
return ERR_INVALID_ADDRESS;
}
@@ -706,6 +712,11 @@ u64 VMManager::GetAddressSpaceWidth() const {
return address_space_width;
}
bool VMManager::IsWithinAddressSpace(VAddr address, u64 size) const {
return IsInsideAddressRange(address, size, GetAddressSpaceBaseAddress(),
GetAddressSpaceEndAddress());
}
VAddr VMManager::GetASLRRegionBaseAddress() const {
return aslr_region_base;
}
@@ -750,6 +761,11 @@ u64 VMManager::GetCodeRegionSize() const {
return code_region_end - code_region_base;
}
bool VMManager::IsWithinCodeRegion(VAddr address, u64 size) const {
return IsInsideAddressRange(address, size, GetCodeRegionBaseAddress(),
GetCodeRegionEndAddress());
}
VAddr VMManager::GetHeapRegionBaseAddress() const {
return heap_region_base;
}
@@ -762,6 +778,11 @@ u64 VMManager::GetHeapRegionSize() const {
return heap_region_end - heap_region_base;
}
bool VMManager::IsWithinHeapRegion(VAddr address, u64 size) const {
return IsInsideAddressRange(address, size, GetHeapRegionBaseAddress(),
GetHeapRegionEndAddress());
}
VAddr VMManager::GetMapRegionBaseAddress() const {
return map_region_base;
}
@@ -774,6 +795,10 @@ u64 VMManager::GetMapRegionSize() const {
return map_region_end - map_region_base;
}
bool VMManager::IsWithinMapRegion(VAddr address, u64 size) const {
return IsInsideAddressRange(address, size, GetMapRegionBaseAddress(), GetMapRegionEndAddress());
}
VAddr VMManager::GetNewMapRegionBaseAddress() const {
return new_map_region_base;
}
@@ -786,6 +811,11 @@ u64 VMManager::GetNewMapRegionSize() const {
return new_map_region_end - new_map_region_base;
}
bool VMManager::IsWithinNewMapRegion(VAddr address, u64 size) const {
return IsInsideAddressRange(address, size, GetNewMapRegionBaseAddress(),
GetNewMapRegionEndAddress());
}
VAddr VMManager::GetTLSIORegionBaseAddress() const {
return tls_io_region_base;
}
@@ -798,4 +828,9 @@ u64 VMManager::GetTLSIORegionSize() const {
return tls_io_region_end - tls_io_region_base;
}
bool VMManager::IsWithinTLSIORegion(VAddr address, u64 size) const {
return IsInsideAddressRange(address, size, GetTLSIORegionBaseAddress(),
GetTLSIORegionEndAddress());
}
} // namespace Kernel
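Restating the shared helper from the anonymous namespace above: the range start is inclusive and the range end is exclusive (the last byte must not pass end - 1). A couple of checks with illustrative addresses (VAddr and u64 come from common/common_types.h):

constexpr bool IsInsideAddressRange(VAddr address, u64 size, VAddr begin, VAddr end) {
    const VAddr end_address = address + size - 1;
    return begin <= address && end_address <= end - 1;
}

// A 0x1000-byte range at 0x2000 fits inside [0x1000, 0x4000):
static_assert(IsInsideAddressRange(0x2000, 0x1000, 0x1000, 0x4000));
static_assert(!IsInsideAddressRange(0x3FFF, 0x10, 0x1000, 0x4000));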

View File

@@ -432,18 +432,21 @@ public:
/// Gets the address space width in bits.
u64 GetAddressSpaceWidth() const;
/// Determines whether or not the given address range lies within the address space.
bool IsWithinAddressSpace(VAddr address, u64 size) const;
/// Gets the base address of the ASLR region.
VAddr GetASLRRegionBaseAddress() const;
/// Gets the end address of the ASLR region.
VAddr GetASLRRegionEndAddress() const;
/// Determines whether or not the specified address range is within the ASLR region.
bool IsWithinASLRRegion(VAddr address, u64 size) const;
/// Gets the size of the ASLR region.
u64 GetASLRRegionSize() const;
/// Determines whether or not the specified address range is within the ASLR region.
bool IsWithinASLRRegion(VAddr address, u64 size) const;
/// Gets the base address of the code region.
VAddr GetCodeRegionBaseAddress() const;
@@ -453,6 +456,9 @@ public:
/// Gets the total size of the code region in bytes.
u64 GetCodeRegionSize() const;
/// Determines whether or not the specified range is within the code region.
bool IsWithinCodeRegion(VAddr address, u64 size) const;
/// Gets the base address of the heap region.
VAddr GetHeapRegionBaseAddress() const;
@@ -462,6 +468,9 @@ public:
/// Gets the total size of the heap region in bytes.
u64 GetHeapRegionSize() const;
/// Determines whether or not the specified range is within the heap region.
bool IsWithinHeapRegion(VAddr address, u64 size) const;
/// Gets the base address of the map region.
VAddr GetMapRegionBaseAddress() const;
@@ -471,6 +480,9 @@ public:
/// Gets the total size of the map region in bytes.
u64 GetMapRegionSize() const;
/// Determines whether or not the specified range is within the map region.
bool IsWithinMapRegion(VAddr address, u64 size) const;
/// Gets the base address of the new map region.
VAddr GetNewMapRegionBaseAddress() const;
@@ -480,6 +492,9 @@ public:
/// Gets the total size of the new map region in bytes.
u64 GetNewMapRegionSize() const;
/// Determines whether or not the given address range is within the new map region.
bool IsWithinNewMapRegion(VAddr address, u64 size) const;
/// Gets the base address of the TLS IO region.
VAddr GetTLSIORegionBaseAddress() const;
@@ -489,6 +504,9 @@ public:
/// Gets the total size of the TLS IO region in bytes.
u64 GetTLSIORegionSize() const;
/// Determines if the given address range is within the TLS IO region.
bool IsWithinTLSIORegion(VAddr address, u64 size) const;
/// Each VMManager has its own page table, which is set as the main one when the owning process
/// is scheduled.
Memory::PageTable page_table;

View File

@@ -18,17 +18,11 @@
#include "core/hle/kernel/readable_event.h"
#include "core/hle/kernel/writable_event.h"
#include "core/hle/service/audio/audout_u.h"
#include "core/hle/service/audio/errors.h"
#include "core/memory.h"
namespace Service::Audio {
namespace ErrCodes {
enum {
ErrorUnknown = 2,
BufferCountExceeded = 8,
};
}
constexpr std::array<char, 10> DefaultDevice{{"DeviceOut"}};
constexpr int DefaultSampleRate{48000};
@@ -68,12 +62,12 @@ public:
RegisterHandlers(functions);
// This is the event handle used to check if the audio buffer was released
auto& kernel = Core::System::GetInstance().Kernel();
buffer_event = Kernel::WritableEvent::CreateEventPair(kernel, Kernel::ResetType::Sticky,
"IAudioOutBufferReleased");
auto& system = Core::System::GetInstance();
buffer_event = Kernel::WritableEvent::CreateEventPair(
system.Kernel(), Kernel::ResetType::Sticky, "IAudioOutBufferReleased");
stream = audio_core.OpenStream(audio_params.sample_rate, audio_params.channel_count,
std::move(unique_name),
stream = audio_core.OpenStream(system.CoreTiming(), audio_params.sample_rate,
audio_params.channel_count, std::move(unique_name),
[=]() { buffer_event.writable->Signal(); });
}
@@ -100,7 +94,7 @@ private:
if (stream->IsPlaying()) {
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(ResultCode(ErrorModule::Audio, ErrCodes::ErrorUnknown));
rb.Push(ERR_OPERATION_FAILED);
return;
}
@@ -143,7 +137,8 @@ private:
if (!audio_core.QueueBuffer(stream, tag, std::move(samples))) {
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(ResultCode(ErrorModule::Audio, ErrCodes::BufferCountExceeded));
rb.Push(ERR_BUFFER_COUNT_EXCEEDED);
return;
}
IPC::ResponseBuilder rb{ctx, 2};

View File

@@ -17,6 +17,7 @@
#include "core/hle/kernel/readable_event.h"
#include "core/hle/kernel/writable_event.h"
#include "core/hle/service/audio/audren_u.h"
#include "core/hle/service/audio/errors.h"
namespace Service::Audio {
@@ -37,15 +38,16 @@ public:
{8, &IAudioRenderer::SetRenderingTimeLimit, "SetRenderingTimeLimit"},
{9, &IAudioRenderer::GetRenderingTimeLimit, "GetRenderingTimeLimit"},
{10, &IAudioRenderer::RequestUpdateImpl, "RequestUpdateAuto"},
{11, nullptr, "ExecuteAudioRendererRendering"},
{11, &IAudioRenderer::ExecuteAudioRendererRendering, "ExecuteAudioRendererRendering"},
};
// clang-format on
RegisterHandlers(functions);
auto& kernel = Core::System::GetInstance().Kernel();
system_event = Kernel::WritableEvent::CreateEventPair(kernel, Kernel::ResetType::Sticky,
"IAudioRenderer:SystemEvent");
renderer = std::make_unique<AudioCore::AudioRenderer>(audren_params, system_event.writable);
auto& system = Core::System::GetInstance();
system_event = Kernel::WritableEvent::CreateEventPair(
system.Kernel(), Kernel::ResetType::Sticky, "IAudioRenderer:SystemEvent");
renderer = std::make_unique<AudioCore::AudioRenderer>(system.CoreTiming(), audren_params,
system_event.writable);
}
private:
@@ -137,6 +139,17 @@ private:
rb.Push(rendering_time_limit_percent);
}
void ExecuteAudioRendererRendering(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_Audio, "called");
// This service command currently only reports an unsupported operation
// error code, or aborts. Given that, we just always return an error
// code in this case.
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(ERR_NOT_SUPPORTED);
}
Kernel::EventPair system_event;
std::unique_ptr<AudioCore::AudioRenderer> renderer;
u32 rendering_time_limit_percent = 100;
@@ -234,7 +247,7 @@ AudRenU::AudRenU() : ServiceFramework("audren:u") {
{0, &AudRenU::OpenAudioRenderer, "OpenAudioRenderer"},
{1, &AudRenU::GetAudioRendererWorkBufferSize, "GetAudioRendererWorkBufferSize"},
{2, &AudRenU::GetAudioDeviceService, "GetAudioDeviceService"},
{3, nullptr, "OpenAudioRendererAuto"},
{3, &AudRenU::OpenAudioRendererAuto, "OpenAudioRendererAuto"},
{4, &AudRenU::GetAudioDeviceServiceWithRevisionInfo, "GetAudioDeviceServiceWithRevisionInfo"},
};
// clang-format on
@@ -247,12 +260,7 @@ AudRenU::~AudRenU() = default;
void AudRenU::OpenAudioRenderer(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_Audio, "called");
IPC::RequestParser rp{ctx};
auto params = rp.PopRaw<AudioCore::AudioRendererParameter>();
IPC::ResponseBuilder rb{ctx, 2, 0, 1};
rb.Push(RESULT_SUCCESS);
rb.PushIpcInterface<Audio::IAudioRenderer>(std::move(params));
OpenAudioRendererImpl(ctx);
}
void AudRenU::GetAudioRendererWorkBufferSize(Kernel::HLERequestContext& ctx) {
@@ -261,20 +269,20 @@ void AudRenU::GetAudioRendererWorkBufferSize(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_Audio, "called");
u64 buffer_sz = Common::AlignUp(4 * params.mix_buffer_count, 0x40);
buffer_sz += params.unknown_c * 1024;
buffer_sz += 0x940 * (params.unknown_c + 1);
buffer_sz += params.submix_count * 1024;
buffer_sz += 0x940 * (params.submix_count + 1);
buffer_sz += 0x3F0 * params.voice_count;
buffer_sz += Common::AlignUp(8 * (params.unknown_c + 1), 0x10);
buffer_sz += Common::AlignUp(8 * (params.submix_count + 1), 0x10);
buffer_sz += Common::AlignUp(8 * params.voice_count, 0x10);
buffer_sz +=
Common::AlignUp((0x3C0 * (params.sink_count + params.unknown_c) + 4 * params.sample_count) *
(params.mix_buffer_count + 6),
0x40);
buffer_sz += Common::AlignUp(
(0x3C0 * (params.sink_count + params.submix_count) + 4 * params.sample_count) *
(params.mix_buffer_count + 6),
0x40);
if (IsFeatureSupported(AudioFeatures::Splitter, params.revision)) {
u32 count = params.unknown_c + 1;
const u32 count = params.submix_count + 1;
u64 node_count = Common::AlignUp(count, 0x40);
u64 node_state_buffer_sz =
const u64 node_state_buffer_sz =
4 * (node_count * node_count) + 0xC * node_count + 2 * (node_count / 8);
u64 edge_matrix_buffer_sz = 0;
node_count = Common::AlignUp(count * count, 0x40);
@@ -288,19 +296,19 @@ void AudRenU::GetAudioRendererWorkBufferSize(Kernel::HLERequestContext& ctx) {
buffer_sz += 0x20 * (params.effect_count + 4 * params.voice_count) + 0x50;
if (IsFeatureSupported(AudioFeatures::Splitter, params.revision)) {
buffer_sz += 0xE0 * params.unknown_2c;
buffer_sz += 0xE0 * params.num_splitter_send_channels;
buffer_sz += 0x20 * params.splitter_count;
buffer_sz += Common::AlignUp(4 * params.unknown_2c, 0x10);
buffer_sz += Common::AlignUp(4 * params.num_splitter_send_channels, 0x10);
}
buffer_sz = Common::AlignUp(buffer_sz, 0x40) + 0x170 * params.sink_count;
u64 output_sz = buffer_sz + 0x280 * params.sink_count + 0x4B0 * params.effect_count +
((params.voice_count * 256) | 0x40);
if (params.unknown_1c >= 1) {
if (params.performance_frame_count >= 1) {
output_sz = Common::AlignUp(((16 * params.sink_count + 16 * params.effect_count +
16 * params.voice_count + 16) +
0x658) *
(params.unknown_1c + 1) +
(params.performance_frame_count + 1) +
0xc0,
0x40) +
output_sz;
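The size computation above leans heavily on Common::AlignUp. For reference, a minimal sketch of such a power-of-two alignment helper (the real one lives in yuzu's common library; the exact signature here is an assumption):

template <typename T>
constexpr T AlignUp(T value, std::size_t alignment) {
    // Round value up to the next multiple of alignment (alignment must be a power of two).
    return static_cast<T>((value + static_cast<T>(alignment - 1)) & ~static_cast<T>(alignment - 1));
}
// Example: AlignUp<u64>(buffer_sz, 0x40) rounds the running total to a 64-byte boundary.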
@@ -324,6 +332,12 @@ void AudRenU::GetAudioDeviceService(Kernel::HLERequestContext& ctx) {
rb.PushIpcInterface<Audio::IAudioDevice>();
}
void AudRenU::OpenAudioRendererAuto(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_Audio, "called");
OpenAudioRendererImpl(ctx);
}
void AudRenU::GetAudioDeviceServiceWithRevisionInfo(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service_Audio, "(STUBBED) called");
@@ -334,6 +348,15 @@ void AudRenU::GetAudioDeviceServiceWithRevisionInfo(Kernel::HLERequestContext& c
// based on the current revision
}
void AudRenU::OpenAudioRendererImpl(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
const auto params = rp.PopRaw<AudioCore::AudioRendererParameter>();
IPC::ResponseBuilder rb{ctx, 2, 0, 1};
rb.Push(RESULT_SUCCESS);
rb.PushIpcInterface<IAudioRenderer>(params);
}
bool AudRenU::IsFeatureSupported(AudioFeatures feature, u32_le revision) const {
u32_be version_num = (revision - Common::MakeMagic('R', 'E', 'V', '0')); // Byte swap
switch (feature) {
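The hunk is truncated here, but the line above is worth unpacking. A hedged sketch of how the 'REV0' arithmetic yields a numeric revision (the MakeMagic packing order is an assumption inferred from the byte-swap comment):

// 'REVn' magics pack characters little-endian, so the revision digit lands in
// the high byte; subtracting 'REV0' and reinterpreting the result as big-endian
// (u32_be) byte-swaps that digit down into a plain integer.
constexpr u32 MakeMagic(char a, char b, char c, char d) {
    return static_cast<u32>(a) | (static_cast<u32>(b) << 8) |
           (static_cast<u32>(c) << 16) | (static_cast<u32>(d) << 24);
}
// MakeMagic('R','E','V','5') - MakeMagic('R','E','V','0') == 5u << 24,
// which reads back as 5 once byte-swapped through u32_be.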

View File

@@ -21,8 +21,11 @@ private:
void OpenAudioRenderer(Kernel::HLERequestContext& ctx);
void GetAudioRendererWorkBufferSize(Kernel::HLERequestContext& ctx);
void GetAudioDeviceService(Kernel::HLERequestContext& ctx);
void OpenAudioRendererAuto(Kernel::HLERequestContext& ctx);
void GetAudioDeviceServiceWithRevisionInfo(Kernel::HLERequestContext& ctx);
void OpenAudioRendererImpl(Kernel::HLERequestContext& ctx);
enum class AudioFeatures : u32 {
Splitter,
};

View File

@@ -0,0 +1,15 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "core/hle/result.h"
namespace Service::Audio {
constexpr ResultCode ERR_OPERATION_FAILED{ErrorModule::Audio, 2};
constexpr ResultCode ERR_BUFFER_COUNT_EXCEEDED{ErrorModule::Audio, 8};
constexpr ResultCode ERR_NOT_SUPPORTED{ErrorModule::Audio, 513};
} // namespace Service::Audio
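For context, a hedged sketch of how a handler returns one of these codes through the IPC response builder, mirroring the ExecuteAudioRendererRendering stub earlier in this diff (the handler name here is hypothetical):

void SomeUnsupportedCommand(Kernel::HLERequestContext& ctx) {
    // Push only the result word; ERR_NOT_SUPPORTED already carries the
    // Audio module plus its description value.
    IPC::ResponseBuilder rb{ctx, 2};
    rb.Push(ERR_NOT_SUPPORTED);
}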

View File

@@ -7,6 +7,10 @@
#include "common/common_types.h"
#include "common/swap.h"
namespace Core::Timing {
class CoreTiming;
}
namespace Service::HID {
class ControllerBase {
public:
@@ -20,7 +24,8 @@ public:
virtual void OnRelease() = 0;
// When the controller is requesting an update for the shared memory
virtual void OnUpdate(u8* data, std::size_t size) = 0;
virtual void OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data,
std::size_t size) = 0;
// Called when input devices should be loaded
virtual void OnLoadInputDevices() = 0;

View File

@@ -21,8 +21,9 @@ void Controller_DebugPad::OnInit() {}
void Controller_DebugPad::OnRelease() {}
void Controller_DebugPad::OnUpdate(u8* data, std::size_t size) {
shared_memory.header.timestamp = Core::Timing::GetTicks();
void Controller_DebugPad::OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data,
std::size_t size) {
shared_memory.header.timestamp = core_timing.GetTicks();
shared_memory.header.total_entry_count = 17;
if (!IsControllerActivated()) {

View File

@@ -26,7 +26,7 @@ public:
void OnRelease() override;
// When the controller is requesting an update for the shared memory
void OnUpdate(u8* data, std::size_t size) override;
void OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data, std::size_t size) override;
// Called when input devices should be loaded
void OnLoadInputDevices() override;

View File

@@ -17,8 +17,9 @@ void Controller_Gesture::OnInit() {}
void Controller_Gesture::OnRelease() {}
void Controller_Gesture::OnUpdate(u8* data, std::size_t size) {
shared_memory.header.timestamp = Core::Timing::GetTicks();
void Controller_Gesture::OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data,
std::size_t size) {
shared_memory.header.timestamp = core_timing.GetTicks();
shared_memory.header.total_entry_count = 17;
if (!IsControllerActivated()) {

View File

@@ -22,7 +22,7 @@ public:
void OnRelease() override;
// When the controller is requesting an update for the shared memory
void OnUpdate(u8* data, size_t size) override;
void OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data, size_t size) override;
// Called when input devices should be loaded
void OnLoadInputDevices() override;

View File

@@ -19,8 +19,9 @@ void Controller_Keyboard::OnInit() {}
void Controller_Keyboard::OnRelease() {}
void Controller_Keyboard::OnUpdate(u8* data, std::size_t size) {
shared_memory.header.timestamp = Core::Timing::GetTicks();
void Controller_Keyboard::OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data,
std::size_t size) {
shared_memory.header.timestamp = core_timing.GetTicks();
shared_memory.header.total_entry_count = 17;
if (!IsControllerActivated()) {

View File

@@ -25,7 +25,7 @@ public:
void OnRelease() override;
// When the controller is requesting an update for the shared memory
void OnUpdate(u8* data, std::size_t size) override;
void OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data, std::size_t size) override;
// Called when input devices should be loaded
void OnLoadInputDevices() override;

View File

@@ -17,8 +17,9 @@ Controller_Mouse::~Controller_Mouse() = default;
void Controller_Mouse::OnInit() {}
void Controller_Mouse::OnRelease() {}
void Controller_Mouse::OnUpdate(u8* data, std::size_t size) {
shared_memory.header.timestamp = Core::Timing::GetTicks();
void Controller_Mouse::OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data,
std::size_t size) {
shared_memory.header.timestamp = core_timing.GetTicks();
shared_memory.header.total_entry_count = 17;
if (!IsControllerActivated()) {

View File

@@ -24,7 +24,7 @@ public:
void OnRelease() override;
// When the controller is requesting an update for the shared memory
void OnUpdate(u8* data, std::size_t size) override;
void OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data, std::size_t size) override;
// Called when input devices should be loaded
void OnLoadInputDevices() override;

View File

@@ -288,7 +288,8 @@ void Controller_NPad::RequestPadStateUpdate(u32 npad_id) {
rstick_entry.y = static_cast<s32>(stick_r_y_f * HID_JOYSTICK_MAX);
}
void Controller_NPad::OnUpdate(u8* data, std::size_t data_len) {
void Controller_NPad::OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data,
std::size_t data_len) {
if (!IsControllerActivated())
return;
for (std::size_t i = 0; i < shared_memory_entries.size(); i++) {
@@ -308,7 +309,7 @@ void Controller_NPad::OnUpdate(u8* data, std::size_t data_len) {
const auto& last_entry =
main_controller->npad[main_controller->common.last_entry_index];
main_controller->common.timestamp = Core::Timing::GetTicks();
main_controller->common.timestamp = core_timing.GetTicks();
main_controller->common.last_entry_index =
(main_controller->common.last_entry_index + 1) % 17;

View File

@@ -30,7 +30,7 @@ public:
void OnRelease() override;
// When the controller is requesting an update for the shared memory
void OnUpdate(u8* data, std::size_t size) override;
void OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data, std::size_t size) override;
// Called when input devices should be loaded
void OnLoadInputDevices() override;

View File

@@ -16,13 +16,14 @@ void Controller_Stubbed::OnInit() {}
void Controller_Stubbed::OnRelease() {}
void Controller_Stubbed::OnUpdate(u8* data, std::size_t size) {
void Controller_Stubbed::OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data,
std::size_t size) {
if (!smart_update) {
return;
}
CommonHeader header{};
header.timestamp = Core::Timing::GetTicks();
header.timestamp = core_timing.GetTicks();
header.total_entry_count = 17;
header.entry_count = 0;
header.last_entry_index = 0;

View File

@@ -20,7 +20,7 @@ public:
void OnRelease() override;
// When the controller is requesting an update for the shared memory
void OnUpdate(u8* data, std::size_t size) override;
void OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data, std::size_t size) override;
// Called when input devices should be loaded
void OnLoadInputDevices() override;

View File

@@ -20,8 +20,9 @@ void Controller_Touchscreen::OnInit() {}
void Controller_Touchscreen::OnRelease() {}
void Controller_Touchscreen::OnUpdate(u8* data, std::size_t size) {
shared_memory.header.timestamp = Core::Timing::GetTicks();
void Controller_Touchscreen::OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data,
std::size_t size) {
shared_memory.header.timestamp = core_timing.GetTicks();
shared_memory.header.total_entry_count = 17;
if (!IsControllerActivated()) {
@@ -48,7 +49,7 @@ void Controller_Touchscreen::OnUpdate(u8* data, std::size_t size) {
touch_entry.diameter_x = Settings::values.touchscreen.diameter_x;
touch_entry.diameter_y = Settings::values.touchscreen.diameter_y;
touch_entry.rotation_angle = Settings::values.touchscreen.rotation_angle;
const u64 tick = Core::Timing::GetTicks();
const u64 tick = core_timing.GetTicks();
touch_entry.delta_time = tick - last_touch;
last_touch = tick;
touch_entry.finger = Settings::values.touchscreen.finger;

View File

@@ -24,7 +24,7 @@ public:
void OnRelease() override;
// When the controller is requesting an update for the shared memory
void OnUpdate(u8* data, std::size_t size) override;
void OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data, std::size_t size) override;
// Called when input devices should be loaded
void OnLoadInputDevices() override;

View File

@@ -17,9 +17,10 @@ void Controller_XPad::OnInit() {}
void Controller_XPad::OnRelease() {}
void Controller_XPad::OnUpdate(u8* data, std::size_t size) {
void Controller_XPad::OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data,
std::size_t size) {
for (auto& xpad_entry : shared_memory.shared_memory_entries) {
xpad_entry.header.timestamp = Core::Timing::GetTicks();
xpad_entry.header.timestamp = core_timing.GetTicks();
xpad_entry.header.total_entry_count = 17;
if (!IsControllerActivated()) {

View File

@@ -22,7 +22,7 @@ public:
void OnRelease() override;
// When the controller is requesting an update for the shared memory
void OnUpdate(u8* data, std::size_t size) override;
void OnUpdate(const Core::Timing::CoreTiming& core_timing, u8* data, std::size_t size) override;
// Called when input devices should be loaded
void OnLoadInputDevices() override;

View File

@@ -73,13 +73,15 @@ IAppletResource::IAppletResource() : ServiceFramework("IAppletResource") {
GetController<Controller_Stubbed>(HidController::Unknown3).SetCommonHeaderOffset(0x5000);
// Register update callbacks
pad_update_event = Core::Timing::RegisterEvent(
"HID::UpdatePadCallback",
[this](u64 userdata, int cycles_late) { UpdateControllers(userdata, cycles_late); });
auto& core_timing = Core::System::GetInstance().CoreTiming();
pad_update_event =
core_timing.RegisterEvent("HID::UpdatePadCallback", [this](u64 userdata, int cycles_late) {
UpdateControllers(userdata, cycles_late);
});
// TODO(shinyquagsire23): Other update callbacks? (accel, gyro?)
Core::Timing::ScheduleEvent(pad_update_ticks, pad_update_event);
core_timing.ScheduleEvent(pad_update_ticks, pad_update_event);
ReloadInputDevices();
}
@@ -93,7 +95,7 @@ void IAppletResource::DeactivateController(HidController controller) {
}
IAppletResource::~IAppletResource() {
Core::Timing::UnscheduleEvent(pad_update_event, 0);
Core::System::GetInstance().CoreTiming().UnscheduleEvent(pad_update_event, 0);
}
void IAppletResource::GetSharedMemoryHandle(Kernel::HLERequestContext& ctx) {
@@ -105,15 +107,17 @@ void IAppletResource::GetSharedMemoryHandle(Kernel::HLERequestContext& ctx) {
}
void IAppletResource::UpdateControllers(u64 userdata, int cycles_late) {
auto& core_timing = Core::System::GetInstance().CoreTiming();
const bool should_reload = Settings::values.is_device_reload_pending.exchange(false);
for (const auto& controller : controllers) {
if (should_reload) {
controller->OnLoadInputDevices();
}
controller->OnUpdate(shared_mem->GetPointer(), SHARED_MEMORY_SIZE);
controller->OnUpdate(core_timing, shared_mem->GetPointer(), SHARED_MEMORY_SIZE);
}
Core::Timing::ScheduleEvent(pad_update_ticks - cycles_late, pad_update_event);
core_timing.ScheduleEvent(pad_update_ticks - cycles_late, pad_update_event);
}
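The update path above illustrates the recurring-event idiom this changeset migrates onto an instance-based CoreTiming. A condensed sketch with assumed names (PeriodicUpdater, interval_ticks, DoWork):

class PeriodicUpdater {
public:
    explicit PeriodicUpdater(Core::Timing::CoreTiming& core_timing) : core_timing{core_timing} {
        update_event = core_timing.RegisterEvent(
            "PeriodicUpdater::Update", [this](u64 userdata, int cycles_late) {
                DoWork();
                // Subtract our own lateness so the average period stays stable.
                this->core_timing.ScheduleEvent(interval_ticks - cycles_late, update_event);
            });
        core_timing.ScheduleEvent(interval_ticks, update_event);
    }
    ~PeriodicUpdater() {
        core_timing.UnscheduleEvent(update_event, 0);
    }
private:
    void DoWork() {}
    static constexpr u64 interval_ticks = 100000;
    Core::Timing::EventType* update_event = nullptr;
    Core::Timing::CoreTiming& core_timing;
};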
class IActiveVibrationDeviceList final : public ServiceFramework<IActiveVibrationDeviceList> {

View File

@@ -15,7 +15,7 @@ namespace Kernel {
class SharedMemory;
}
namespace SM {
namespace Service::SM {
class ServiceManager;
}

View File

@@ -98,7 +98,7 @@ void IRS::GetImageTransferProcessorState(Kernel::HLERequestContext& ctx) {
IPC::ResponseBuilder rb{ctx, 5};
rb.Push(RESULT_SUCCESS);
rb.PushRaw<u64>(Core::Timing::GetTicks());
rb.PushRaw<u64>(Core::System::GetInstance().CoreTiming().GetTicks());
rb.PushRaw<u32>(0);
}

View File

@@ -23,7 +23,7 @@ u32 nvdisp_disp0::ioctl(Ioctl command, const std::vector<u8>& input, std::vector
void nvdisp_disp0::flip(u32 buffer_handle, u32 offset, u32 format, u32 width, u32 height,
u32 stride, NVFlinger::BufferQueue::BufferTransformFlags transform,
const MathUtil::Rectangle<int>& crop_rect) {
const Common::Rectangle<int>& crop_rect) {
VAddr addr = nvmap_dev->GetObjectAddress(buffer_handle);
LOG_TRACE(Service,
"Drawing from address {:X} offset {:08X} Width {} Height {} Stride {} Format {}",

View File

@@ -25,7 +25,7 @@ public:
/// Performs a screen flip, drawing the buffer pointed to by the handle.
void flip(u32 buffer_handle, u32 offset, u32 format, u32 width, u32 height, u32 stride,
NVFlinger::BufferQueue::BufferTransformFlags transform,
const MathUtil::Rectangle<int>& crop_rect);
const Common::Rectangle<int>& crop_rect);
private:
std::shared_ptr<nvmap> nvmap_dev;

View File

@@ -5,6 +5,7 @@
#include <cstring>
#include "common/assert.h"
#include "common/logging/log.h"
#include "core/core.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/hle/service/nvdrv/devices/nvhost_ctrl_gpu.h"
@@ -184,7 +185,7 @@ u32 nvhost_ctrl_gpu::GetGpuTime(const std::vector<u8>& input, std::vector<u8>& o
IoctlGetGpuTime params{};
std::memcpy(&params, input.data(), input.size());
params.gpu_time = Core::Timing::cyclesToNs(Core::Timing::GetTicks());
params.gpu_time = Core::Timing::cyclesToNs(Core::System::GetInstance().CoreTiming().GetTicks());
std::memcpy(output.data(), &params, output.size());
return 0;
}

View File

@@ -63,7 +63,7 @@ const IGBPBuffer& BufferQueue::RequestBuffer(u32 slot) const {
}
void BufferQueue::QueueBuffer(u32 slot, BufferTransformFlags transform,
const MathUtil::Rectangle<int>& crop_rect) {
const Common::Rectangle<int>& crop_rect) {
auto itr = std::find_if(queue.begin(), queue.end(),
[&](const Buffer& buffer) { return buffer.slot == slot; });
ASSERT(itr != queue.end());

View File

@@ -67,14 +67,14 @@ public:
Status status = Status::Free;
IGBPBuffer igbp_buffer;
BufferTransformFlags transform;
MathUtil::Rectangle<int> crop_rect;
Common::Rectangle<int> crop_rect;
};
void SetPreallocatedBuffer(u32 slot, const IGBPBuffer& igbp_buffer);
std::optional<u32> DequeueBuffer(u32 width, u32 height);
const IGBPBuffer& RequestBuffer(u32 slot) const;
void QueueBuffer(u32 slot, BufferTransformFlags transform,
const MathUtil::Rectangle<int>& crop_rect);
const Common::Rectangle<int>& crop_rect);
std::optional<std::reference_wrapper<const Buffer>> AcquireBuffer();
void ReleaseBuffer(u32 slot);
u32 Query(QueryType type);

View File

@@ -14,11 +14,12 @@
#include "core/core_timing_util.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/readable_event.h"
#include "core/hle/kernel/writable_event.h"
#include "core/hle/service/nvdrv/devices/nvdisp_disp0.h"
#include "core/hle/service/nvdrv/nvdrv.h"
#include "core/hle/service/nvflinger/buffer_queue.h"
#include "core/hle/service/nvflinger/nvflinger.h"
#include "core/hle/service/vi/display/vi_display.h"
#include "core/hle/service/vi/layer/vi_layer.h"
#include "core/perf_stats.h"
#include "video_core/renderer_base.h"
@@ -27,19 +28,25 @@ namespace Service::NVFlinger {
constexpr std::size_t SCREEN_REFRESH_RATE = 60;
constexpr u64 frame_ticks = static_cast<u64>(Core::Timing::BASE_CLOCK_RATE / SCREEN_REFRESH_RATE);
NVFlinger::NVFlinger() {
NVFlinger::NVFlinger(Core::Timing::CoreTiming& core_timing) : core_timing{core_timing} {
displays.emplace_back(0, "Default");
displays.emplace_back(1, "External");
displays.emplace_back(2, "Edid");
displays.emplace_back(3, "Internal");
displays.emplace_back(4, "Null");
// Schedule the screen composition events
composition_event =
Core::Timing::RegisterEvent("ScreenComposition", [this](u64 userdata, int cycles_late) {
core_timing.RegisterEvent("ScreenComposition", [this](u64 userdata, int cycles_late) {
Compose();
Core::Timing::ScheduleEvent(frame_ticks - cycles_late, composition_event);
this->core_timing.ScheduleEvent(frame_ticks - cycles_late, composition_event);
});
Core::Timing::ScheduleEvent(frame_ticks, composition_event);
core_timing.ScheduleEvent(frame_ticks, composition_event);
}
NVFlinger::~NVFlinger() {
Core::Timing::UnscheduleEvent(composition_event, 0);
core_timing.UnscheduleEvent(composition_event, 0);
}
void NVFlinger::SetNVDrvInstance(std::shared_ptr<Nvidia::Module> instance) {
@@ -52,13 +59,14 @@ std::optional<u64> NVFlinger::OpenDisplay(std::string_view name) {
// TODO(Subv): Currently we only support the Default display.
ASSERT(name == "Default");
const auto itr = std::find_if(displays.begin(), displays.end(),
[&](const Display& display) { return display.name == name; });
const auto itr =
std::find_if(displays.begin(), displays.end(),
[&](const VI::Display& display) { return display.GetName() == name; });
if (itr == displays.end()) {
return {};
}
return itr->id;
return itr->GetID();
}
std::optional<u64> NVFlinger::CreateLayer(u64 display_id) {
@@ -68,13 +76,10 @@ std::optional<u64> NVFlinger::CreateLayer(u64 display_id) {
return {};
}
ASSERT_MSG(display->layers.empty(), "Only one layer is supported per display at the moment");
const u64 layer_id = next_layer_id++;
const u32 buffer_queue_id = next_buffer_queue_id++;
auto buffer_queue = std::make_shared<BufferQueue>(buffer_queue_id, layer_id);
display->layers.emplace_back(layer_id, buffer_queue);
buffer_queues.emplace_back(std::move(buffer_queue));
buffer_queues.emplace_back(buffer_queue_id, layer_id);
display->CreateLayer(layer_id, buffer_queues.back());
return layer_id;
}
@@ -85,7 +90,7 @@ std::optional<u32> NVFlinger::FindBufferQueueId(u64 display_id, u64 layer_id) co
return {};
}
return layer->buffer_queue->GetId();
return layer->GetBufferQueue().GetId();
}
Kernel::SharedPtr<Kernel::ReadableEvent> NVFlinger::FindVsyncEvent(u64 display_id) const {
@@ -95,20 +100,29 @@ Kernel::SharedPtr<Kernel::ReadableEvent> NVFlinger::FindVsyncEvent(u64 display_i
return nullptr;
}
return display->vsync_event.readable;
return display->GetVSyncEvent();
}
std::shared_ptr<BufferQueue> NVFlinger::FindBufferQueue(u32 id) const {
BufferQueue& NVFlinger::FindBufferQueue(u32 id) {
const auto itr = std::find_if(buffer_queues.begin(), buffer_queues.end(),
[&](const auto& queue) { return queue->GetId() == id; });
[id](const auto& queue) { return queue.GetId() == id; });
ASSERT(itr != buffer_queues.end());
return *itr;
}
Display* NVFlinger::FindDisplay(u64 display_id) {
const auto itr = std::find_if(displays.begin(), displays.end(),
[&](const Display& display) { return display.id == display_id; });
const BufferQueue& NVFlinger::FindBufferQueue(u32 id) const {
const auto itr = std::find_if(buffer_queues.begin(), buffer_queues.end(),
[id](const auto& queue) { return queue.GetId() == id; });
ASSERT(itr != buffer_queues.end());
return *itr;
}
VI::Display* NVFlinger::FindDisplay(u64 display_id) {
const auto itr =
std::find_if(displays.begin(), displays.end(),
[&](const VI::Display& display) { return display.GetID() == display_id; });
if (itr == displays.end()) {
return nullptr;
@@ -117,9 +131,10 @@ Display* NVFlinger::FindDisplay(u64 display_id) {
return &*itr;
}
const Display* NVFlinger::FindDisplay(u64 display_id) const {
const auto itr = std::find_if(displays.begin(), displays.end(),
[&](const Display& display) { return display.id == display_id; });
const VI::Display* NVFlinger::FindDisplay(u64 display_id) const {
const auto itr =
std::find_if(displays.begin(), displays.end(),
[&](const VI::Display& display) { return display.GetID() == display_id; });
if (itr == displays.end()) {
return nullptr;
@@ -128,57 +143,41 @@ const Display* NVFlinger::FindDisplay(u64 display_id) const {
return &*itr;
}
Layer* NVFlinger::FindLayer(u64 display_id, u64 layer_id) {
VI::Layer* NVFlinger::FindLayer(u64 display_id, u64 layer_id) {
auto* const display = FindDisplay(display_id);
if (display == nullptr) {
return nullptr;
}
const auto itr = std::find_if(display->layers.begin(), display->layers.end(),
[&](const Layer& layer) { return layer.id == layer_id; });
if (itr == display->layers.end()) {
return nullptr;
}
return &*itr;
return display->FindLayer(layer_id);
}
const Layer* NVFlinger::FindLayer(u64 display_id, u64 layer_id) const {
const VI::Layer* NVFlinger::FindLayer(u64 display_id, u64 layer_id) const {
const auto* const display = FindDisplay(display_id);
if (display == nullptr) {
return nullptr;
}
const auto itr = std::find_if(display->layers.begin(), display->layers.end(),
[&](const Layer& layer) { return layer.id == layer_id; });
if (itr == display->layers.end()) {
return nullptr;
}
return &*itr;
return display->FindLayer(layer_id);
}
void NVFlinger::Compose() {
for (auto& display : displays) {
// Trigger vsync for this display at the end of drawing
SCOPE_EXIT({ display.vsync_event.writable->Signal(); });
SCOPE_EXIT({ display.SignalVSyncEvent(); });
// Don't do anything for displays without layers.
if (display.layers.empty())
if (!display.HasLayers())
continue;
// TODO(Subv): Support more than 1 layer.
ASSERT_MSG(display.layers.size() == 1, "Max 1 layer per display is supported");
Layer& layer = display.layers[0];
auto& buffer_queue = layer.buffer_queue;
VI::Layer& layer = display.GetLayer(0);
auto& buffer_queue = layer.GetBufferQueue();
// Search for a queued buffer and acquire it
auto buffer = buffer_queue->AcquireBuffer();
auto buffer = buffer_queue.AcquireBuffer();
MicroProfileFlip();
@@ -203,19 +202,8 @@ void NVFlinger::Compose() {
igbp_buffer.width, igbp_buffer.height, igbp_buffer.stride,
buffer->get().transform, buffer->get().crop_rect);
buffer_queue->ReleaseBuffer(buffer->get().slot);
buffer_queue.ReleaseBuffer(buffer->get().slot);
}
}
Layer::Layer(u64 id, std::shared_ptr<BufferQueue> queue) : id(id), buffer_queue(std::move(queue)) {}
Layer::~Layer() = default;
Display::Display(u64 id, std::string name) : id(id), name(std::move(name)) {
auto& kernel = Core::System::GetInstance().Kernel();
vsync_event = Kernel::WritableEvent::CreateEventPair(kernel, Kernel::ResetType::Sticky,
fmt::format("Display VSync Event {}", id));
}
Display::~Display() = default;
} // namespace Service::NVFlinger

View File

@@ -4,7 +4,6 @@
#pragma once
#include <array>
#include <memory>
#include <optional>
#include <string>
@@ -15,8 +14,9 @@
#include "core/hle/kernel/object.h"
namespace Core::Timing {
class CoreTiming;
struct EventType;
}
} // namespace Core::Timing
namespace Kernel {
class ReadableEvent;
@@ -25,34 +25,20 @@ class WritableEvent;
namespace Service::Nvidia {
class Module;
}
} // namespace Service::Nvidia
namespace Service::VI {
class Display;
class Layer;
} // namespace Service::VI
namespace Service::NVFlinger {
class BufferQueue;
struct Layer {
Layer(u64 id, std::shared_ptr<BufferQueue> queue);
~Layer();
u64 id;
std::shared_ptr<BufferQueue> buffer_queue;
};
struct Display {
Display(u64 id, std::string name);
~Display();
u64 id;
std::string name;
std::vector<Layer> layers;
Kernel::EventPair vsync_event;
};
class NVFlinger final {
public:
NVFlinger();
explicit NVFlinger(Core::Timing::CoreTiming& core_timing);
~NVFlinger();
/// Sets the NVDrv module instance to use to send buffers to the GPU.
@@ -79,7 +65,10 @@ public:
Kernel::SharedPtr<Kernel::ReadableEvent> FindVsyncEvent(u64 display_id) const;
/// Obtains a buffer queue identified by the ID.
std::shared_ptr<BufferQueue> FindBufferQueue(u32 id) const;
BufferQueue& FindBufferQueue(u32 id);
/// Obtains a buffer queue identified by the ID.
const BufferQueue& FindBufferQueue(u32 id) const;
/// Performs a composition request to the emulated nvidia GPU and triggers the vsync events when
/// finished.
@@ -87,27 +76,21 @@ public:
private:
/// Finds the display identified by the specified ID.
Display* FindDisplay(u64 display_id);
VI::Display* FindDisplay(u64 display_id);
/// Finds the display identified by the specified ID.
const Display* FindDisplay(u64 display_id) const;
const VI::Display* FindDisplay(u64 display_id) const;
/// Finds the layer identified by the specified ID in the desired display.
Layer* FindLayer(u64 display_id, u64 layer_id);
VI::Layer* FindLayer(u64 display_id, u64 layer_id);
/// Finds the layer identified by the specified ID in the desired display.
const Layer* FindLayer(u64 display_id, u64 layer_id) const;
const VI::Layer* FindLayer(u64 display_id, u64 layer_id) const;
std::shared_ptr<Nvidia::Module> nvdrv;
std::array<Display, 5> displays{{
{0, "Default"},
{1, "External"},
{2, "Edid"},
{3, "Internal"},
{4, "Null"},
}};
std::vector<std::shared_ptr<BufferQueue>> buffer_queues;
std::vector<VI::Display> displays;
std::vector<BufferQueue> buffer_queues;
/// Id to use for the next layer that is created, this counter is shared among all displays.
u64 next_layer_id = 1;
@@ -117,6 +100,9 @@ private:
/// Event that handles screen composition.
Core::Timing::EventType* composition_event;
/// Core timing instance for registering/unregistering the composition event.
Core::Timing::CoreTiming& core_timing;
};
} // namespace Service::NVFlinger

View File

@@ -194,10 +194,11 @@ ResultCode ServiceFrameworkBase::HandleSyncRequest(Kernel::HLERequestContext& co
// Module interface
/// Initialize ServiceManager
void Init(std::shared_ptr<SM::ServiceManager>& sm, FileSys::VfsFilesystem& vfs) {
void Init(std::shared_ptr<SM::ServiceManager>& sm, Core::System& system,
FileSys::VfsFilesystem& vfs) {
// NVFlinger needs to be accessed by several services like Vi and AppletOE so we instantiate it
// here and pass it into the respective InstallInterfaces functions.
auto nv_flinger = std::make_shared<NVFlinger::NVFlinger>();
auto nv_flinger = std::make_shared<NVFlinger::NVFlinger>(system.CoreTiming());
SM::ServiceManager::InstallInterfaces(sm);

View File

@@ -14,6 +14,14 @@
////////////////////////////////////////////////////////////////////////////////////////////////////
// Namespace Service
namespace Core {
class System;
}
namespace FileSys {
class VfsFilesystem;
}
namespace Kernel {
class ClientPort;
class ServerPort;
@@ -21,10 +29,6 @@ class ServerSession;
class HLERequestContext;
} // namespace Kernel
namespace FileSys {
class VfsFilesystem;
}
namespace Service {
namespace SM {
@@ -178,7 +182,8 @@ private:
};
/// Initialize ServiceManager
void Init(std::shared_ptr<SM::ServiceManager>& sm, FileSys::VfsFilesystem& vfs);
void Init(std::shared_ptr<SM::ServiceManager>& sm, Core::System& system,
FileSys::VfsFilesystem& vfs);
/// Shutdown ServiceManager
void Shutdown();

View File

@@ -5,6 +5,7 @@
#include <chrono>
#include <ctime>
#include "common/logging/log.h"
#include "core/core.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/hle/ipc_helpers.h"
@@ -106,8 +107,9 @@ private:
void GetCurrentTimePoint(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_Time, "called");
const auto& core_timing = Core::System::GetInstance().CoreTiming();
const SteadyClockTimePoint steady_clock_time_point{
Core::Timing::cyclesToMs(Core::Timing::GetTicks()) / 1000};
Core::Timing::cyclesToMs(core_timing.GetTicks()) / 1000};
IPC::ResponseBuilder rb{ctx, (sizeof(SteadyClockTimePoint) / 4) + 2};
rb.Push(RESULT_SUCCESS);
rb.PushRaw(steady_clock_time_point);
@@ -281,8 +283,9 @@ void Module::Interface::GetClockSnapshot(Kernel::HLERequestContext& ctx) {
return;
}
const auto& core_timing = Core::System::GetInstance().CoreTiming();
const SteadyClockTimePoint steady_clock_time_point{
Core::Timing::cyclesToMs(Core::Timing::GetTicks()) / 1000, {}};
Core::Timing::cyclesToMs(core_timing.GetTicks()) / 1000, {}};
CalendarTime calendar_time{};
calendar_time.year = tm->tm_year + 1900;

View File

@@ -0,0 +1,71 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <algorithm>
#include <utility>
#include <fmt/format.h>
#include "common/assert.h"
#include "core/core.h"
#include "core/hle/kernel/readable_event.h"
#include "core/hle/service/vi/display/vi_display.h"
#include "core/hle/service/vi/layer/vi_layer.h"
namespace Service::VI {
Display::Display(u64 id, std::string name) : id{id}, name{std::move(name)} {
auto& kernel = Core::System::GetInstance().Kernel();
vsync_event = Kernel::WritableEvent::CreateEventPair(kernel, Kernel::ResetType::Sticky,
fmt::format("Display VSync Event {}", id));
}
Display::~Display() = default;
Layer& Display::GetLayer(std::size_t index) {
return layers.at(index);
}
const Layer& Display::GetLayer(std::size_t index) const {
return layers.at(index);
}
Kernel::SharedPtr<Kernel::ReadableEvent> Display::GetVSyncEvent() const {
return vsync_event.readable;
}
void Display::SignalVSyncEvent() {
vsync_event.writable->Signal();
}
void Display::CreateLayer(u64 id, NVFlinger::BufferQueue& buffer_queue) {
// TODO(Subv): Support more than 1 layer.
ASSERT_MSG(layers.empty(), "Only one layer is supported per display at the moment");
layers.emplace_back(id, buffer_queue);
}
Layer* Display::FindLayer(u64 id) {
const auto itr = std::find_if(layers.begin(), layers.end(),
[id](const VI::Layer& layer) { return layer.GetID() == id; });
if (itr == layers.end()) {
return nullptr;
}
return &*itr;
}
const Layer* Display::FindLayer(u64 id) const {
const auto itr = std::find_if(layers.begin(), layers.end(),
[id](const VI::Layer& layer) { return layer.GetID() == id; });
if (itr == layers.end()) {
return nullptr;
}
return &*itr;
}
} // namespace Service::VI

View File

@@ -0,0 +1,98 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <string>
#include <vector>
#include "common/common_types.h"
#include "core/hle/kernel/writable_event.h"
namespace Service::NVFlinger {
class BufferQueue;
}
namespace Service::VI {
class Layer;
/// Represents a single display type
class Display {
public:
/// Constructs a display with a given unique ID and name.
///
/// @param id The unique ID for this display.
/// @param name The name for this display.
///
Display(u64 id, std::string name);
~Display();
Display(const Display&) = delete;
Display& operator=(const Display&) = delete;
Display(Display&&) = default;
Display& operator=(Display&&) = default;
/// Gets the unique ID assigned to this display.
u64 GetID() const {
return id;
}
/// Gets the name of this display
const std::string& GetName() const {
return name;
}
/// Whether or not this display has any layers added to it.
bool HasLayers() const {
return !layers.empty();
}
/// Gets a layer for this display based off an index.
Layer& GetLayer(std::size_t index);
/// Gets a layer for this display based off an index.
const Layer& GetLayer(std::size_t index) const;
/// Gets the readable vsync event.
Kernel::SharedPtr<Kernel::ReadableEvent> GetVSyncEvent() const;
/// Signals the internal vsync event.
void SignalVSyncEvent();
/// Creates and adds a layer to this display with the given ID.
///
/// @param id The ID to assign to the created layer.
/// @param buffer_queue The buffer queue for the layer instance to use.
///
void CreateLayer(u64 id, NVFlinger::BufferQueue& buffer_queue);
/// Attempts to find a layer with the given ID.
///
/// @param id The layer ID.
///
/// @returns If found, the Layer instance with the given ID.
/// If not found, then nullptr is returned.
///
Layer* FindLayer(u64 id);
/// Attempts to find a layer with the given ID.
///
/// @param id The layer ID.
///
/// @returns If found, the Layer instance with the given ID.
/// If not found, then nullptr is returned.
///
const Layer* FindLayer(u64 id) const;
private:
u64 id;
std::string name;
std::vector<Layer> layers;
Kernel::EventPair vsync_event;
};
} // namespace Service::VI
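A hypothetical usage sketch of the new VI::Display surface (the construction context and BufferQueue arguments are assumptions; in the actual change NVFlinger owns both objects):

NVFlinger::BufferQueue queue{/*id=*/0, /*layer_id=*/1};
VI::Display display{0, "Default"};
display.CreateLayer(1, queue);
if (auto* layer = display.FindLayer(1)) {
    layer->GetBufferQueue();             // reference back to `queue`
}
display.SignalVSyncEvent();              // service side signals the writable half...
auto readable = display.GetVSyncEvent(); // ...guests wait on the readable half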

View File

@@ -0,0 +1,13 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/hle/service/vi/layer/vi_layer.h"
namespace Service::VI {
Layer::Layer(u64 id, NVFlinger::BufferQueue& queue) : id{id}, buffer_queue{queue} {}
Layer::~Layer() = default;
} // namespace Service::VI

View File

@@ -0,0 +1,52 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "common/common_types.h"
namespace Service::NVFlinger {
class BufferQueue;
}
namespace Service::VI {
/// Represents a single display layer.
class Layer {
public:
/// Constructs a layer with a given ID and buffer queue.
///
/// @param id The ID to assign to this layer.
/// @param queue The buffer queue for this layer to use.
///
Layer(u64 id, NVFlinger::BufferQueue& queue);
~Layer();
Layer(const Layer&) = delete;
Layer& operator=(const Layer&) = delete;
Layer(Layer&&) = default;
Layer& operator=(Layer&&) = delete;
/// Gets the ID for this layer.
u64 GetID() const {
return id;
}
/// Gets a reference to the buffer queue this layer is using.
NVFlinger::BufferQueue& GetBufferQueue() {
return buffer_queue;
}
/// Gets a const reference to the buffer queue this layer is using.
const NVFlinger::BufferQueue& GetBufferQueue() const {
return buffer_queue;
}
private:
u64 id;
NVFlinger::BufferQueue& buffer_queue;
};
} // namespace Service::VI

View File

@@ -420,7 +420,7 @@ public:
u32_le fence_is_valid;
std::array<Fence, 2> fences;
MathUtil::Rectangle<int> GetCropRect() const {
Common::Rectangle<int> GetCropRect() const {
return {crop_left, crop_top, crop_right, crop_bottom};
}
};
@@ -525,7 +525,7 @@ private:
LOG_DEBUG(Service_VI, "called. id=0x{:08X} transaction={:X}, flags=0x{:08X}", id,
static_cast<u32>(transaction), flags);
auto buffer_queue = nv_flinger->FindBufferQueue(id);
auto& buffer_queue = nv_flinger->FindBufferQueue(id);
if (transaction == TransactionId::Connect) {
IGBPConnectRequestParcel request{ctx.ReadBuffer()};
@@ -538,7 +538,7 @@ private:
} else if (transaction == TransactionId::SetPreallocatedBuffer) {
IGBPSetPreallocatedBufferRequestParcel request{ctx.ReadBuffer()};
buffer_queue->SetPreallocatedBuffer(request.data.slot, request.buffer);
buffer_queue.SetPreallocatedBuffer(request.data.slot, request.buffer);
IGBPSetPreallocatedBufferResponseParcel response{};
ctx.WriteBuffer(response.Serialize());
@@ -546,7 +546,7 @@ private:
IGBPDequeueBufferRequestParcel request{ctx.ReadBuffer()};
const u32 width{request.data.width};
const u32 height{request.data.height};
std::optional<u32> slot = buffer_queue->DequeueBuffer(width, height);
std::optional<u32> slot = buffer_queue.DequeueBuffer(width, height);
if (slot) {
// Buffer is available
@@ -559,8 +559,8 @@ private:
[=](Kernel::SharedPtr<Kernel::Thread> thread, Kernel::HLERequestContext& ctx,
Kernel::ThreadWakeupReason reason) {
// Repeat TransactParcel DequeueBuffer when a buffer is available
auto buffer_queue = nv_flinger->FindBufferQueue(id);
std::optional<u32> slot = buffer_queue->DequeueBuffer(width, height);
auto& buffer_queue = nv_flinger->FindBufferQueue(id);
std::optional<u32> slot = buffer_queue.DequeueBuffer(width, height);
ASSERT_MSG(slot != std::nullopt, "Could not dequeue buffer.");
IGBPDequeueBufferResponseParcel response{*slot};
@@ -568,28 +568,28 @@ private:
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
},
buffer_queue->GetWritableBufferWaitEvent());
buffer_queue.GetWritableBufferWaitEvent());
}
} else if (transaction == TransactionId::RequestBuffer) {
IGBPRequestBufferRequestParcel request{ctx.ReadBuffer()};
auto& buffer = buffer_queue->RequestBuffer(request.slot);
auto& buffer = buffer_queue.RequestBuffer(request.slot);
IGBPRequestBufferResponseParcel response{buffer};
ctx.WriteBuffer(response.Serialize());
} else if (transaction == TransactionId::QueueBuffer) {
IGBPQueueBufferRequestParcel request{ctx.ReadBuffer()};
buffer_queue->QueueBuffer(request.data.slot, request.data.transform,
request.data.GetCropRect());
buffer_queue.QueueBuffer(request.data.slot, request.data.transform,
request.data.GetCropRect());
IGBPQueueBufferResponseParcel response{1280, 720};
ctx.WriteBuffer(response.Serialize());
} else if (transaction == TransactionId::Query) {
IGBPQueryRequestParcel request{ctx.ReadBuffer()};
u32 value =
buffer_queue->Query(static_cast<NVFlinger::BufferQueue::QueryType>(request.type));
const u32 value =
buffer_queue.Query(static_cast<NVFlinger::BufferQueue::QueryType>(request.type));
IGBPQueryResponseParcel response{value};
ctx.WriteBuffer(response.Serialize());
@@ -629,12 +629,12 @@ private:
LOG_WARNING(Service_VI, "(STUBBED) called id={}, unknown={:08X}", id, unknown);
const auto buffer_queue = nv_flinger->FindBufferQueue(id);
const auto& buffer_queue = nv_flinger->FindBufferQueue(id);
// TODO(Subv): Find out what this actually is.
IPC::ResponseBuilder rb{ctx, 2, 1};
rb.Push(RESULT_SUCCESS);
rb.PushCopyObjects(buffer_queue->GetBufferWaitEvent());
rb.PushCopyObjects(buffer_queue.GetBufferWaitEvent());
}
std::shared_ptr<NVFlinger::NVFlinger> nv_flinger;
@@ -752,6 +752,7 @@ public:
{1102, nullptr, "GetDisplayResolution"},
{2010, &IManagerDisplayService::CreateManagedLayer, "CreateManagedLayer"},
{2011, nullptr, "DestroyManagedLayer"},
{2012, nullptr, "CreateStrayLayer"},
{2050, nullptr, "CreateIndirectLayer"},
{2051, nullptr, "DestroyIndirectLayer"},
{2052, nullptr, "CreateIndirectProducerEndPoint"},

View File

@@ -71,15 +71,20 @@ static void MapPages(PageTable& page_table, VAddr base, u64 size, u8* memory, Pa
FlushMode::FlushAndInvalidate);
VAddr end = base + size;
while (base != end) {
ASSERT_MSG(base < page_table.pointers.size(), "out of range mapping at {:016X}", base);
ASSERT_MSG(end <= page_table.pointers.size(), "out of range mapping at {:016X}",
base + page_table.pointers.size());
page_table.attributes[base] = type;
page_table.pointers[base] = memory;
std::fill(page_table.attributes.begin() + base, page_table.attributes.begin() + end, type);
base += 1;
if (memory != nullptr)
if (memory == nullptr) {
std::fill(page_table.pointers.begin() + base, page_table.pointers.begin() + end, memory);
} else {
while (base != end) {
page_table.pointers[base] = memory;
base += 1;
memory += PAGE_SIZE;
}
}
}
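The interleaved old/new lines above are easier to read reassembled. A reconstruction of the new MapPages body as this hunk suggests it (assumed, pieced together from the added lines):

ASSERT_MSG(end <= page_table.pointers.size(), "out of range mapping at {:016X}",
           base + page_table.pointers.size());
std::fill(page_table.attributes.begin() + base, page_table.attributes.begin() + end, type);
if (memory == nullptr) {
    // Unbacked ranges map every page to the same null host pointer in one pass.
    std::fill(page_table.pointers.begin() + base, page_table.pointers.begin() + end, memory);
} else {
    // Backed ranges still need a loop: the host pointer advances one PAGE_SIZE per page.
    while (base != end) {
        page_table.pointers[base] = memory;
        base += 1;
        memory += PAGE_SIZE;
    }
}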
@@ -166,9 +171,6 @@ T Read(const VAddr vaddr) {
return value;
}
// The memory access might do an MMIO or cached access, so we have to lock the HLE kernel state
std::lock_guard<std::recursive_mutex> lock(HLE::g_hle_lock);
PageType type = current_page_table->attributes[vaddr >> PAGE_BITS];
switch (type) {
case PageType::Unmapped:
@@ -199,9 +201,6 @@ void Write(const VAddr vaddr, const T data) {
return;
}
// The memory access might do an MMIO or cached access, so we have to lock the HLE kernel state
std::lock_guard<std::recursive_mutex> lock(HLE::g_hle_lock);
PageType type = current_page_table->attributes[vaddr >> PAGE_BITS];
switch (type) {
case PageType::Unmapped:

View File

@@ -32,12 +32,12 @@ public:
}
void BeginTilt(int x, int y) {
mouse_origin = Math::MakeVec(x, y);
mouse_origin = Common::MakeVec(x, y);
is_tilting = true;
}
void Tilt(int x, int y) {
auto mouse_move = Math::MakeVec(x, y) - mouse_origin;
auto mouse_move = Common::MakeVec(x, y) - mouse_origin;
if (is_tilting) {
std::lock_guard<std::mutex> guard(tilt_mutex);
if (mouse_move.x == 0 && mouse_move.y == 0) {
@@ -45,7 +45,7 @@ public:
} else {
tilt_direction = mouse_move.Cast<float>();
tilt_angle =
std::clamp(tilt_direction.Normalize() * sensitivity, 0.0f, MathUtil::PI * 0.5f);
std::clamp(tilt_direction.Normalize() * sensitivity, 0.0f, Common::PI * 0.5f);
}
}
}
@@ -56,7 +56,7 @@ public:
is_tilting = false;
}
std::tuple<Math::Vec3<float>, Math::Vec3<float>> GetStatus() {
std::tuple<Common::Vec3<float>, Common::Vec3<float>> GetStatus() {
std::lock_guard<std::mutex> guard(status_mutex);
return status;
}
@@ -66,17 +66,17 @@ private:
const std::chrono::steady_clock::duration update_duration;
const float sensitivity;
Math::Vec2<int> mouse_origin;
Common::Vec2<int> mouse_origin;
std::mutex tilt_mutex;
Math::Vec2<float> tilt_direction;
Common::Vec2<float> tilt_direction;
float tilt_angle = 0;
bool is_tilting = false;
Common::Event shutdown_event;
std::tuple<Math::Vec3<float>, Math::Vec3<float>> status;
std::tuple<Common::Vec3<float>, Common::Vec3<float>> status;
std::mutex status_mutex;
// Note: always keep the thread declaration at the end so that other objects are initialized
@@ -85,8 +85,8 @@ private:
void MotionEmuThread() {
auto update_time = std::chrono::steady_clock::now();
Math::Quaternion<float> q = MakeQuaternion(Math::Vec3<float>(), 0);
Math::Quaternion<float> old_q;
Common::Quaternion<float> q = Common::MakeQuaternion(Common::Vec3<float>(), 0);
Common::Quaternion<float> old_q;
while (!shutdown_event.WaitUntil(update_time)) {
update_time += update_duration;
@@ -96,18 +96,18 @@ private:
std::lock_guard<std::mutex> guard(tilt_mutex);
// Find the quaternion describing current 3DS tilting
q = MakeQuaternion(Math::MakeVec(-tilt_direction.y, 0.0f, tilt_direction.x),
tilt_angle);
q = Common::MakeQuaternion(
Common::MakeVec(-tilt_direction.y, 0.0f, tilt_direction.x), tilt_angle);
}
auto inv_q = q.Inverse();
// Set the gravity vector in world space
auto gravity = Math::MakeVec(0.0f, -1.0f, 0.0f);
auto gravity = Common::MakeVec(0.0f, -1.0f, 0.0f);
// Find the angular rate vector in world space
auto angular_rate = ((q - old_q) * inv_q).xyz * 2;
angular_rate *= 1000 / update_millisecond / MathUtil::PI * 180;
angular_rate *= 1000 / update_millisecond / Common::PI * 180;
// Transform the two vectors from world space to 3DS space
gravity = QuaternionRotate(inv_q, gravity);
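A context note on the math above (an editorial gloss, not part of the diff):

// With q the current orientation and old_q the previous sample taken
// update_millisecond apart, ((q - old_q) * inv_q).xyz * 2 is the discrete form
// of the identity  omega ~= 2 * (dq/dt) * q^-1  for angular velocity; the
// 1000 / update_millisecond / PI * 180 factor then converts radians per sample
// into degrees per second.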
@@ -131,7 +131,7 @@ public:
device = std::make_shared<MotionEmuDevice>(update_millisecond, sensitivity);
}
std::tuple<Math::Vec3<float>, Math::Vec3<float>> GetStatus() const override {
std::tuple<Common::Vec3<float>, Common::Vec3<float>> GetStatus() const override {
return device->GetStatus();
}

View File

@@ -28,100 +28,103 @@ void CallbackTemplate(u64 userdata, s64 cycles_late) {
REQUIRE(lateness == cycles_late);
}
class ScopeInit final {
public:
struct ScopeInit final {
ScopeInit() {
Core::Timing::Init();
core_timing.Initialize();
}
~ScopeInit() {
Core::Timing::Shutdown();
core_timing.Shutdown();
}
Core::Timing::CoreTiming core_timing;
};
static void AdvanceAndCheck(u32 idx, int downcount, int expected_lateness = 0,
int cpu_downcount = 0) {
static void AdvanceAndCheck(Core::Timing::CoreTiming& core_timing, u32 idx, int downcount,
int expected_lateness = 0, int cpu_downcount = 0) {
callbacks_ran_flags = 0;
expected_callback = CB_IDS[idx];
lateness = expected_lateness;
// Pretend we executed X cycles of instructions.
Core::Timing::AddTicks(Core::Timing::GetDowncount() - cpu_downcount);
Core::Timing::Advance();
core_timing.AddTicks(core_timing.GetDowncount() - cpu_downcount);
core_timing.Advance();
REQUIRE(decltype(callbacks_ran_flags)().set(idx) == callbacks_ran_flags);
REQUIRE(downcount == Core::Timing::GetDowncount());
REQUIRE(downcount == core_timing.GetDowncount());
}
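For orientation, a gloss on the helper above (semantics assumed from the surrounding test code):

// AddTicks(GetDowncount() - cpu_downcount) pretends the CPU consumed its whole
// timeslice (minus cpu_downcount); Advance() then dispatches whatever is due.
// The REQUIRE lines verify that exactly the expected callback fired and that
// the next downcount matches the remaining time to the following event.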
TEST_CASE("CoreTiming[BasicOrder]", "[core]") {
ScopeInit guard;
auto& core_timing = guard.core_timing;
Core::Timing::EventType* cb_a = Core::Timing::RegisterEvent("callbackA", CallbackTemplate<0>);
Core::Timing::EventType* cb_b = Core::Timing::RegisterEvent("callbackB", CallbackTemplate<1>);
Core::Timing::EventType* cb_c = Core::Timing::RegisterEvent("callbackC", CallbackTemplate<2>);
Core::Timing::EventType* cb_d = Core::Timing::RegisterEvent("callbackD", CallbackTemplate<3>);
Core::Timing::EventType* cb_e = Core::Timing::RegisterEvent("callbackE", CallbackTemplate<4>);
Core::Timing::EventType* cb_a = core_timing.RegisterEvent("callbackA", CallbackTemplate<0>);
Core::Timing::EventType* cb_b = core_timing.RegisterEvent("callbackB", CallbackTemplate<1>);
Core::Timing::EventType* cb_c = core_timing.RegisterEvent("callbackC", CallbackTemplate<2>);
Core::Timing::EventType* cb_d = core_timing.RegisterEvent("callbackD", CallbackTemplate<3>);
Core::Timing::EventType* cb_e = core_timing.RegisterEvent("callbackE", CallbackTemplate<4>);
// Enter slice 0
Core::Timing::Advance();
core_timing.Advance();
// D -> B -> C -> A -> E
Core::Timing::ScheduleEvent(1000, cb_a, CB_IDS[0]);
REQUIRE(1000 == Core::Timing::GetDowncount());
Core::Timing::ScheduleEvent(500, cb_b, CB_IDS[1]);
REQUIRE(500 == Core::Timing::GetDowncount());
Core::Timing::ScheduleEvent(800, cb_c, CB_IDS[2]);
REQUIRE(500 == Core::Timing::GetDowncount());
Core::Timing::ScheduleEvent(100, cb_d, CB_IDS[3]);
REQUIRE(100 == Core::Timing::GetDowncount());
Core::Timing::ScheduleEvent(1200, cb_e, CB_IDS[4]);
REQUIRE(100 == Core::Timing::GetDowncount());
core_timing.ScheduleEvent(1000, cb_a, CB_IDS[0]);
REQUIRE(1000 == core_timing.GetDowncount());
core_timing.ScheduleEvent(500, cb_b, CB_IDS[1]);
REQUIRE(500 == core_timing.GetDowncount());
core_timing.ScheduleEvent(800, cb_c, CB_IDS[2]);
REQUIRE(500 == core_timing.GetDowncount());
core_timing.ScheduleEvent(100, cb_d, CB_IDS[3]);
REQUIRE(100 == core_timing.GetDowncount());
core_timing.ScheduleEvent(1200, cb_e, CB_IDS[4]);
REQUIRE(100 == core_timing.GetDowncount());
AdvanceAndCheck(3, 400);
AdvanceAndCheck(1, 300);
AdvanceAndCheck(2, 200);
AdvanceAndCheck(0, 200);
AdvanceAndCheck(4, MAX_SLICE_LENGTH);
AdvanceAndCheck(core_timing, 3, 400);
AdvanceAndCheck(core_timing, 1, 300);
AdvanceAndCheck(core_timing, 2, 200);
AdvanceAndCheck(core_timing, 0, 200);
AdvanceAndCheck(core_timing, 4, MAX_SLICE_LENGTH);
}
TEST_CASE("CoreTiming[Threadsave]", "[core]") {
ScopeInit guard;
auto& core_timing = guard.core_timing;
Core::Timing::EventType* cb_a = Core::Timing::RegisterEvent("callbackA", CallbackTemplate<0>);
Core::Timing::EventType* cb_b = Core::Timing::RegisterEvent("callbackB", CallbackTemplate<1>);
Core::Timing::EventType* cb_c = Core::Timing::RegisterEvent("callbackC", CallbackTemplate<2>);
Core::Timing::EventType* cb_d = Core::Timing::RegisterEvent("callbackD", CallbackTemplate<3>);
Core::Timing::EventType* cb_e = Core::Timing::RegisterEvent("callbackE", CallbackTemplate<4>);
Core::Timing::EventType* cb_a = core_timing.RegisterEvent("callbackA", CallbackTemplate<0>);
Core::Timing::EventType* cb_b = core_timing.RegisterEvent("callbackB", CallbackTemplate<1>);
Core::Timing::EventType* cb_c = core_timing.RegisterEvent("callbackC", CallbackTemplate<2>);
Core::Timing::EventType* cb_d = core_timing.RegisterEvent("callbackD", CallbackTemplate<3>);
Core::Timing::EventType* cb_e = core_timing.RegisterEvent("callbackE", CallbackTemplate<4>);
// Enter slice 0
Core::Timing::Advance();
core_timing.Advance();
// D -> B -> C -> A -> E
Core::Timing::ScheduleEventThreadsafe(1000, cb_a, CB_IDS[0]);
core_timing.ScheduleEventThreadsafe(1000, cb_a, CB_IDS[0]);
// Manually force since ScheduleEventThreadsafe doesn't call it
Core::Timing::ForceExceptionCheck(1000);
REQUIRE(1000 == Core::Timing::GetDowncount());
Core::Timing::ScheduleEventThreadsafe(500, cb_b, CB_IDS[1]);
core_timing.ForceExceptionCheck(1000);
REQUIRE(1000 == core_timing.GetDowncount());
core_timing.ScheduleEventThreadsafe(500, cb_b, CB_IDS[1]);
// Manually force since ScheduleEventThreadsafe doesn't call it
Core::Timing::ForceExceptionCheck(500);
REQUIRE(500 == Core::Timing::GetDowncount());
Core::Timing::ScheduleEventThreadsafe(800, cb_c, CB_IDS[2]);
core_timing.ForceExceptionCheck(500);
REQUIRE(500 == core_timing.GetDowncount());
core_timing.ScheduleEventThreadsafe(800, cb_c, CB_IDS[2]);
// Manually force since ScheduleEventThreadsafe doesn't call it
Core::Timing::ForceExceptionCheck(800);
REQUIRE(500 == Core::Timing::GetDowncount());
Core::Timing::ScheduleEventThreadsafe(100, cb_d, CB_IDS[3]);
core_timing.ForceExceptionCheck(800);
REQUIRE(500 == core_timing.GetDowncount());
core_timing.ScheduleEventThreadsafe(100, cb_d, CB_IDS[3]);
// Manually force since ScheduleEventThreadsafe doesn't call it
Core::Timing::ForceExceptionCheck(100);
REQUIRE(100 == Core::Timing::GetDowncount());
Core::Timing::ScheduleEventThreadsafe(1200, cb_e, CB_IDS[4]);
core_timing.ForceExceptionCheck(100);
REQUIRE(100 == core_timing.GetDowncount());
core_timing.ScheduleEventThreadsafe(1200, cb_e, CB_IDS[4]);
// Manually force since ScheduleEventThreadsafe doesn't call it
Core::Timing::ForceExceptionCheck(1200);
REQUIRE(100 == Core::Timing::GetDowncount());
core_timing.ForceExceptionCheck(1200);
REQUIRE(100 == core_timing.GetDowncount());
AdvanceAndCheck(3, 400);
AdvanceAndCheck(1, 300);
AdvanceAndCheck(2, 200);
AdvanceAndCheck(0, 200);
AdvanceAndCheck(4, MAX_SLICE_LENGTH);
AdvanceAndCheck(core_timing, 3, 400);
AdvanceAndCheck(core_timing, 1, 300);
AdvanceAndCheck(core_timing, 2, 200);
AdvanceAndCheck(core_timing, 0, 200);
AdvanceAndCheck(core_timing, 4, MAX_SLICE_LENGTH);
}
namespace SharedSlotTest {
@@ -142,59 +145,62 @@ TEST_CASE("CoreTiming[SharedSlot]", "[core]") {
using namespace SharedSlotTest;
ScopeInit guard;
auto& core_timing = guard.core_timing;
Core::Timing::EventType* cb_a = Core::Timing::RegisterEvent("callbackA", FifoCallback<0>);
Core::Timing::EventType* cb_b = Core::Timing::RegisterEvent("callbackB", FifoCallback<1>);
Core::Timing::EventType* cb_c = Core::Timing::RegisterEvent("callbackC", FifoCallback<2>);
Core::Timing::EventType* cb_d = Core::Timing::RegisterEvent("callbackD", FifoCallback<3>);
Core::Timing::EventType* cb_e = Core::Timing::RegisterEvent("callbackE", FifoCallback<4>);
Core::Timing::EventType* cb_a = core_timing.RegisterEvent("callbackA", FifoCallback<0>);
Core::Timing::EventType* cb_b = core_timing.RegisterEvent("callbackB", FifoCallback<1>);
Core::Timing::EventType* cb_c = core_timing.RegisterEvent("callbackC", FifoCallback<2>);
Core::Timing::EventType* cb_d = core_timing.RegisterEvent("callbackD", FifoCallback<3>);
Core::Timing::EventType* cb_e = core_timing.RegisterEvent("callbackE", FifoCallback<4>);
Core::Timing::ScheduleEvent(1000, cb_a, CB_IDS[0]);
Core::Timing::ScheduleEvent(1000, cb_b, CB_IDS[1]);
Core::Timing::ScheduleEvent(1000, cb_c, CB_IDS[2]);
Core::Timing::ScheduleEvent(1000, cb_d, CB_IDS[3]);
Core::Timing::ScheduleEvent(1000, cb_e, CB_IDS[4]);
core_timing.ScheduleEvent(1000, cb_a, CB_IDS[0]);
core_timing.ScheduleEvent(1000, cb_b, CB_IDS[1]);
core_timing.ScheduleEvent(1000, cb_c, CB_IDS[2]);
core_timing.ScheduleEvent(1000, cb_d, CB_IDS[3]);
core_timing.ScheduleEvent(1000, cb_e, CB_IDS[4]);
// Enter slice 0
Core::Timing::Advance();
REQUIRE(1000 == Core::Timing::GetDowncount());
core_timing.Advance();
REQUIRE(1000 == core_timing.GetDowncount());
callbacks_ran_flags = 0;
counter = 0;
lateness = 0;
Core::Timing::AddTicks(Core::Timing::GetDowncount());
Core::Timing::Advance();
REQUIRE(MAX_SLICE_LENGTH == Core::Timing::GetDowncount());
core_timing.AddTicks(core_timing.GetDowncount());
core_timing.Advance();
REQUIRE(MAX_SLICE_LENGTH == core_timing.GetDowncount());
REQUIRE(0x1FULL == callbacks_ran_flags.to_ullong());
}
TEST_CASE("Core::Timing[PredictableLateness]", "[core]") {
ScopeInit guard;
auto& core_timing = guard.core_timing;
Core::Timing::EventType* cb_a = Core::Timing::RegisterEvent("callbackA", CallbackTemplate<0>);
Core::Timing::EventType* cb_b = Core::Timing::RegisterEvent("callbackB", CallbackTemplate<1>);
Core::Timing::EventType* cb_a = core_timing.RegisterEvent("callbackA", CallbackTemplate<0>);
Core::Timing::EventType* cb_b = core_timing.RegisterEvent("callbackB", CallbackTemplate<1>);
// Enter slice 0
Core::Timing::Advance();
core_timing.Advance();
Core::Timing::ScheduleEvent(100, cb_a, CB_IDS[0]);
Core::Timing::ScheduleEvent(200, cb_b, CB_IDS[1]);
core_timing.ScheduleEvent(100, cb_a, CB_IDS[0]);
core_timing.ScheduleEvent(200, cb_b, CB_IDS[1]);
AdvanceAndCheck(0, 90, 10, -10); // (100 - 10)
AdvanceAndCheck(1, MAX_SLICE_LENGTH, 50, -50);
AdvanceAndCheck(core_timing, 0, 90, 10, -10); // (100 - 10)
AdvanceAndCheck(core_timing, 1, MAX_SLICE_LENGTH, 50, -50);
}
namespace ChainSchedulingTest {
static int reschedules = 0;
static void RescheduleCallback(u64 userdata, s64 cycles_late) {
static void RescheduleCallback(Core::Timing::CoreTiming& core_timing, u64 userdata,
s64 cycles_late) {
--reschedules;
REQUIRE(reschedules >= 0);
REQUIRE(lateness == cycles_late);
if (reschedules > 0) {
Core::Timing::ScheduleEvent(1000, reinterpret_cast<Core::Timing::EventType*>(userdata),
userdata);
core_timing.ScheduleEvent(1000, reinterpret_cast<Core::Timing::EventType*>(userdata),
userdata);
}
}
} // namespace ChainSchedulingTest
@@ -203,36 +209,39 @@ TEST_CASE("CoreTiming[ChainScheduling]", "[core]") {
using namespace ChainSchedulingTest;
ScopeInit guard;
auto& core_timing = guard.core_timing;
Core::Timing::EventType* cb_a = Core::Timing::RegisterEvent("callbackA", CallbackTemplate<0>);
Core::Timing::EventType* cb_b = Core::Timing::RegisterEvent("callbackB", CallbackTemplate<1>);
Core::Timing::EventType* cb_c = Core::Timing::RegisterEvent("callbackC", CallbackTemplate<2>);
Core::Timing::EventType* cb_rs =
Core::Timing::RegisterEvent("callbackReschedule", RescheduleCallback);
Core::Timing::EventType* cb_a = core_timing.RegisterEvent("callbackA", CallbackTemplate<0>);
Core::Timing::EventType* cb_b = core_timing.RegisterEvent("callbackB", CallbackTemplate<1>);
Core::Timing::EventType* cb_c = core_timing.RegisterEvent("callbackC", CallbackTemplate<2>);
Core::Timing::EventType* cb_rs = core_timing.RegisterEvent(
"callbackReschedule", [&core_timing](u64 userdata, s64 cycles_late) {
RescheduleCallback(core_timing, userdata, cycles_late);
});
// Enter slice 0
Core::Timing::Advance();
core_timing.Advance();
Core::Timing::ScheduleEvent(800, cb_a, CB_IDS[0]);
Core::Timing::ScheduleEvent(1000, cb_b, CB_IDS[1]);
Core::Timing::ScheduleEvent(2200, cb_c, CB_IDS[2]);
Core::Timing::ScheduleEvent(1000, cb_rs, reinterpret_cast<u64>(cb_rs));
REQUIRE(800 == Core::Timing::GetDowncount());
core_timing.ScheduleEvent(800, cb_a, CB_IDS[0]);
core_timing.ScheduleEvent(1000, cb_b, CB_IDS[1]);
core_timing.ScheduleEvent(2200, cb_c, CB_IDS[2]);
core_timing.ScheduleEvent(1000, cb_rs, reinterpret_cast<u64>(cb_rs));
REQUIRE(800 == core_timing.GetDowncount());
reschedules = 3;
AdvanceAndCheck(0, 200); // cb_a
AdvanceAndCheck(1, 1000); // cb_b, cb_rs
AdvanceAndCheck(core_timing, 0, 200); // cb_a
AdvanceAndCheck(core_timing, 1, 1000); // cb_b, cb_rs
REQUIRE(2 == reschedules);
Core::Timing::AddTicks(Core::Timing::GetDowncount());
Core::Timing::Advance(); // cb_rs
core_timing.AddTicks(core_timing.GetDowncount());
core_timing.Advance(); // cb_rs
REQUIRE(1 == reschedules);
REQUIRE(200 == Core::Timing::GetDowncount());
REQUIRE(200 == core_timing.GetDowncount());
AdvanceAndCheck(2, 800); // cb_c
AdvanceAndCheck(core_timing, 2, 800); // cb_c
Core::Timing::AddTicks(Core::Timing::GetDowncount());
Core::Timing::Advance(); // cb_rs
core_timing.AddTicks(core_timing.GetDowncount());
core_timing.Advance(); // cb_rs
REQUIRE(0 == reschedules);
REQUIRE(MAX_SLICE_LENGTH == Core::Timing::GetDowncount());
REQUIRE(MAX_SLICE_LENGTH == core_timing.GetDowncount());
}

Some files were not shown because too many files have changed in this diff.