Compare commits

...

264 Commits

Author SHA1 Message Date
Morph1984
ec95c73a12 remove <f32>
We can remove this since it's already an f32 value
2019-09-03 23:20:19 -04:00
Morph1984
58783b8a46 explicitly represent 1 as a float (1.0f instead of 1) 2019-09-03 23:06:32 -04:00
Morph1984
b1ca56bed2 Change u32 -> f32
Volume is an f32 value. (SwIPC describes it as a u32, but it is actually an f32, as corroborated by the switchbrew docs and SetAudioDeviceOutputVolume.)

```cpp
const f32 volume = rp.Pop<f32>();
```
2019-09-03 22:30:20 -04:00
Morph1984
ba661c8d9a service/audio/audren_u: Stub IAudioDevice::GetAudioDeviceOutputVolume 2019-09-03 16:05:33 -04:00
bunnei
50b5bb44a0 Merge pull request #2765 from FernandoS27/dma-fix
MaxwellDMA: Fixes, corrections and relaxations.
2019-09-01 13:13:05 -04:00
Rodrigo Locatti
4d4f9cc104 video_core: Silence miscellaneous warnings (#2820)
* texture_cache/surface_params: Remove unused local variable

* rasterizer_interface: Add missing documentation commentary

* maxwell_dma: Remove unused rasterizer reference

* video_core/gpu: Sort member declaration order to silence -Wreorder warning

* fermi_2d: Remove unused MemoryManager reference

* video_core: Silence unused variable warnings

* buffer_cache: Silence -Wreorder warnings

* kepler_memory: Remove unused MemoryManager reference

* gl_texture_cache: Add missing override

* buffer_cache: Add missing include

* shader/decode: Remove unused variables
2019-08-30 14:08:00 -04:00
Fernando Sahmkow
67cc2d5046 Merge pull request #2819 from ReinUsesLisp/fixup-clang
gl_buffer_cache: Add missing include
2019-08-29 18:13:35 -04:00
ReinUsesLisp
878adee0a3 gl_buffer_cache: Add missing include
RasterizerInterface was considered an incomplete object by clang.
2019-08-29 22:02:52 +00:00
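A minimal sketch of the failure mode described above, using hypothetical names rather than the actual yuzu types: accessing a member through a forward-declared (incomplete) type fails to compile until the full definition is included.

```cpp
class Widget;  // forward declaration only: Widget is an incomplete type here

int Poke(Widget* w) {
    // return w->Value();  // error: member access into incomplete type 'Widget'
    return w != nullptr;
}

class Widget {  // the full definition, normally pulled in by the missing #include
public:
    int Value() const { return 42; }
};

int PokeAgain(Widget* w) {
    return w->Value();  // fine: Widget is now a complete type
}
```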
bunnei
a67c4e6e02 Merge pull request #2742 from ReinUsesLisp/fix-texture-buffers
gl_texture_cache: Miscellaneous texture buffer fixes
2019-08-29 15:59:17 -04:00
James Rowe
f0c75573b1 Revert "externals: Update FMT to 6.0.0"
This reverts commit ca4ca8a6dc.
2019-08-29 12:23:34 -06:00
Ethan
ca4ca8a6dc externals: Update FMT to 6.0.0 2019-08-29 19:37:46 +02:00
bunnei
e424615839 Merge pull request #2783 from FernandoS27/new-buffer-cache
Implement a New LLE Buffer Cache
2019-08-29 13:07:01 -04:00
bunnei
f8cc5668f8 Merge pull request #2758 from ReinUsesLisp/packed-tid
shader/decode: Implement S2R Tic
2019-08-29 12:58:43 -04:00
bunnei
680ab61327 Merge pull request #2786 from ReinUsesLisp/vote
shader_ir: Implement VOTE on Nvidia drivers
2019-08-29 12:58:10 -04:00
ReinUsesLisp
4e35177e23 shader_ir: Implement VOTE
Implement VOTE using Nvidia's intrinsics. Documentation about these can
be found here
https://developer.nvidia.com/reading-between-threads-shader-intrinsics

Instead of using portable ARB instructions I opted to use Nvidia
intrinsics because these are the closest we have to how Tegra X1
hardware renders.

To stub VOTE on non-Nvidia drivers (including nouveau) this commit
simulates a GPU with a warp size of one, returning what is meaningful
for the instruction being emulated:

* anyThreadNV(value) -> value
* allThreadsNV(value) -> value
* allThreadsEqualNV(value) -> true

ballotARB, also known as "uint64_t(activeThreadsNV())", emits

VOTE.ANY Rd, PT, PT;

on nouveau's compiler. This doesn't exactly match Nvidia's code

VOTE.ALL Rd, PT, PT;

which is emulated with activeThreadsNV() by this commit. In theory this
shouldn't really matter since .ANY, .ALL and .EQ affect the predicates
(set to PT in those cases) and not the registers.
2019-08-21 14:50:38 -03:00
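A rough sketch of the decision described above, assuming a helper such as EmitVote and a has_nv_intrinsics flag (illustrative names, not the actual yuzu decompiler API): on Nvidia drivers the GL_NV_shader_thread_group intrinsics are emitted, and elsewhere the warp is treated as having a single thread.

```cpp
#include <string>

enum class VoteOp { Any, All, Eq };

// Pick the GLSL expression for a VOTE operation. Without the Nvidia intrinsics
// we pretend the warp size is one, so the vote collapses to the predicate
// itself (or to true for the equality vote).
std::string EmitVote(VoteOp op, const std::string& pred, bool has_nv_intrinsics) {
    if (has_nv_intrinsics) {
        switch (op) {
        case VoteOp::Any: return "anyThreadNV(" + pred + ")";
        case VoteOp::All: return "allThreadsNV(" + pred + ")";
        case VoteOp::Eq:  return "allThreadsEqualNV(" + pred + ")";
        }
    }
    switch (op) {
    case VoteOp::Any:
    case VoteOp::All: return pred;    // a one-thread warp always agrees with itself
    case VoteOp::Eq:  return "true";  // every thread trivially holds the same value
    }
    return pred;
}
```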
Fernando Sahmkow
83ec2091c1 Buffer Cache: Address Feedback. 2019-08-21 12:14:27 -04:00
Fernando Sahmkow
6ce2c85047 Buffer_Cache: Implement flushing. 2019-08-21 12:14:26 -04:00
Fernando Sahmkow
de8ff8a1c6 Buffer_Cache: Implement barriers. 2019-08-21 12:14:25 -04:00
Fernando Sahmkow
286f4c446a Buffer_Cache: Optimize and track written areas. 2019-08-21 12:14:25 -04:00
Fernando Sahmkow
5f4b746a1e BufferCache: Rework mapping caching. 2019-08-21 12:14:24 -04:00
Fernando Sahmkow
86d8563314 Buffer_Cache: Fixes and optimizations. 2019-08-21 12:14:23 -04:00
Fernando Sahmkow
862bec001b Video_Core: Implement a new Buffer Cache 2019-08-21 12:14:22 -04:00
bunnei
b4a8cfbd00 Merge pull request #2748 from FernandoS27/align-memory
VM_Manager: Align allocated host physical memory to 256 bytes
2019-08-21 12:10:10 -04:00
bunnei
d654b3d82e Merge pull request #2769 from FernandoS27/commands-flush
GPU: Flush commands on every dma pusher step.
2019-08-21 10:29:56 -04:00
bunnei
dfdd20142e Merge pull request #2777 from ReinUsesLisp/hsetp2-fe3h-fix
half_set_predicate: Fix HSETP2_C constant buffer offset
2019-08-21 10:29:17 -04:00
bunnei
cedc1aab4a Merge pull request #2753 from FernandoS27/float-convert
Shader_Ir: Implement F16 Variants of F2F, F2I, I2F.
2019-08-21 10:27:57 -04:00
bunnei
74a7ce1df7 Merge pull request #2773 from lioncash/test-unused
yuzu-tester/yuzu: Remove unused variable
2019-08-21 10:27:29 -04:00
bunnei
ef584f1a3a Merge pull request #2747 from lioncash/audio
service/audren_u: Unstub ListAudioDeviceName
2019-08-18 09:08:25 -04:00
bunnei
ca61e298b3 Merge pull request #2778 from ReinUsesLisp/nop
shader_ir: Implement NOP
2019-08-18 08:51:34 -04:00
bunnei
87bbefe55f Merge pull request #2768 from ReinUsesLisp/hsetp2-fix
decode/half_set_predicate: Fix predicates
2019-08-18 08:50:54 -04:00
James Rowe
93abe1ccf3 Merge pull request #2789 from jroweboy/quickfix
Fixup! #2772 missed this one file
2019-08-16 21:47:20 -06:00
James Rowe
509734d818 Fixup! #2772 missed this one file 2019-08-16 21:24:17 -06:00
James Rowe
e2392fe46f Merge pull request #2766 from FearlessTobi/port-4849
Port citra-emu/citra#4849: "Qt: Fixed behaviour of buttons by connecting functors to correct signals"
2019-08-16 19:39:05 -06:00
James Rowe
0e9e166d85 Merge pull request #2772 from lioncash/ui
yuzu/CMakeLists: Remove qt5_wrap_ui macro usage
2019-08-16 19:37:35 -06:00
Lioncash
5980aa1e51 yuzu/CMakeLists: Remove qt5_wrap_ui macro usage
We can simply enable CMAKE_AUTOUIC and let CMake take care of handling
the UI code generation for targets.

As part of letting CMake automatically handle the header file parsing,
we must not name includes with "ui_*" unless they're related to the
output of the Qt UIC compiler. Because of this, we need to rename
ui_settings, given it would conflict with this restriction.
2019-08-09 17:54:08 -04:00
ReinUsesLisp
2ff8044806 shader_ir: Implement NOP 2019-08-04 03:02:55 -03:00
ReinUsesLisp
ec0da3ef64 half_set_predicate: Fix HSETP2_C constant buffer offset 2019-08-04 02:50:55 -03:00
Silent
221250d922 Qt: Fixed behaviour of buttons by connecting functors to correct signals
Following screens got fixes:
- Configure/Debug
- Configure/Input
2019-08-02 04:09:38 +02:00
Flame Sage
978f7067ee Merge pull request #2770 from DarkLordZach/azure-pr-fix
ci: Fix Azure PR Builds
2019-08-01 22:04:21 -04:00
Zach Hilman
9aef7e5e22 Correct apt permissions 2019-08-01 21:33:53 -04:00
Zach Hilman
6b2937bf76 Upgrade PIP version with APT 2019-08-01 21:29:27 -04:00
Zach Hilman
a2d2a6b6dd Upgrade pip version 2019-08-01 21:23:17 -04:00
Zach Hilman
d3ea2df06d Add missing dot 2019-08-01 21:09:11 -04:00
Lioncash
6e11cfcdf0 yuzu-tester/yuzu: Correct format string
Prevents an invalid formatting exception from being thrown.
2019-07-29 20:55:48 -04:00
Lioncash
a0ee10b114 yuzu-tester/yuzu: Remove unused variable
Gets rid of a compilation warning.
2019-07-29 20:50:33 -04:00
Zach Hilman
bcbec6f37c ci: Fix Azure PR Builds 2019-07-28 14:21:18 -04:00
Fernando Sahmkow
e52c895559 GPU: Flush commands on every dma pusher step.
This commit ensures that the host gpu is constantly fed with commands to
work with, while the guest gpu keeps producing the rest of the commands.
This reduces syncing time between host and guest gpu.
2019-07-26 16:54:22 -04:00
bunnei
52f54c728d Merge pull request #2592 from FernandoS27/sync1
Implement GPU Synchronization Mechanisms & Correct NVFlinger
2019-07-26 14:26:44 -04:00
ReinUsesLisp
77f1a676a1 decode/half_set_predicate: Fix predicates 2019-07-26 00:12:38 -03:00
Fernando Sahmkow
a452ff983d MaxwellDMA: Fixes, corrections and relaxations.
This commit fixes offsets on linear -> tiled copies, corrects the z position
for tiled -> linear copies, corrects the bytes_per_pixel calculation in
tiled -> linear copies, and relaxes some limitations set by the latest
DMA fix refactors.
2019-07-25 20:41:42 -04:00
bunnei
b0ff3179ef Merge pull request #2739 from lioncash/cflow
video_core/control_flow: Minor changes/warning cleanup
2019-07-25 13:04:56 -04:00
bunnei
4d26550f5f Merge pull request #2737 from FernandoS27/track-fix
Shader_Ir: Correct tracking to track from right to left
2019-07-25 12:41:52 -04:00
bunnei
ccbc554949 Merge pull request #2689 from lioncash/tl
yuzu/main: Make error messages within OnCoreError more localization-friendly
2019-07-25 12:35:07 -04:00
bunnei
31e8a61527 Merge pull request #2743 from FernandoS27/surpress-assert
Downgrade and suppress a series of GPU asserts and debug messages.
2019-07-25 12:34:36 -04:00
bunnei
9be9600bdc Merge pull request #2704 from FernandoS27/conditional
maxwell3d: Implement Conditional Rendering
2019-07-24 17:07:57 -04:00
Zach Hilman
12514ccd35 Fix README change mistake (#2754)
Fix README change mistake
2019-07-24 16:42:33 -04:00
ReinUsesLisp
104641db07 shader/decode: Implement S2R Tic 2019-07-22 16:16:10 -03:00
bunnei
f601f25bcc Merge pull request #2734 from ReinUsesLisp/compute-shaders
gl_rasterizer: Implement compute shaders
2019-07-22 11:12:55 -04:00
bunnei
27e10e0442 Merge pull request #2735 from FernandoS27/pipeline-rework
Rework Dirty Flags in GPU Pipeline, Optimize CBData and Redo Clearing mechanism
2019-07-21 00:59:52 -04:00
Zach Hilman
6738fb5fef Update README.md 2019-07-20 21:03:30 -04:00
Fernando Sahmkow
11f4e739bd Shader_Ir: Implement F16 Variants of F2F, F2I, I2F.
This commit takes care of implementing the F16 Variants of the 
conversion instructions and makes sure conversions are done.
2019-07-20 17:38:25 -04:00
Fernando Sahmkow
0a67416971 Merge pull request #2693 from ReinUsesLisp/hsetp2
shader/half_set_predicate: Implement missing HSETP2 variants
2019-07-20 17:25:08 -04:00
Flame Sage
369be67039 Update README.md 2019-07-20 19:24:24 +00:00
Flame Sage
aa599ac709 Update README.md 2019-07-20 19:22:30 +00:00
Flame Sage
a2edb27158 Merge pull request #2752 from DarkLordZach/master
azure: Fix clang-format and releases
2019-07-20 15:20:53 -04:00
Zach Hilman
f470bcb826 azure: Fix clang-format and releases 2019-07-20 15:19:25 -04:00
Fernando Sahmkow
7a35178ee2 Maxwell3D: Reorganize and address feedback 2019-07-20 10:18:35 -04:00
Fernando Sahmkow
1158777737 Shader_Ir: Change Debug Asserts for Log Warnings 2019-07-19 22:15:34 -04:00
Fernando Sahmkow
febb88efc4 Common/Alignment: Add noexcept where required. 2019-07-19 21:49:54 -04:00
ReinUsesLisp
45c162444d shader/half_set_predicate: Fix HSETP2 implementation 2019-07-19 22:21:22 -03:00
ReinUsesLisp
6c4985edc9 shader/half_set_predicate: Implement missing HSETP2 variants 2019-07-19 22:20:47 -03:00
Fernando Sahmkow
024b5fe91a Kernel: Address Feedback 2019-07-19 11:28:57 -04:00
Fernando Sahmkow
0901c33753 Common: Correct alignment allocator to work on C++14 or higher. 2019-07-19 11:11:42 -04:00
Fernando Sahmkow
9bede4eeed VM_Manager: Align allocated memory to 256 bytes
This commit ensures that all backing memory allocated for the guest CPU
is aligned to 256 bytes. This is due to how GPU memory works and the
heavy constraints it places on the alignment of physical memory.
2019-07-19 10:06:08 -04:00
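A minimal sketch of the alignment rule being applied, with an illustrative helper rather than yuzu's own alignment utilities: sizes and addresses are rounded up to the next 256-byte boundary before backing memory is handed to the GPU.

```cpp
#include <cstddef>

// Round value up to the next multiple of alignment (a power of two, e.g. 256).
constexpr std::size_t AlignUp(std::size_t value, std::size_t alignment) {
    return (value + alignment - 1) & ~(alignment - 1);
}

static_assert(AlignUp(1, 256) == 256, "partial block rounds up");
static_assert(AlignUp(256, 256) == 256, "aligned size is unchanged");
static_assert(AlignUp(257, 256) == 512, "one byte over needs a second block");
```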
Lioncash
16730c4c43 service/audren_u: Handle audio USB output revision queries in ListAudioDeviceName()
Audio devices use the supplied revision information in order to
determine if USB audio output is able to be supported. In this case, we
can only really handle using this revision information in
ListAudioDeviceName(), where it checks if USB audio output is supported
before supplying it as a device name.

A few other scenarios exist where the revision info is checked, such as:

- Early exiting from SetAudioDeviceOutputVolume if USB audio is
  attempted to be set when that device is unsupported.

- Early exiting and returning 0.0f in GetAudioDeviceOutputVolume when
  USB output volume is queried and it's an unsupported device.

- Falling back to AHUB headphones in GetActiveAudioDeviceName when the
  device type is USB output, but is unsupported based off the revision
  info.

In order for these changes to also be implemented, a few other changes
to the interface need to be made.

Given we now properly handle everything about ListAudioDeviceName(), we
no longer need to describe it as a stubbed function.
2019-07-19 07:55:27 -04:00
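A sketch of the device-listing behaviour described above, with a hypothetical signature; the device-name strings are illustrative of the Switch's audio outputs rather than quoted from the commit. The USB output device is only advertised when the supplied revision supports it.

```cpp
#include <string>
#include <vector>

// Only advertise the USB output device when the revision supplied by the game
// indicates that USB audio output is supported.
std::vector<std::string> ListAudioDeviceName(bool usb_output_supported) {
    std::vector<std::string> names{
        "AudioStereoJackOutput",
        "AudioBuiltInSpeakerOutput",
        "AudioTvOutput",
    };
    if (usb_output_supported) {
        names.emplace_back("AudioUsbDeviceOutput");
    }
    return names;
}
```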
Lioncash
b9ebab71be service/audren_u: Move revision testing code out of AudRenU
The revision querying facilities are used by more than just audren. e.g.
audio devices can use this to test whether or not USB audio output is
supported.

This will be used within the following change.
2019-07-19 07:55:23 -04:00
Lioncash
ed0485c599 service/audio: Remove global system accessors
Trims out the lingering reliance on global state out of the audio code.
2019-07-19 07:29:36 -04:00
Lioncash
7653e4babc service/audren_u: Remove unnecessary return value from GetActiveAudioDeviceName()
This service function only ever returns a result and nothing more.
2019-07-19 06:57:31 -04:00
Lioncash
6ecbc6c557 service/audren_u: Report proper device names
AudioDevice and AudioInterface aren't valid device names on the Switch.
We should also be returning consistent names in
GetActiveAudioDeviceName().

While we're at it, we can also handle proper name output in
ListAudioDeviceName, by returning all the available devices on the
Switch.
2019-07-19 06:57:30 -04:00
Lioncash
c1c89411da video_core/control_flow: Provide operator!= for types with operator==
Provides operational symmetry for the respective structures.
2019-07-18 21:03:31 -04:00
Lioncash
1780e0e3d0 video_core/control_flow: Prevent sign conversion in TryGetBlock()
The return value is a u32, not an s32, so this would result in an
implicit signedness conversion.
2019-07-18 21:03:31 -04:00
Lioncash
a162a844d2 video_core/control_flow: Remove unnecessary BlockStack copy constructor
This is the default behavior of the copy constructor, so it doesn't need
to be specified.

While we're at it we can make the other non-default constructor
explicit.
2019-07-18 21:03:30 -04:00
Lioncash
56bc11d952 video_core/control_flow: Use std::move where applicable
Results in less work being done where avoidable.
2019-07-18 21:03:30 -04:00
Lioncash
e7b39f47f8 video_core/control_flow: Use the prefix variant of operator++ for iterators
Same thing, but potentially allows a standard library implementation to
pick a more efficient codepath.
2019-07-18 21:03:30 -04:00
Lioncash
6885e7e7ec video_core/control_flow: Use empty() member function for checking emptiness
It's what it's there for.
2019-07-18 21:03:30 -04:00
Lioncash
45fa12a05c video_core: Resolve -Wreorder warnings
Ensures that the constructor members are always initialized in the order
that they're declared in.
2019-07-18 21:03:30 -04:00
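For context, a tiny example of what -Wreorder catches: members are initialized in declaration order regardless of how the initializer list is written, so a mismatched order can silently read an uninitialized member.

```cpp
struct Reordered {
    int square;
    int value;
    // -Wreorder: the list initializes 'value' before 'square', but members are
    // initialized in declaration order, so 'square' is computed first, from an
    // uninitialized 'value'.
    explicit Reordered(int v) : value(v), square(value * value) {}
};

struct Fixed {
    int value;
    int square;
    explicit Fixed(int v) : value(v), square(value * value) {}  // matches declaration order
};
```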
Lioncash
47df844338 video_core/control_flow: Make program_size for ScanFlow() a std::size_t
Prevents a truncation warning from occurring with MSVC. Also the
internal data structures already treat it as a size_t, so this is just a
discrepancy in the interface.
2019-07-18 21:03:29 -04:00
Lioncash
3df9558593 video_core/control_flow: Place all internally linked types/functions within an anonymous namespace
Previously, quite a few functions were being linked with external
linkage.
2019-07-18 21:03:29 -04:00
Lioncash
1109db86b7 video_core/shader/decode: Prevent sign-conversion warnings
Makes it explicit that the conversions here are intentional.
2019-07-18 21:03:29 -04:00
bunnei
5d369112d9 Merge pull request #2687 from lioncash/tls-process
kernel/process: Allocate the process' TLS region during initialization
2019-07-18 13:53:04 -04:00
bunnei
63bda67a34 Merge pull request #2738 from lioncash/shader-ir
shader-ir: Minor cleanup-related changes
2019-07-18 13:52:01 -04:00
Fernando Sahmkow
5a06e33859 Shader_Ir: correct clang format 2019-07-18 10:09:26 -04:00
Fernando Sahmkow
43f57d668c GPU: Add missing puller methods.
This adds some missing puller methods. We don't assert them as these are 
nop operations for us.
2019-07-18 08:54:42 -04:00
Fernando Sahmkow
3a3fee5abf MaxwellDMA/KeplerCopy: Downgrade DMA log message to Trace.
This log was just to know which games used DMA. It's no longer 
important.
2019-07-18 08:31:38 -04:00
Fernando Sahmkow
d3b71ff80d Gl_Texture_Cache: Remove assert on component type in GetFormatTuple
Textures can have different component types in different orders. This
assert was completely imprecise, and such checks are better handled
case by case within the texture cache.
2019-07-18 08:20:31 -04:00
Fernando Sahmkow
0b65e9335e Shader_Ir: Downgrade precision and rounding asserts to debug asserts.
This commit reduces the severity of the asserts for FP precision and
rounding, as these are well known and have little to no consequence
for GPU accuracy.
2019-07-18 08:17:19 -04:00
ReinUsesLisp
74632c76ce gl_shader_decompiler: Rename bufferImage to imageBuffer
The online OpenGL documentation is wrong. The type definition is
imageBuffer.
2019-07-18 01:16:44 -03:00
ReinUsesLisp
87909d327f gl_shader_cache: Fix newline on buffer preprocessor definitions 2019-07-18 01:16:15 -03:00
ReinUsesLisp
e7bdf8b22a textures: Fix texture buffer size calculation 2019-07-18 01:07:08 -03:00
ReinUsesLisp
84027f4808 gl_texture_cache: Do not set texture parameters to buffers 2019-07-18 01:06:26 -03:00
ReinUsesLisp
73b2dc6d4f gl_texture_cache: Add missing break in CreateTexture 2019-07-18 01:04:18 -03:00
David
d4b95bfc25 Merge pull request #2741 from FernandoS27/trace-log
Kernel: Downgrade WaitForAddress and SignalToAddress messages to Trace.
2019-07-18 13:58:29 +10:00
Fernando Sahmkow
5e457bf258 Kernel: Downgrade WaitForAddress and SignalToAddress messages to Trace.
These messages were originally set as warnings since few games used these
svcs and they were needed for debugging. This is no longer the case.
2019-07-17 22:05:47 -04:00
Fernando Sahmkow
4be61013a1 GL_State: Feedback and fixes 2019-07-17 17:29:56 -04:00
Fernando Sahmkow
5ad889f6fd Maxwell3D: Address Feedback 2019-07-17 17:29:55 -04:00
Fernando Sahmkow
7826f0afd9 Texture_Cache: Rebase Fixes 2019-07-17 17:29:54 -04:00
Fernando Sahmkow
8cdbfe69b1 GL_Rasterizer: Corrections to Clearing. 2019-07-17 17:29:54 -04:00
Fernando Sahmkow
0ff4a5fa39 Maxwell3D: Correct marking dirtiness on CB upload 2019-07-17 17:29:53 -04:00
Fernando Sahmkow
fec32fed18 GL_Rasterizer: Rework RenderTarget/DepthBuffer clearing 2019-07-17 17:29:52 -04:00
Fernando Sahmkow
a081dea8ab Maxwell3D: Implement State Dirty Flags. 2019-07-17 17:29:51 -04:00
Fernando Sahmkow
0d3db58657 Maxwell3D: Rework CBData Upload 2019-07-17 17:29:50 -04:00
Fernando Sahmkow
f2e7b29c14 Maxwell3D: Rework the dirty system to be more consistent and scalable 2019-07-17 17:29:49 -04:00
Fernando Sahmkow
e42bcf2314 maxwell3d: Implement Conditional Rendering
Conditional Rendering takes care of conditionally clearing or drawing
depending on a set of queries. This PR implements the query checks to
establish whether things can be rendered or not.
2019-07-17 17:13:19 -04:00
Fernando Sahmkow
223a535f3f Merge pull request #2740 from lioncash/bra
shader/decode/other: Correct branch indirect argument within BRA handling
2019-07-17 14:25:08 -04:00
Rodrigo Locatti
c3218c110f Merge pull request #2726 from lioncash/access
core: Remove CurrentArmInterface() global accessor
2019-07-17 03:42:16 -03:00
Lioncash
bebbdc2067 shader_ir: std::move Node instance where applicable
These are std::shared_ptr instances underneath the hood, which means
copying them isn't as cheap as a regular pointer. Particularly so on
weakly-ordered systems.

This avoids atomic reference count increments and decrements where they
aren't necessary for the core set of operations.
2019-07-16 19:49:23 -04:00
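A small sketch of the pattern this commit applies, with Node standing in for the shader IR's shared-pointer alias: moving a shared_ptr transfers ownership without touching the atomic reference count, while copying increments and later decrements it.

```cpp
#include <memory>
#include <utility>
#include <vector>

struct OperationNode {};
using Node = std::shared_ptr<OperationNode>;  // IR nodes are shared_ptrs under the hood

void AppendCopy(std::vector<Node>& list, Node node) {
    list.push_back(node);             // copy: atomic ref-count increment (and later decrement)
}

void AppendMove(std::vector<Node>& list, Node node) {
    list.push_back(std::move(node));  // move: pointer handoff only, no atomic traffic
}
```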
Lioncash
60926ac16b shader_ir: Rename Get/SetTemporal to Get/SetTemporary
This is more accurate in terms of describing what the functions are
actually doing. Temporal relates to time, not the setting of a temporary
itself.
2019-07-16 19:47:43 -04:00
Lioncash
44d87ff641 shader_ir: Remove unused includes
Removes unnecessary header dependencies.
2019-07-16 19:47:42 -04:00
Fernando Sahmkow
d614193e49 Shader_Ir: Correct tracking to track from right to left 2019-07-16 15:06:59 -04:00
Fernando Sahmkow
b56e7f870a Merge pull request #2565 from ReinUsesLisp/track-indirect
shader/track: Track indirect buffers
2019-07-16 14:58:35 -04:00
Lioncash
e2d7dda166 shader/decode/other: Correct branch indirect argument within BRA handling
This appears to have been a copy/paste error introduced within
8a6fc529a9
2019-07-16 12:20:45 -04:00
ReinUsesLisp
2a4044a858 gl_shader_cache: Fix clang-format issues 2019-07-15 20:33:51 -03:00
ReinUsesLisp
6b0d017675 gl_shader_decompiler: Stub local memory size 2019-07-15 17:38:25 -03:00
ReinUsesLisp
56bca83bde gl_shader_cache: Address review commentaries 2019-07-15 17:38:25 -03:00
ReinUsesLisp
bbecd13697 gl_shader_cache: Address CI issues 2019-07-15 17:38:25 -03:00
ReinUsesLisp
725ba6cf63 gl_rasterizer: Implement compute shaders 2019-07-15 17:38:25 -03:00
Fernando Sahmkow
1bdb59fc6e Merge pull request #2695 from ReinUsesLisp/layer-viewport
gl_shader_decompiler: Implement gl_ViewportIndex and gl_Layer in vertex shaders
2019-07-15 16:28:07 -04:00
bunnei
b77a1ed67a Merge pull request #2705 from FernandoS27/tex-cache-fixes
GPU: Fixes to Texture Cache and Include Microprofiles for GL State/BufferCopy/Macro Interpreter
2019-07-14 22:44:36 -04:00
ReinUsesLisp
afa8096df5 shader: Allow tracking of indirect buffers without variable offset
While changing this code, simplify the tracking code to allow returning
the base address node, so that callers don't have to manually rebuild
it on each invocation.
2019-07-14 22:36:44 -03:00
bunnei
3477b92289 Merge pull request #2675 from ReinUsesLisp/opengl-buffer-cache
buffer_cache: Implement a generic buffer cache and its OpenGL backend
2019-07-14 19:03:43 -04:00
Fernando Sahmkow
2ac7472d3f Texture_Cache: Address Feedback 2019-07-14 17:42:39 -04:00
Fernando Sahmkow
0f54b541f4 Texture_Cache: Remove some unprecise fallback case and clang format 2019-07-14 12:00:32 -04:00
Fernando Sahmkow
5818959e54 Texture_Cache: Force Framebuffer reset if an active render target is unregistered. 2019-07-14 12:00:31 -04:00
Fernando Sahmkow
913b7a6872 GPU: Add a microprofile for macro interpreter 2019-07-14 12:00:30 -04:00
Fernando Sahmkow
a9943222f2 GL_State: Add a microprofile timer to OpenGL state. 2019-07-14 12:00:30 -04:00
Fernando Sahmkow
5c1e1a148e Gl_Texture_Cache: Measure Buffer Copy Times 2019-07-14 12:00:29 -04:00
Fernando Sahmkow
5d31bab69a Texture_Cache: Correct Linear Structural Match. 2019-07-14 12:00:28 -04:00
Fernando Sahmkow
4882c058fd Merge pull request #2690 from SciresM/physmem_fixes
Implement MapPhysicalMemory/UnmapPhysicalMemory
2019-07-14 09:16:46 -04:00
Fernando Sahmkow
0ec9da2f9f Merge pull request #2692 from ReinUsesLisp/tlds-f16
shader/texture: Add F16 support for TLDS
2019-07-14 08:44:38 -04:00
Flame Sage
b9e1db1312 Merge pull request #2730 from DarkLordZach/master
Finalize Azure Pipelines Definitions
2019-07-13 21:35:37 -04:00
Zach Hilman
bbc5b5d62d Finalize Azure Pipelines Definitions
d
2019-07-13 21:34:40 -04:00
Lioncash
093e5440e2 core: Remove CurrentArmInterface() global accessor
Replaces the final usage of the global accessor function and removes it.
Removes one more enabler of global state.
2019-07-12 21:48:49 -04:00
Zach Hilman
4d82158274 Merge pull request #2725 from ogniK5377/mult-audbuffer
"AudioRenderer" thread should have a unique name
2019-07-12 16:41:17 -04:00
David Marcec
ea5602b959 Clang format 2019-07-13 01:49:32 +10:00
David Marcec
31fe859fe5 Addressed issues 2019-07-13 01:35:40 +10:00
David Marcec
73b37886c1 "AudioRenderer" thread should have a unique name
Creating multiple "AudioRenderer" threads causes the previous thread to be overwritten. The thread will now be named AudioRenderer-InstanceX, where X is the current instance number.
2019-07-13 01:22:08 +10:00
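A minimal sketch of the renaming scheme described above (illustrative helper, not the actual yuzu code): each new renderer instance gets its own thread name by appending a running counter.

```cpp
#include <atomic>
#include <string>

// Produce a unique thread name per audio renderer instance so a second
// renderer does not overwrite the first one's "AudioRenderer" thread.
std::string MakeAudioRendererThreadName() {
    static std::atomic<int> instance_count{0};
    return "AudioRenderer-Instance" + std::to_string(instance_count++);
}
```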
Michael Scire
d4fc560c05 Remove unicorn mappings/unmappings 2019-07-11 15:12:33 -07:00
bunnei
bb67091c77 Merge pull request #2609 from FernandoS27/new-scan
Implement a New Shader Scanner, Decompile Flow Stack and implement BRX BRA.CC
2019-07-11 17:36:23 -04:00
ReinUsesLisp
0eb0c24269 gl_shader_decompiler: Fix gl_PointSize redeclaration 2019-07-11 16:10:59 -03:00
bunnei
79c382fafd Merge pull request #2717 from SciresM/unmirror_memory
Restore memory perms on svcUnmapMemory/UnloadNro
2019-07-11 14:57:20 -04:00
bunnei
521fb325aa Merge pull request #2723 from lioncash/mem
core/arm: Remove obsolete Unicorn memory mapping
2019-07-11 14:56:26 -04:00
bunnei
2a94745500 Merge pull request #2724 from lioncash/sleep
service/am: Implement SetAutoSleepDisabled/IsAutoSleepDisabled
2019-07-11 14:56:06 -04:00
Lioncash
f4ae449f73 service/am: Implement IsAutoSleepDisabled
This simply queries whether or not auto-sleep facilities are disabled
and has no special handling. It's a basic getter function.
2019-07-11 13:34:55 -04:00
Lioncash
b81f6f67f5 service/am: Implement SetAutoSleepDisabled
Provides a basic implementation of SetAutoSleepDisabled. Until idle
handling is implemented, this is about the best we can do.

In the meantime, provide a rough documenting of specifics that occur
when this function is called on actual hardware.
2019-07-11 13:09:03 -04:00
Lioncash
8fc806e88a yuzu: Remove setting for using Unicorn
The JIT is mature enough that this setting can be removed, falling back
to Unicorn only on unsupported architectures. Any missing features from
Unicorn (of which there are extremely few) are mostly
developer-oriented, which most users don't care about.

Features should be coordinated with the JIT, not the interpreter,
anyhow.
2019-07-11 05:59:13 -04:00
Lioncash
70624e1c1d core/arm: Remove obsolete Unicorn memory mapping
This was initially necessary when AArch64 JIT emulation was in its
infancy and all memory-related instructions weren't implemented.

Given the JIT now has all of these facilities implemented, we can remove
these functions from the CPU interface.
2019-07-11 05:35:46 -04:00
Michael Scire
072a9796f5 Restore memory perms on svcUnmapMemory/UnloadNro
Prior to PR, Yuzu did not restore memory to RW-
on unmap of mirrored memory or unloading of NRO.

(In fact, in the NRO case, the memory was unmapped
instead of reprotected to --- on Load, so it was
actually lost entirely...)

This PR addresses that, and restores memory to RW-
as it should.

This fixes a crash in Super Smash Bros when creating
a World of Light save for the first time, and possibly
other games/circumstances.
2019-07-11 01:38:28 -07:00
ReinUsesLisp
aca40de224 gl_shader_decompiler: Fix conditional usage of GL_ARB_shader_viewport_layer_array 2019-07-11 04:27:00 -03:00
Flame Sage
0b3901bdd0 Merge pull request #2714 from DarkLordZach/repo-sync-pipeline
Add Repository Sync Pipeline
2019-07-10 22:22:04 -04:00
Zach Hilman
502358ab05 Add Repository Sync Pipeline 2019-07-10 22:18:32 -04:00
bunnei
fd066ffbce Merge pull request #2697 from lioncash/doc
gl_rasterizer: Amend documentation comment for ConfigureFramebuffers()
2019-07-10 16:38:09 -04:00
bunnei
7fb7054bc8 Merge pull request #2686 from ReinUsesLisp/vk-scheduler
vk_scheduler: Drop execution context in favor of views
2019-07-10 16:35:48 -04:00
bunnei
93eaea109d Merge pull request #2700 from ogniK5377/GetFriendList
IFriendService::GetFriendList
2019-07-10 16:29:48 -04:00
bunnei
463af08bed Merge pull request #2611 from DarkLordZach/pm-info-cmd
pm: Implement various pm commands for finding process and title IDs
2019-07-10 16:28:29 -04:00
bunnei
d707a12b9a Merge pull request #2650 from DarkLordZach/mii-iface-ver
mii: Implement IDatabaseService SetInterfaceVersion
2019-07-10 16:26:23 -04:00
bunnei
206ec29f17 Merge pull request #2691 from lioncash/override
video_core: Add missing override specifiers
2019-07-10 16:25:43 -04:00
Flame Sage
55245b6183 Merge pull request #2706 from DarkLordZach/azure-1
Add Pipeline Definitions for Azure CI
2019-07-09 22:22:21 -04:00
Zach Hilman
f2e5c19520 Add Pipeline Definitions 2019-07-09 22:17:56 -04:00
Flame Sage
05d55b0fd7 Set up CI with Azure Pipelines
[skip ci]
2019-07-09 22:01:46 -04:00
Fernando Sahmkow
f2549739d1 shader_ir: Add comments on missing instructions.
Also shows Nvidia's address space in the comments.
2019-07-09 17:15:45 -04:00
Michael Scire
a1845d1dd3 prefer system reference over global accessor 2019-07-09 08:11:35 -07:00
Fernando Sahmkow
2de7649311 shader_ir: limit exploration to the best known program size. 2019-07-09 08:14:43 -04:00
Fernando Sahmkow
e7c6045a03 control_flow: Correct block breaking algorithm. 2019-07-09 08:14:43 -04:00
Fernando Sahmkow
dc4a93594c control_flow: Assert shaders bigger than limit. 2019-07-09 08:14:42 -04:00
Fernando Sahmkow
e7a88f0ab3 control_flow: Address feedback. 2019-07-09 08:14:42 -04:00
Fernando Sahmkow
34357b110c shader_ir: Correct parsing of scheduling instructions and correct sizing 2019-07-09 08:14:41 -04:00
Fernando Sahmkow
cfb3db1a32 shader_ir: Correct max sizing 2019-07-09 08:14:40 -04:00
Fernando Sahmkow
d45fed3030 shader_ir: Remove unnecessary constructors and use optional for ScanFlow result 2019-07-09 08:14:40 -04:00
Fernando Sahmkow
01b21ee1e8 shader_ir: Corrections, documenting and asserting control_flow 2019-07-09 08:14:39 -04:00
Fernando Sahmkow
d5533b440c shader_ir: Unify blocks in decompiled shaders. 2019-07-09 08:14:39 -04:00
Fernando Sahmkow
926b80102f shader_ir: Decompile Flow Stack 2019-07-09 08:14:38 -04:00
Fernando Sahmkow
459fce3a8f shader_ir: propagate shader size to the IR 2019-07-09 08:14:37 -04:00
Fernando Sahmkow
8a6fc529a9 shader_ir: Implement BRX & BRA.CC 2019-07-09 08:14:37 -04:00
Fernando Sahmkow
c218ae4b02 shader_ir: Remove the old scanner. 2019-07-09 08:14:36 -04:00
Fernando Sahmkow
8af6e6a052 shader_ir: Implement a new shader scanner 2019-07-09 08:14:36 -04:00
David Marcec
0330f5d6f8 IFriendService::GetFriendList
We don't have any friends implemented in Yuzu yet, so it doesn't make sense to return any friends. For now we'll be returning 0 friends; however, the information provided will allow a proper implementation of this command when needed.
2019-07-09 18:20:58 +10:00
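A rough sketch of the stub's response, using minimal stand-in types rather than yuzu's IPC helpers: the call succeeds and reports zero friends, so the game simply sees an empty friend list.

```cpp
#include <cstdint>
#include <vector>

// Stand-in for the IPC response plumbing (hypothetical, for illustration only).
struct ResponseBuilder {
    std::vector<std::uint32_t> words;
    void Push(std::uint32_t value) { words.push_back(value); }
};

constexpr std::uint32_t ResultSuccess = 0;

void GetFriendList(ResponseBuilder& rb) {
    rb.Push(ResultSuccess);  // the request itself succeeds
    rb.Push(0);              // number of friends written back to the game
}
```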
Lioncash
c04785c928 gl_rasterizer: Amend documentation comment for ConfigureFramebuffers()
must_reconfigure isn't a parameter for this function any more, so it can
be replaced with current_state.

While we're at it, we can make the parameter names in the declaration
match the ones in the definition.
2019-07-09 02:08:15 -04:00
Michael Scire
697206092e Prevent merging of device mapped memory blocks.
This sets the DeviceMapped attribute for GPU-mapped memory blocks,
and prevents merging of device-mapped blocks. This prevents memory
mapped by the GPU from having its backing address changed by
block coalescing.
2019-07-08 22:52:05 -07:00
Zach Hilman
618d8446ab Merge pull request #2661 from ogniK5377/audren-loop
audren: Only manage wave buffers with a size
2019-07-08 09:35:42 -04:00
Zach Hilman
6c3cceafdc Merge pull request #2657 from ogniK5377/npad-assignments
hid:StartLrAssignmentMode, hid:StopLrAssignmentMode, hid:SwapNpadAssignment
2019-07-08 09:35:19 -04:00
David Marcec
5234e08a0d addressed issues 2019-07-08 14:51:40 +10:00
David Marcec
e3d000a7e6 addressed issue 2019-07-08 14:49:16 +10:00
bunnei
7b28f954c9 Merge pull request #2651 from DarkLordZach/apm-boost-mode-1
apm: Initial implementation of performance config and boost mode
2019-07-07 21:40:30 -04:00
bunnei
8f5aae3074 Merge pull request #2642 from DarkLordZach/fsp-log-2
fsp-srv: Implement Access Logging Functionality
2019-07-07 21:39:40 -04:00
ReinUsesLisp
c9d886c84e gl_shader_decompiler: Implement gl_ViewportIndex and gl_Layer in vertex shaders
This commit implements gl_ViewportIndex and gl_Layer in vertex and
geometry shaders. In the case it's used in a vertex shader, it requires
ARB_shader_viewport_layer_array. This extension is available on AMD and
Nvidia devices (mesa and proprietary drivers), but not available on
Intel on any platform. At the moment of writing this description I don't
know if this is a hardware limitation or a driver limitation.

In the case that ARB_shader_viewport_layer_array is not available,
writes to these registers on a vertex shader are ignored, with the
appropriate logging.
2019-07-07 20:42:55 -03:00
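A sketch of the fallback logic described above (illustrative names, not the decompiler's real interface): the gl_ViewportIndex write is only emitted when ARB_shader_viewport_layer_array is available; otherwise it is dropped with a warning.

```cpp
#include <iostream>
#include <string>

std::string EmitViewportIndexWrite(const std::string& value, bool has_viewport_layer_array) {
    if (has_viewport_layer_array) {
        return "gl_ViewportIndex = " + value + ";";
    }
    std::clog << "Warning: gl_ViewportIndex write ignored; "
                 "ARB_shader_viewport_layer_array is unsupported\n";
    return "// gl_ViewportIndex write ignored";
}
```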
Michael Scire
ca6f08e3b1 Remove unused member function declaration 2019-07-07 13:02:41 -07:00
Michael Scire
ce64a9fab9 physmem: add helpers, cleanup logic. 2019-07-07 12:55:30 -07:00
Michael Scire
b901cd584e clang-format fixes 2019-07-07 12:08:29 -07:00
ReinUsesLisp
d0966b9f7c shader/texture: Add F16 support for TLDS 2019-07-07 16:05:56 -03:00
Michael Scire
1689784c19 address review commentary 2019-07-07 11:48:11 -07:00
Michael Scire
13a8fde3ad Implement MapPhysicalMemory/UnmapPhysicalMemory
This implements svcMapPhysicalMemory/svcUnmapPhysicalMemory for Yuzu,
which can be used to map memory at a desired address by games since
3.0.0.

It also properly parses SystemResourceSize from NPDM, and makes
information available via svcGetInfo.

This is needed for games like Super Smash Bros. and Diablo 3 -- this
PR's implementation does not run into the "ASCII reads" issue mentioned
in the comments of #2626, which was caused by the following bugs in
Yuzu's memory management that this PR also addresses:
* Yuzu's memory coalescing does not properly merge blocks. This results
  in a polluted address space/svcQueryMemory results that would be
  impossible to replicate on hardware, which can lead to game code making
  the wrong assumptions about memory layout.
  * This implements better merging for AllocatedMemoryBlocks.
* Yuzu's implementation of svcMirrorMemory unprotected the entire
  virtual memory range containing the range being mirrored. This could
  lead to games attempting to map data at that unprotected
  range/attempting to access that range after yuzu improperly unmapped
  it.
  * This PR fixes it by simply calling ReprotectRange instead of
    Reprotect.
2019-07-07 11:45:53 -07:00
Lioncash
56c7912159 kernel/process: Allocate the process' TLS region during initialization
Prior to execution within a process beginning, the process establishes
its own TLS region for uses (as far as I can tell) related to exception
handling.

Now that TLS creation was decoupled from threads themselves, we can add
this behavior to our Process class. This is also good, as it allows us
to remove a stub within svcGetInfo, namely querying the address of that
region.
2019-07-07 14:08:28 -04:00
Lioncash
eb6f55d880 kernel/process: Move main thread stack allocation to its own function
Keeps this particular set of behavior isolated to its own function.
2019-07-07 14:08:25 -04:00
Lioncash
cbdd6cd1c0 vk_sampler_cache: Remove unused includes
These are no longer used within this header, so they can be removed.
2019-07-07 13:40:36 -04:00
Lioncash
4b27680639 video_core: Add missing override specifiers 2019-07-07 13:38:39 -04:00
Lioncash
5085a16d78 yuzu/main: Make error messages within OnCoreError more localization-friendly
Previously, a translated string was being appended onto another string
in a manner that doesn't allow the translator to control where the
appended text is placed. This can be a nuisance for languages where
grammar and text ordering differs from English.

We now append the strings via the format strings themselves, which
allows translators to reorder where the text will be placed.
2019-07-07 11:02:05 -04:00
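A small Qt-flavoured sketch of the difference (hypothetical message strings, not yuzu's actual ones): putting the placeholder inside the translatable string lets each language decide where the appended detail goes.

```cpp
#include <QObject>
#include <QString>

QString BuildErrorAppended(const QString& details) {
    // The detail text is always glued to the end; a translator cannot move it.
    return QObject::tr("A fatal error occurred. ") + details;
}

QString BuildErrorFormatted(const QString& details) {
    // The placeholder lives inside the translatable string, so its position
    // can change per language.
    return QObject::tr("A fatal error occurred. %1").arg(details);
}
```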
ReinUsesLisp
86a874a2fc vk_scheduler: Drop execution context in favor of views
Instead of passing an execution context by copy throughout the whole
Vulkan call hierarchy, use a command buffer view and fence view
approach.

This internally dereferences the command buffer or fence, preventing the
user from using an outdated version of it in normal usage. It is still
possible to store an outdated one if it is cast to VKFence& or
vk::CommandBuffer.

While changing this file, add an extra parameter for Flush and Finish to
allow releasing the fence from these calls.
2019-07-07 03:30:22 -03:00
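A sketch of the view idea with illustrative types (not the actual yuzu classes): the view keeps a pointer to the scheduler-owned slot and dereferences it on every use, so holders only get a stale object if they explicitly cast and store it.

```cpp
struct Fence {
    int id = 0;
};

class FenceView {
public:
    explicit FenceView(Fence* const* slot) : slot{slot} {}
    operator Fence&() const { return **slot; }  // dereference on use, never stale

private:
    Fence* const* slot;  // points at the scheduler's "current fence" pointer
};

// Usage sketch: the scheduler hands out FenceView{&current_fence}; casting the
// view to Fence& pins whichever fence happens to be current at that moment.
```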
Zach Hilman
a4ef86a021 mii: Implement IDatabaseService SetInterfaceVersion
Appears to set a member variable used to affect the API that games access, and the method used to store data.
2019-07-06 21:39:12 -04:00
ReinUsesLisp
79a23ca5f0 buffer_cache: Avoid [[nodiscard]] to make clang-format happy 2019-07-06 01:17:05 -03:00
ReinUsesLisp
83050c9495 buffer_cache: Try to fix MinGW build 2019-07-06 01:14:05 -03:00
ReinUsesLisp
f7691ebe57 gl_rasterizer: Fix nullptr dereference on disabled buffers 2019-07-06 00:37:56 -03:00
ReinUsesLisp
7ecf64257a gl_rasterizer: Minor style changes 2019-07-06 00:37:55 -03:00
ReinUsesLisp
9cdc576f60 gl_rasterizer: Fix vertex and index data invalidations 2019-07-06 00:37:55 -03:00
ReinUsesLisp
1fa21fa192 gl_buffer_cache: Implement with generic buffer cache 2019-07-06 00:37:55 -03:00
ReinUsesLisp
32c0212b24 buffer_cache: Implement a generic buffer cache
Implements a templated class with a similar approach to our current
generic texture cache. It is designed to be compatible with Vulkan and
OpenGL.
2019-07-06 00:37:55 -03:00
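A skeleton of the idea under stated assumptions (not the actual yuzu interface): the cache logic is written once and parameterized on the backend's buffer handle type, so the same code can serve both the OpenGL and Vulkan implementations.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>

using VAddr = std::uint64_t;

template <typename BufferHandle>
class GenericBufferCache {
public:
    virtual ~GenericBufferCache() = default;

    // Reuse a previously uploaded buffer for this guest address, or create one.
    BufferHandle GetOrCreate(VAddr addr, const void* data, std::size_t size) {
        if (const auto it = cached.find(addr); it != cached.end()) {
            return it->second;
        }
        BufferHandle handle = CreateBuffer(size);
        UploadData(handle, data, size);
        return cached.emplace(addr, handle).first->second;
    }

protected:
    // The backend-specific subclass decides how buffers are created and filled.
    virtual BufferHandle CreateBuffer(std::size_t size) = 0;
    virtual void UploadData(BufferHandle buffer, const void* data, std::size_t size) = 0;

private:
    std::unordered_map<VAddr, BufferHandle> cached;
};
```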
ReinUsesLisp
2bcae41a73 gl_buffer_cache: Remove global system getters 2019-07-06 00:37:55 -03:00
ReinUsesLisp
02ab844934 gl_device: Query SSBO alignment 2019-07-06 00:37:55 -03:00
ReinUsesLisp
d14fbfb9b5 gl_buffer_cache: Implement flushing 2019-07-06 00:37:55 -03:00
ReinUsesLisp
345f852bdb gl_rasterizer: Drop gl_global_cache in favor of gl_buffer_cache 2019-07-06 00:37:55 -03:00
ReinUsesLisp
8155b12d3d gl_buffer_cache: Rework to support internalized buffers 2019-07-06 00:37:55 -03:00
ReinUsesLisp
f8ba72d491 gl_buffer_cache: Store in CachedBufferEntry the used buffer handle 2019-07-06 00:37:55 -03:00
ReinUsesLisp
b54fb8fc4c gl_buffer_cache: Return used buffer from Upload function 2019-07-06 00:37:55 -03:00
ReinUsesLisp
a6d2f52fc3 gl_rasterizer: Add some commentaries 2019-07-06 00:37:55 -03:00
ReinUsesLisp
2b9d4088ec gl_rasterizer: Make DrawParameters rasterizer instance const 2019-07-06 00:37:55 -03:00
ReinUsesLisp
2e39c20da5 gl_rasterizer: Move index buffer uploading to its own method 2019-07-06 00:37:55 -03:00
Fernando Sahmkow
0fc98958a3 NVServices: Correct delayed responses. 2019-07-05 15:49:35 -04:00
Fernando Sahmkow
8c91d5c166 Nv_Host_Ctrl: Correct difference calculation 2019-07-05 15:49:34 -04:00
Fernando Sahmkow
f3a39e0c9c NVServices: Address Feedback 2019-07-05 15:49:33 -04:00
Fernando Sahmkow
d20ede40b1 NVServices: Styling, define constructors as explicit and corrections 2019-07-05 15:49:32 -04:00
Fernando Sahmkow
b391e5f638 NVFlinger: Correct GCC compile error 2019-07-05 15:49:31 -04:00
Fernando Sahmkow
0335a25d1f NVServices: Make NVEvents Automatic according to documentation. 2019-07-05 15:49:29 -04:00
Fernando Sahmkow
b6844bec60 NVServices: Correct CtrlEventWaitSync to block the ipc until timeout. 2019-07-05 15:49:28 -04:00
Fernando Sahmkow
7d1b974bca GPU: Correct Interrupts to interrupt on syncpt/value instead of event, mirroring hardware 2019-07-05 15:49:26 -04:00
Fernando Sahmkow
61697864c3 nvflinger: Make the force 30 fps still force 30 fps 2019-07-05 15:49:25 -04:00
Fernando Sahmkow
efdeab3a1d nv_services: Fixes to event liberation. 2019-07-05 15:49:24 -04:00
Fernando Sahmkow
ea97589624 nvflinger: Acquire buffers in the same order as they were queued. 2019-07-05 15:49:23 -04:00
Fernando Sahmkow
24408cce9b nv_services: Deglobalize NvServices 2019-07-05 15:49:22 -04:00
Fernando Sahmkow
f2e026a1d8 gpu_asynch: Simplify synchronization to a simpler consumer->producer scheme. 2019-07-05 15:49:20 -04:00
Fernando Sahmkow
0706d633bf nv_host_ctrl: Make Sync GPU variant always return synced result. 2019-07-05 15:49:20 -04:00
Fernando Sahmkow
600dddf88d Async GPU: do invalidate as synced operation
Async GPU: Always invalidate synced.
2019-07-05 15:49:19 -04:00
Fernando Sahmkow
c13433aee4 Gpu: use an std mutex instead of a spin_lock to guard syncpoints 2019-07-05 15:49:18 -04:00
Fernando Sahmkow
78add28aab nvhost_ctrl: Corrections to event handling 2019-07-05 15:49:17 -04:00
Fernando Sahmkow
eef55f493b Gpu: Mark areas as protected. 2019-07-05 15:49:16 -04:00
Fernando Sahmkow
a45643cb3b nv_services: Stub CtrlEventSignal 2019-07-05 15:49:15 -04:00
Fernando Sahmkow
8942047d41 Gpu: Implement Hardware Interrupt Manager and manage GPU interrupts 2019-07-05 15:49:14 -04:00
Fernando Sahmkow
e0027eba85 nv_services: Implement NvQueryEvent, NvCtrlEventWait, NvEventRegister, NvEventUnregister 2019-07-05 15:49:13 -04:00
Fernando Sahmkow
7039ece0a0 nv_services: Create GPU channels correctly 2019-07-05 15:49:12 -04:00
Fernando Sahmkow
82b829625b video_core: Implement GPU side Syncpoints 2019-07-05 15:49:11 -04:00
Fernando Sahmkow
737e978f5b nv_services: Correct buffer queue fencing and GPFifo fencing 2019-07-05 15:49:10 -04:00
Fernando Sahmkow
ceb5f5079c nvflinger: Implement swap intervals 2019-07-05 15:49:08 -04:00
David Marcec
b82b5e46e7 audren: Only manage wave buffers with a size
We shouldn't be incrementing if wave buffers are empty. They are considered invalid/unused wave buffers.

This fixes the issue of certain sounds looping when they shouldn't
2019-07-01 21:20:23 +10:00
David Marcec
472210bf72 hid:StartLrAssignmentMode, hid:StopLrAssignmentMode, hid:SwapNpadAssignment
StartLrAssignmentMode and StopLrAssignmentMode don't require any implementation as they're just used for showing the screen for changing the controller orientation if the user wishes to do so. Ever since #1634 this has not been needed, as users can specify the controller orientation from the config and swap at any time. We store a private member just in case this gets used for anything extra in the future.
2019-07-01 15:12:57 +10:00
Zach Hilman
7e5d7773cc am: Implement SetCpuBoostMode in terms of APM 2019-06-28 22:46:51 -04:00
Zach Hilman
e2ad3e1fb0 core: Keep instance of APM Controller 2019-06-28 22:46:31 -04:00
Zach Hilman
e52306ca60 apm: Implement SetCpuBoostMode 2019-06-28 22:46:00 -04:00
Zach Hilman
1c6e6305ea apm: Add getters for performance config and mode 2019-06-28 22:45:31 -04:00
Zach Hilman
9175b00e7d apm: Add apm:am service
8.0.0+ identical version of apm
2019-06-28 22:44:30 -04:00
Zach Hilman
65eb9cbb28 apm: Add Controller class to manage speed data and application 2019-06-28 22:43:51 -04:00
Zach Hilman
d40f38967e fsp-srv: Implement GetAccessLogVersionInfo
Returns some misc. data about logging to help the game determine if it should log.
2019-06-28 21:05:42 -04:00
Zach Hilman
554e2f2f98 reporter: Add report class for filesystem access logs 2019-06-28 21:02:50 -04:00
Zach Hilman
db2fdd0352 fsp-srv: Implement OutputAccessLogToSdCard
Allows games to log data to the SD.
2019-06-28 21:02:34 -04:00
Zach Hilman
bce4bfffb6 pm: Implement pm:shell and pm:dmnt GetApplicationPid
Returns the process ID of the current application or 0 if no app is running.
2019-06-26 19:07:34 -04:00
Zach Hilman
354c254cde pm: Implement pm:dmnt GetTitlePid
Takes a title ID and searches for a matching process, returning error if it doesn't exist, otherwise the process ID.
2019-06-26 19:06:51 -04:00
Zach Hilman
49af3bcdcb pm: Implement pm:info GetTitleId
Searches the process list for a process with the specified ID, returning the title ID if it exists.
2019-06-26 19:05:04 -04:00
250 changed files with 6475 additions and 1918 deletions

View File

@@ -0,0 +1,15 @@
#!/bin/bash -ex
# Copy documentation
cp license.txt "$REV_NAME"
cp README.md "$REV_NAME"
tar $COMPRESSION_FLAGS "$ARCHIVE_NAME" "$REV_NAME"
mv "$REV_NAME" $RELEASE_NAME
7z a "$REV_NAME.7z" $RELEASE_NAME
# move the compiled archive into the artifacts directory to be uploaded by travis releases
mv "$ARCHIVE_NAME" artifacts/
mv "$REV_NAME.7z" artifacts/

View File

@@ -0,0 +1,6 @@
#!/bin/bash -ex
GITDATE="`git show -s --date=short --format='%ad' | sed 's/-//g'`"
GITREV="`git show -s --format='%h'`"
mkdir -p artifacts

View File

@@ -0,0 +1,6 @@
#!/bin/bash -ex
# Run clang-format
cd /yuzu
chmod a+x ./.ci/scripts/format/script.sh
./.ci/scripts/format/script.sh

View File

@@ -0,0 +1,4 @@
#!/bin/bash -ex
chmod a+x ./.ci/scripts/format/docker.sh
docker run -v $(pwd):/yuzu yuzuemu/build-environments:linux-clang-format /bin/bash -ex /yuzu/.ci/scripts/format/docker.sh

View File

@@ -0,0 +1,37 @@
#!/bin/bash -ex
if grep -nrI '\s$' src *.yml *.txt *.md Doxyfile .gitignore .gitmodules .ci* dist/*.desktop \
        dist/*.svg dist/*.xml; then
    echo Trailing whitespace found, aborting
    exit 1
fi

# Default clang-format points to default 3.5 version one
CLANG_FORMAT=clang-format-6.0
$CLANG_FORMAT --version

if [ "$TRAVIS_EVENT_TYPE" = "pull_request" ]; then
    # Get list of every file modified in this pull request
    files_to_lint="$(git diff --name-only --diff-filter=ACMRTUXB $TRAVIS_COMMIT_RANGE | grep '^src/[^.]*[.]\(cpp\|h\)$' || true)"
else
    # Check everything for branch pushes
    files_to_lint="$(find src/ -name '*.cpp' -or -name '*.h')"
fi

# Turn off tracing for this because it's too verbose
set +x

for f in $files_to_lint; do
    d=$(diff -u "$f" <($CLANG_FORMAT "$f") || true)
    if ! [ -z "$d" ]; then
        echo "!!! $f not compliant to coding style, here is the fix:"
        echo "$d"
        fail=1
    fi
done

set -x

if [ "$fail" = 1 ]; then
    exit 1
fi

View File

@@ -0,0 +1,14 @@
#!/bin/bash -ex
cd /yuzu
ccache -s
mkdir build || true && cd build
cmake .. -G Ninja -DYUZU_USE_BUNDLED_UNICORN=ON -DYUZU_USE_QT_WEB_ENGINE=ON -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=/usr/lib/ccache/gcc -DCMAKE_CXX_COMPILER=/usr/lib/ccache/g++ -DYUZU_ENABLE_COMPATIBILITY_REPORTING=${ENABLE_COMPATIBILITY_REPORTING:-"OFF"} -DENABLE_COMPATIBILITY_LIST_DOWNLOAD=ON -DUSE_DISCORD_PRESENCE=ON
ninja
ccache -s
ctest -VV -C Release

View File

@@ -0,0 +1,5 @@
#!/bin/bash -ex
mkdir -p "ccache" || true
chmod a+x ./.ci/scripts/linux/docker.sh
docker run -e ENABLE_COMPATIBILITY_REPORTING -e CCACHE_DIR=/yuzu/ccache -v $(pwd):/yuzu yuzuemu/build-environments:linux-fresh /bin/bash /yuzu/.ci/scripts/linux/docker.sh

View File

@@ -0,0 +1,14 @@
#!/bin/bash -ex
. .ci/scripts/common/pre-upload.sh
REV_NAME="yuzu-linux-${GITDATE}-${GITREV}"
ARCHIVE_NAME="${REV_NAME}.tar.xz"
COMPRESSION_FLAGS="-cJvf"
mkdir "$REV_NAME"
cp build/bin/yuzu-cmd "$REV_NAME"
cp build/bin/yuzu "$REV_NAME"
. .ci/scripts/common/post-upload.sh

View File

@@ -0,0 +1,28 @@
# Download all pull requests as patches that match a specific label
# Usage: python download-patches-by-label.py <Label to Match> <Root Path Folder to DL to>
import requests, sys, json, urllib3.request, shutil, subprocess

http = urllib3.PoolManager()
dl_list = {}

def check_individual(labels):
    for label in labels:
        if (label["name"] == sys.argv[1]):
            return True
    return False

try:
    url = 'https://api.github.com/repos/yuzu-emu/yuzu/pulls'
    response = requests.get(url)
    if (response.ok):
        j = json.loads(response.content)
        for pr in j:
            if (check_individual(pr["labels"])):
                pn = pr["number"]
                print("Matched PR# %s" % pn)
                print(subprocess.check_output(["git", "fetch", "https://github.com/yuzu-emu/yuzu.git", "pull/%s/head:pr-%s" % (pn, pn), "-f"]))
                print(subprocess.check_output(["git", "merge", "--squash", "pr-%s" % pn]))
                print(subprocess.check_output(["git", "commit", "-m\"Merge PR %s\"" % pn]))
except:
    sys.exit(-1)

View File

@@ -0,0 +1,18 @@
# Checks to see if the specified pull request # has the specified tag
# Usage: python check-label-presence.py <Pull Request ID> <Name of Label>
import requests, json, sys

try:
    url = 'https://api.github.com/repos/yuzu-emu/yuzu/issues/%s' % sys.argv[1]
    response = requests.get(url)
    if (response.ok):
        j = json.loads(response.content)
        for label in j["labels"]:
            if label["name"] == sys.argv[2]:
                print('##vso[task.setvariable variable=enabletesting;]true')
                sys.exit()
except:
    sys.exit(-1)

print('##vso[task.setvariable variable=enabletesting;]false')

View File

@@ -0,0 +1,2 @@
git config --global user.email "yuzu@yuzu-emu.org"
git config --global user.name "yuzubot"

View File

@@ -0,0 +1,50 @@
#!/bin/bash -ex
cd /yuzu
ccache -s
# Dirty hack to trick unicorn makefile into believing we are in a MINGW system
mv /bin/uname /bin/uname1 && echo -e '#!/bin/sh\necho MINGW64' >> /bin/uname
chmod +x /bin/uname
# Dirty hack to trick unicorn makefile into believing we have cmd
echo '' >> /bin/cmd
chmod +x /bin/cmd
mkdir build || true && cd build
cmake .. -G Ninja -DCMAKE_TOOLCHAIN_FILE="$(pwd)/../CMakeModules/MinGWCross.cmake" -DUSE_CCACHE=ON -DYUZU_USE_BUNDLED_UNICORN=ON -DENABLE_COMPATIBILITY_LIST_DOWNLOAD=ON -DCMAKE_BUILD_TYPE=Release
ninja
# Clean up the dirty hacks
rm /bin/uname && mv /bin/uname1 /bin/uname
rm /bin/cmd
ccache -s
echo "Tests skipped"
#ctest -VV -C Release
echo 'Prepare binaries...'
cd ..
mkdir package
QT_PLATFORM_DLL_PATH='/usr/x86_64-w64-mingw32/lib/qt5/plugins/platforms/'
find build/ -name "yuzu*.exe" -exec cp {} 'package' \;
# copy Qt plugins
mkdir package/platforms
cp "${QT_PLATFORM_DLL_PATH}/qwindows.dll" package/platforms/
cp -rv "${QT_PLATFORM_DLL_PATH}/../mediaservice/" package/
cp -rv "${QT_PLATFORM_DLL_PATH}/../imageformats/" package/
rm -f package/mediaservice/*d.dll
for i in package/*.exe; do
    # we need to process pdb here, however, cv2pdb
    # does not work here, so we just simply strip all the debug symbols
    x86_64-w64-mingw32-strip "${i}"
done
pip3 install pefile
python3 .ci/scripts/windows/scan_dll.py package/*.exe "package/"
python3 .ci/scripts/windows/scan_dll.py package/imageformats/*.dll "package/"

View File

@@ -0,0 +1,5 @@
#!/bin/bash -ex
mkdir -p "ccache" || true
chmod a+x ./.ci/scripts/windows/docker.sh
docker run -e CCACHE_DIR=/yuzu/ccache -v $(pwd):/yuzu yuzuemu/build-environments:linux-mingw /bin/bash -ex /yuzu/.ci/scripts/windows/docker.sh

View File

@@ -0,0 +1,106 @@
import pefile
import sys
import re
import os
import queue
import shutil

# constant definitions
KNOWN_SYS_DLLS = ['WINMM.DLL', 'MSVCRT.DLL', 'VERSION.DLL', 'MPR.DLL',
                  'DWMAPI.DLL', 'UXTHEME.DLL', 'DNSAPI.DLL', 'IPHLPAPI.DLL']
# below is for Ubuntu 18.04 with specified PPA enabled, if you are using
# other distro or different repositories, change the following accordingly
DLL_PATH = [
    '/usr/x86_64-w64-mingw32/bin/',
    '/usr/x86_64-w64-mingw32/lib/',
    '/usr/lib/gcc/x86_64-w64-mingw32/7.3-posix/'
]

missing = []


def parse_imports(file_name):
    results = []
    pe = pefile.PE(file_name, fast_load=True)
    pe.parse_data_directories()

    for entry in pe.DIRECTORY_ENTRY_IMPORT:
        current = entry.dll.decode()
        current_u = current.upper()  # b/c Windows is often case insensitive
        # here we filter out system dlls
        # dll w/ names like *32.dll are likely to be system dlls
        if current_u.upper() not in KNOWN_SYS_DLLS and not re.match(string=current_u, pattern=r'.*32\.DLL'):
            results.append(current)

    return results


def parse_imports_recursive(file_name, path_list=[]):
    q = queue.Queue()  # create a FIFO queue
    # file_name can be a string or a list for the convience
    if isinstance(file_name, str):
        q.put(file_name)
    elif isinstance(file_name, list):
        for i in file_name:
            q.put(i)
    full_list = []
    while q.qsize():
        current = q.get_nowait()
        print('> %s' % current)
        deps = parse_imports(current)
        # if this dll does not have any import, ignore it
        if not deps:
            continue
        for dep in deps:
            # the dependency already included in the list, skip
            if dep in full_list:
                continue
            # find the requested dll in the provided paths
            full_path = find_dll(dep)
            if not full_path:
                missing.append(dep)
                continue
            full_list.append(dep)
            q.put(full_path)
            path_list.append(full_path)
    return full_list


def find_dll(name):
    for path in DLL_PATH:
        for root, _, files in os.walk(path):
            for f in files:
                if name.lower() == f.lower():
                    return os.path.join(root, f)


def deploy(name, dst, dry_run=False):
    dlls_path = []
    parse_imports_recursive(name, dlls_path)
    for dll_entry in dlls_path:
        if not dry_run:
            shutil.copy(dll_entry, dst)
        else:
            print('[Dry-Run] Copy %s to %s' % (dll_entry, dst))
    print('Deploy completed.')
    return dlls_path


def main():
    if len(sys.argv) < 3:
        print('Usage: %s [files to examine ...] [target deploy directory]')
        return 1
    to_deploy = sys.argv[1:-1]
    tgt_dir = sys.argv[-1]
    if not os.path.isdir(tgt_dir):
        print('%s is not a directory.' % tgt_dir)
        return 1
    print('Scanning dependencies...')
    deploy(to_deploy, tgt_dir)
    if missing:
        print('Following DLLs are not found: %s' % ('\n'.join(missing)))
    return 0


if __name__ == '__main__':
    main()

View File

@@ -0,0 +1,13 @@
#!/bin/bash -ex
. .ci/scripts/common/pre-upload.sh
REV_NAME="yuzu-windows-mingw-${GITDATE}-${GITREV}"
ARCHIVE_NAME="${REV_NAME}.tar.gz"
COMPRESSION_FLAGS="-czvf"
mkdir "$REV_NAME"
# get around the permission issues
cp -r package/* "$REV_NAME"
. .ci/scripts/common/post-upload.sh

View File

@@ -0,0 +1,23 @@
parameters:
  artifactSource: 'true'
  cache: 'false'

steps:
- task: DockerInstaller@0
  displayName: 'Prepare Environment'
  inputs:
    dockerVersion: '17.09.0-ce'
- ${{ if eq(parameters.cache, 'true') }}:
  - task: CacheBeta@0
    displayName: 'Cache Build System'
    inputs:
      key: yuzu-v1-$(BuildName)-$(BuildSuffix)-$(CacheSuffix)
      path: $(System.DefaultWorkingDirectory)/ccache
      cacheHitVar: CACHE_RESTORED
- script: chmod a+x ./.ci/scripts/$(ScriptFolder)/exec.sh && ./.ci/scripts/$(ScriptFolder)/exec.sh
  displayName: 'Build'
- script: chmod a+x ./.ci/scripts/$(ScriptFolder)/upload.sh && RELEASE_NAME=$(BuildName) ./.ci/scripts/$(ScriptFolder)/upload.sh
  displayName: 'Package Artifacts'
- publish: artifacts
  artifact: 'yuzu-$(BuildName)-$(BuildSuffix)'
  displayName: 'Upload Artifacts'

View File

@@ -0,0 +1,23 @@
jobs:
- job: build
  displayName: 'standard'
  pool:
    vmImage: ubuntu-latest
  strategy:
    maxParallel: 10
    matrix:
      windows:
        BuildSuffix: 'windows-mingw'
        ScriptFolder: 'windows'
      linux:
        BuildSuffix: 'linux'
        ScriptFolder: 'linux'
  steps:
  - template: ./sync-source.yml
    parameters:
      artifactSource: $(parameters.artifactSource)
      needSubmodules: 'true'
  - template: ./build-single.yml
    parameters:
      artifactSource: 'false'
      cache: $(parameters.cache)

View File

@@ -0,0 +1,33 @@
jobs:
- job: build_test
  displayName: 'testing'
  pool:
    vmImage: ubuntu-latest
  strategy:
    maxParallel: 5
    matrix:
      windows:
        BuildSuffix: 'windows-testing'
        ScriptFolder: 'windows'
  steps:
  - script: sudo apt upgrade python3-pip && pip install requests urllib3
    displayName: 'Prepare Environment'
  - task: PythonScript@0
    condition: eq(variables['Build.Reason'], 'PullRequest')
    displayName: 'Determine Testing Status'
    inputs:
      scriptSource: 'filePath'
      scriptPath: '.ci/scripts/merge/check-label-presence.py'
      arguments: '$(System.PullRequest.PullRequestNumber) create-testing-build'
  - ${{ if eq(variables.enabletesting, 'true') }}:
    - template: ./sync-source.yml
      parameters:
        artifactSource: $(parameters.artifactSource)
        needSubmodules: 'true'
    - template: ./mergebot.yml
      parameters:
        matchLabel: 'testing-merge'
    - template: ./build-single.yml
      parameters:
        artifactSource: 'false'
        cache: 'false'

View File

@@ -0,0 +1,14 @@
parameters:
  artifactSource: 'true'

steps:
- template: ./sync-source.yml
  parameters:
    artifactSource: $(parameters.artifactSource)
    needSubmodules: 'false'
- task: DockerInstaller@0
  displayName: 'Prepare Environment'
  inputs:
    dockerVersion: '17.09.0-ce'
- script: chmod a+x ./.ci/scripts/format/exec.sh && ./.ci/scripts/format/exec.sh
  displayName: 'Verify Formatting'

.ci/templates/merge.yml (new file, 46 lines)
View File

@@ -0,0 +1,46 @@
jobs:
- job: merge
  displayName: 'pull requests'
  steps:
  - checkout: self
    submodules: recursive
  - template: ./mergebot.yml
    parameters:
      matchLabel: '$(BuildName)-merge'
  - task: ArchiveFiles@2
    displayName: 'Package Source'
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
      includeRootFolder: false
      archiveType: '7z'
      archiveFile: '$(Build.ArtifactStagingDirectory)/yuzu-$(BuildName)-source.7z'
  - task: PublishPipelineArtifact@1
    displayName: 'Upload Artifacts'
    inputs:
      targetPath: '$(Build.ArtifactStagingDirectory)/yuzu-$(BuildName)-source.7z'
      artifact: 'yuzu-$(BuildName)-source'
      replaceExistingArchive: true
- job: upload_source
  displayName: 'upload'
  dependsOn: merge
  steps:
  - template: ./sync-source.yml
    parameters:
      artifactSource: 'true'
      needSubmodules: 'true'
  - script: chmod a+x $(System.DefaultWorkingDirectory)/.ci/scripts/merge/yuzubot-git-config.sh && $(System.DefaultWorkingDirectory)/.ci/scripts/merge/yuzubot-git-config.sh
    displayName: 'Apply Git Configuration'
  - script: git tag -a $(BuildName)-$(Build.BuildId) -m "yuzu $(BuildName) $(Build.BuildNumber) $(Build.DefinitionName)"
    displayName: 'Tag Source'
  - script: git remote add other $(GitRepoPushChangesURL)
    displayName: 'Register Repository'
  - script: git push --follow-tags --force other HEAD:$(GitPushBranch)
    displayName: 'Update Code'
  - script: git rev-list -n 1 $(BuildName)-$(Build.BuildId) > $(Build.ArtifactStagingDirectory)/tag-commit.sha
    displayName: 'Calculate Release Point'
  - task: PublishPipelineArtifact@1
    displayName: 'Upload Release Point'
    inputs:
      targetPath: '$(Build.ArtifactStagingDirectory)/tag-commit.sha'
      artifact: 'yuzu-$(BuildName)-release-point'
      replaceExistingArchive: true

View File

@@ -0,0 +1,15 @@
parameters:
matchLabel: 'dummy-merge'
steps:
- script: mkdir $(System.DefaultWorkingDirectory)/patches && pip install requests urllib3
displayName: 'Prepare Environment'
- script: chmod a+x $(System.DefaultWorkingDirectory)/.ci/scripts/merge/yuzubot-git-config.sh && $(System.DefaultWorkingDirectory)/.ci/scripts/merge/yuzubot-git-config.sh
displayName: 'Apply Git Configuration'
- task: PythonScript@0
displayName: 'Discover, Download, and Apply Patches'
inputs:
scriptSource: 'filePath'
scriptPath: '.ci/scripts/merge/apply-patches-by-label.py'
arguments: '${{ parameters.matchLabel }} patches'
workingDirectory: '$(System.DefaultWorkingDirectory)'

View File

@@ -0,0 +1,16 @@
steps:
- checkout: none
- task: DownloadPipelineArtifact@2
displayName: 'Download Source'
inputs:
artifactName: 'yuzu-$(BuildName)-source'
buildType: 'current'
targetPath: '$(Build.ArtifactStagingDirectory)'
- script: rm -rf $(System.DefaultWorkingDirectory) && mkdir $(System.DefaultWorkingDirectory)
displayName: 'Clean Working Directory'
- task: ExtractFiles@1
displayName: 'Prepare Source'
inputs:
archiveFilePatterns: '$(Build.ArtifactStagingDirectory)/*.7z'
destinationFolder: '$(System.DefaultWorkingDirectory)'
cleanDestinationFolder: false

View File

@@ -0,0 +1,11 @@
parameters:
needSubmodules: 'true'
steps:
- checkout: self
displayName: 'Checkout Recursive'
submodules: recursive
# condition: eq(parameters.needSubmodules, 'true')
#- checkout: self
# displayName: 'Checkout Fast'
# condition: ne(parameters.needSubmodules, 'true')

View File

@@ -0,0 +1,7 @@
steps:
- ${{ if eq(parameters.artifactSource, 'true') }}:
- template: ./retrieve-artifact-source.yml
- ${{ if ne(parameters.artifactSource, 'true') }}:
- template: ./retrieve-master-source.yml
parameters:
needSubmodules: $(parameters.needSubmodules)

25
.ci/yuzu-mainline.yml Normal file
View File

@@ -0,0 +1,25 @@
trigger:
- master
stages:
- stage: merge
displayName: 'merge'
jobs:
- template: ./templates/merge.yml
- stage: format
dependsOn: merge
displayName: 'format'
jobs:
- job: format
displayName: 'clang'
pool:
vmImage: ubuntu-latest
steps:
- template: ./templates/format-check.yml
- stage: build
displayName: 'build'
dependsOn: format
jobs:
- template: ./templates/build-standard.yml
parameters:
cache: 'true'

19
.ci/yuzu-patreon.yml Normal file
View File

@@ -0,0 +1,19 @@
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
- master
pool:
vmImage: 'ubuntu-latest'
steps:
- script: echo Hello, world!
displayName: 'Run a one-line script'
- script: |
echo Add other tasks to build, test, and deploy your project.
echo See https://aka.ms/yaml
displayName: 'Run a multi-line script'

19
.ci/yuzu-repo-sync.yml Normal file
View File

@@ -0,0 +1,19 @@
trigger:
- master
jobs:
- job: copy
displayName: 'Sync Repository'
pool:
vmImage: 'ubuntu-latest'
steps:
- script: echo 'https://$(GitUsername):$(GitAccessToken)@dev.azure.com' > $HOME/.git-credentials
displayName: 'Load Credentials'
- script: git config --global credential.helper store
displayName: 'Register Credential Helper'
- script: git remote add other $(GitRepoPushChangesURL)
displayName: 'Register Repository'
- script: git push --force other HEAD:$(GitPushBranch)
displayName: 'Update Code'
- script: rm -rf $HOME/.git-credentials
displayName: 'Clear Cached Credentials'

20
.ci/yuzu-verify.yml Normal file
View File

@@ -0,0 +1,20 @@
stages:
- stage: format
displayName: 'format'
jobs:
- job: format
displayName: 'clang'
pool:
vmImage: ubuntu-latest
steps:
- template: ./templates/format-check.yml
parameters:
artifactSource: 'false'
- stage: build
displayName: 'build'
dependsOn: format
jobs:
- template: ./templates/build-standard.yml
parameters:
cache: 'false'
- template: ./templates/build-testing.yml

View File

@@ -81,7 +81,10 @@ set(HASH_FILES
"${VIDEO_CORE}/shader/decode/register_set_predicate.cpp"
"${VIDEO_CORE}/shader/decode/shift.cpp"
"${VIDEO_CORE}/shader/decode/video.cpp"
"${VIDEO_CORE}/shader/decode/warp.cpp"
"${VIDEO_CORE}/shader/decode/xmad.cpp"
"${VIDEO_CORE}/shader/control_flow.cpp"
"${VIDEO_CORE}/shader/control_flow.h"
"${VIDEO_CORE}/shader/decode.cpp"
"${VIDEO_CORE}/shader/node.h"
"${VIDEO_CORE}/shader/node_helper.cpp"

View File

@@ -2,6 +2,7 @@ yuzu emulator
=============
[![Travis CI Build Status](https://travis-ci.org/yuzu-emu/yuzu.svg?branch=master)](https://travis-ci.org/yuzu-emu/yuzu)
[![AppVeyor CI Build Status](https://ci.appveyor.com/api/projects/status/77k97svb2usreu68?svg=true)](https://ci.appveyor.com/project/bunnei/yuzu)
[![Azure Mainline CI Build Status](https://dev.azure.com/yuzu-emu/yuzu/_apis/build/status/yuzu%20mainline?branchName=master)](https://dev.azure.com/yuzu-emu/yuzu/)
yuzu is an experimental open-source emulator for the Nintendo Switch from the creators of [Citra](https://citra-emu.org/).

View File

@@ -73,13 +73,15 @@ private:
EffectInStatus info{};
};
AudioRenderer::AudioRenderer(Core::Timing::CoreTiming& core_timing, AudioRendererParameter params,
Kernel::SharedPtr<Kernel::WritableEvent> buffer_event)
Kernel::SharedPtr<Kernel::WritableEvent> buffer_event,
std::size_t instance_number)
: worker_params{params}, buffer_event{buffer_event}, voices(params.voice_count),
effects(params.effect_count) {
audio_out = std::make_unique<AudioCore::AudioOut>();
stream = audio_out->OpenStream(core_timing, STREAM_SAMPLE_RATE, STREAM_NUM_CHANNELS,
"AudioRenderer", [=]() { buffer_event->Signal(); });
fmt::format("AudioRenderer-Instance{}", instance_number),
[=]() { buffer_event->Signal(); });
audio_out->StartStream(stream);
QueueMixedBuffer(0);
@@ -217,13 +219,15 @@ std::vector<s16> AudioRenderer::VoiceState::DequeueSamples(std::size_t sample_co
if (offset == samples.size()) {
offset = 0;
if (!wave_buffer.is_looping) {
if (!wave_buffer.is_looping && wave_buffer.buffer_sz) {
SetWaveIndex(wave_index + 1);
}
out_status.wave_buffer_consumed++;
if (wave_buffer.buffer_sz) {
out_status.wave_buffer_consumed++;
}
if (wave_buffer.end_of_stream) {
if (wave_buffer.end_of_stream || wave_buffer.buffer_sz == 0) {
info.play_state = PlayState::Paused;
}
}

View File

@@ -215,7 +215,8 @@ static_assert(sizeof(UpdateDataHeader) == 0x40, "UpdateDataHeader has wrong size
class AudioRenderer {
public:
AudioRenderer(Core::Timing::CoreTiming& core_timing, AudioRendererParameter params,
Kernel::SharedPtr<Kernel::WritableEvent> buffer_event);
Kernel::SharedPtr<Kernel::WritableEvent> buffer_event,
std::size_t instance_number);
~AudioRenderer();
std::vector<u8> UpdateAudioRenderer(const std::vector<u8>& input_params);

View File

@@ -55,7 +55,10 @@ add_custom_command(OUTPUT scm_rev.cpp
"${VIDEO_CORE}/shader/decode/register_set_predicate.cpp"
"${VIDEO_CORE}/shader/decode/shift.cpp"
"${VIDEO_CORE}/shader/decode/video.cpp"
"${VIDEO_CORE}/shader/decode/warp.cpp"
"${VIDEO_CORE}/shader/decode/xmad.cpp"
"${VIDEO_CORE}/shader/control_flow.cpp"
"${VIDEO_CORE}/shader/control_flow.h"
"${VIDEO_CORE}/shader/decode.cpp"
"${VIDEO_CORE}/shader/node.h"
"${VIDEO_CORE}/shader/node_helper.cpp"

View File

@@ -3,6 +3,7 @@
#pragma once
#include <cstddef>
#include <memory>
#include <new>
#include <type_traits>
namespace Common {
@@ -37,4 +38,63 @@ constexpr bool IsWordAligned(T value) {
return (value & 0b11) == 0;
}
template <typename T, std::size_t Align = 16>
class AlignmentAllocator {
public:
using value_type = T;
using size_type = std::size_t;
using difference_type = std::ptrdiff_t;
using pointer = T*;
using const_pointer = const T*;
using reference = T&;
using const_reference = const T&;
public:
pointer address(reference r) noexcept {
return std::addressof(r);
}
const_pointer address(const_reference r) const noexcept {
return std::addressof(r);
}
pointer allocate(size_type n) {
return static_cast<pointer>(::operator new (n * sizeof(value_type), std::align_val_t{Align}));
}
void deallocate(pointer p, size_type) {
::operator delete (p, std::align_val_t{Align});
}
void construct(pointer p, const value_type& wert) {
new (p) value_type(wert);
}
void destroy(pointer p) {
p->~value_type();
}
size_type max_size() const noexcept {
return size_type(-1) / sizeof(value_type);
}
template <typename T2>
struct rebind {
using other = AlignmentAllocator<T2, Align>;
};
bool operator!=(const AlignmentAllocator<T, Align>& other) const noexcept {
return !(*this == other);
}
// Returns true if and only if storage allocated from *this
// can be deallocated from other, and vice versa.
// Always returns true for stateless allocators.
bool operator==(const AlignmentAllocator<T, Align>& other) const noexcept {
return true;
}
};
} // namespace Common
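
For reference, a minimal, self-contained usage sketch of the allocator above, assuming `common/alignment.h` from this change is on the include path; the element type and buffer size are arbitrary.

```cpp
#include <cstdint>
#include <vector>

#include "common/alignment.h"

int main() {
    // Request storage through the 256-byte aligned allocator.
    std::vector<std::uint8_t, Common::AlignmentAllocator<std::uint8_t, 256>> buffer(0x1000);

    // The pointer handed back by the aligned operator new should satisfy the alignment.
    const auto address = reinterpret_cast<std::uintptr_t>(buffer.data());
    return (address % 256) == 0 ? 0 : 1;
}
```

The same allocator is what backs the `Kernel::PhysicalMemory` alias introduced later in this change.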

View File

@@ -111,6 +111,8 @@ add_library(core STATIC
frontend/scope_acquire_window_context.h
gdbstub/gdbstub.cpp
gdbstub/gdbstub.h
hardware_interrupt_manager.cpp
hardware_interrupt_manager.h
hle/ipc.h
hle/ipc_helpers.h
hle/kernel/address_arbiter.cpp
@@ -208,6 +210,8 @@ add_library(core STATIC
hle/service/aoc/aoc_u.h
hle/service/apm/apm.cpp
hle/service/apm/apm.h
hle/service/apm/controller.cpp
hle/service/apm/controller.h
hle/service/apm/interface.cpp
hle/service/apm/interface.h
hle/service/audio/audctl.cpp
@@ -293,6 +297,7 @@ add_library(core STATIC
hle/service/hid/irs.h
hle/service/hid/xcd.cpp
hle/service/hid/xcd.h
hle/service/hid/errors.h
hle/service/hid/controllers/controller_base.cpp
hle/service/hid/controllers/controller_base.h
hle/service/hid/controllers/debug_pad.cpp
@@ -369,6 +374,7 @@ add_library(core STATIC
hle/service/nvdrv/devices/nvmap.h
hle/service/nvdrv/interface.cpp
hle/service/nvdrv/interface.h
hle/service/nvdrv/nvdata.h
hle/service/nvdrv/nvdrv.cpp
hle/service/nvdrv/nvdrv.h
hle/service/nvdrv/nvmemp.cpp

View File

@@ -44,13 +44,6 @@ public:
/// Step CPU by one instruction
virtual void Step() = 0;
/// Maps a backing memory region for the CPU
virtual void MapBackingMemory(VAddr address, std::size_t size, u8* memory,
Kernel::VMAPermission perms) = 0;
/// Unmaps a region of memory that was previously mapped using MapBackingMemory
virtual void UnmapMemory(VAddr address, std::size_t size) = 0;
/// Clear all instruction cache
virtual void ClearInstructionCache() = 0;

View File

@@ -177,15 +177,6 @@ ARM_Dynarmic::ARM_Dynarmic(System& system, ExclusiveMonitor& exclusive_monitor,
ARM_Dynarmic::~ARM_Dynarmic() = default;
void ARM_Dynarmic::MapBackingMemory(u64 address, std::size_t size, u8* memory,
Kernel::VMAPermission perms) {
inner_unicorn.MapBackingMemory(address, size, memory, perms);
}
void ARM_Dynarmic::UnmapMemory(u64 address, std::size_t size) {
inner_unicorn.UnmapMemory(address, size);
}
void ARM_Dynarmic::SetPC(u64 pc) {
jit->SetPC(pc);
}

View File

@@ -23,9 +23,6 @@ public:
ARM_Dynarmic(System& system, ExclusiveMonitor& exclusive_monitor, std::size_t core_index);
~ARM_Dynarmic() override;
void MapBackingMemory(VAddr address, std::size_t size, u8* memory,
Kernel::VMAPermission perms) override;
void UnmapMemory(u64 address, std::size_t size) override;
void SetPC(u64 pc) override;
u64 GetPC() const override;
u64 GetReg(int index) const override;

View File

@@ -50,11 +50,14 @@ static void CodeHook(uc_engine* uc, uint64_t address, uint32_t size, void* user_
static bool UnmappedMemoryHook(uc_engine* uc, uc_mem_type type, u64 addr, int size, u64 value,
void* user_data) {
auto* const system = static_cast<System*>(user_data);
ARM_Interface::ThreadContext ctx{};
Core::CurrentArmInterface().SaveContext(ctx);
system->CurrentArmInterface().SaveContext(ctx);
ASSERT_MSG(false, "Attempted to read from unmapped memory: 0x{:X}, pc=0x{:X}, lr=0x{:X}", addr,
ctx.pc, ctx.cpu_registers[30]);
return {};
return false;
}
ARM_Unicorn::ARM_Unicorn(System& system) : system{system} {
@@ -65,7 +68,7 @@ ARM_Unicorn::ARM_Unicorn(System& system) : system{system} {
uc_hook hook{};
CHECKED(uc_hook_add(uc, &hook, UC_HOOK_INTR, (void*)InterruptHook, this, 0, -1));
CHECKED(uc_hook_add(uc, &hook, UC_HOOK_MEM_INVALID, (void*)UnmappedMemoryHook, this, 0, -1));
CHECKED(uc_hook_add(uc, &hook, UC_HOOK_MEM_INVALID, (void*)UnmappedMemoryHook, &system, 0, -1));
if (GDBStub::IsServerEnabled()) {
CHECKED(uc_hook_add(uc, &hook, UC_HOOK_CODE, (void*)CodeHook, this, 0, -1));
last_bkpt_hit = false;
@@ -76,15 +79,6 @@ ARM_Unicorn::~ARM_Unicorn() {
CHECKED(uc_close(uc));
}
void ARM_Unicorn::MapBackingMemory(VAddr address, std::size_t size, u8* memory,
Kernel::VMAPermission perms) {
CHECKED(uc_mem_map_ptr(uc, address, size, static_cast<u32>(perms), memory));
}
void ARM_Unicorn::UnmapMemory(VAddr address, std::size_t size) {
CHECKED(uc_mem_unmap(uc, address, size));
}
void ARM_Unicorn::SetPC(u64 pc) {
CHECKED(uc_reg_write(uc, UC_ARM64_REG_PC, &pc));
}

View File

@@ -18,9 +18,6 @@ public:
explicit ARM_Unicorn(System& system);
~ARM_Unicorn() override;
void MapBackingMemory(VAddr address, std::size_t size, u8* memory,
Kernel::VMAPermission perms) override;
void UnmapMemory(VAddr address, std::size_t size) override;
void SetPC(u64 pc) override;
u64 GetPC() const override;
u64 GetReg(int index) const override;

View File

@@ -19,12 +19,14 @@
#include "core/file_sys/vfs_concat.h"
#include "core/file_sys/vfs_real.h"
#include "core/gdbstub/gdbstub.h"
#include "core/hardware_interrupt_manager.h"
#include "core/hle/kernel/client_port.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/service/am/applets/applets.h"
#include "core/hle/service/apm/controller.h"
#include "core/hle/service/glue/manager.h"
#include "core/hle/service/service.h"
#include "core/hle/service/sm/sm.h"
@@ -143,14 +145,14 @@ struct System::Impl {
telemetry_session = std::make_unique<Core::TelemetrySession>();
service_manager = std::make_shared<Service::SM::ServiceManager>();
Service::Init(service_manager, system, *virtual_filesystem);
Service::Init(service_manager, system);
GDBStub::Init();
renderer = VideoCore::CreateRenderer(emu_window, system);
if (!renderer->Init()) {
return ResultStatus::ErrorVideoCore;
}
interrupt_manager = std::make_unique<Core::Hardware::InterruptManager>(system);
gpu_core = VideoCore::CreateGPU(system);
is_powered_on = true;
@@ -297,6 +299,7 @@ struct System::Impl {
std::unique_ptr<VideoCore::RendererBase> renderer;
std::unique_ptr<Tegra::GPU> gpu_core;
std::shared_ptr<Tegra::DebugContext> debug_context;
std::unique_ptr<Core::Hardware::InterruptManager> interrupt_manager;
CpuCoreManager cpu_core_manager;
bool is_powered_on = false;
@@ -306,6 +309,9 @@ struct System::Impl {
/// Frontend applets
Service::AM::Applets::AppletManager applet_manager;
/// APM (Performance) services
Service::APM::Controller apm_controller{core_timing};
/// Glue services
Service::Glue::ARPManager arp_manager;
@@ -440,6 +446,14 @@ const Tegra::GPU& System::GPU() const {
return *impl->gpu_core;
}
Core::Hardware::InterruptManager& System::InterruptManager() {
return *impl->interrupt_manager;
}
const Core::Hardware::InterruptManager& System::InterruptManager() const {
return *impl->interrupt_manager;
}
VideoCore::RendererBase& System::Renderer() {
return *impl->renderer;
}
@@ -568,6 +582,14 @@ const Service::Glue::ARPManager& System::GetARPManager() const {
return impl->arp_manager;
}
Service::APM::Controller& System::GetAPMController() {
return impl->apm_controller;
}
const Service::APM::Controller& System::GetAPMController() const {
return impl->apm_controller;
}
System::ResultStatus System::Init(Frontend::EmuWindow& emu_window) {
return impl->Init(*this, emu_window);
}

View File

@@ -43,6 +43,10 @@ struct AppletFrontendSet;
class AppletManager;
} // namespace AM::Applets
namespace APM {
class Controller;
}
namespace Glue {
class ARPManager;
}
@@ -66,6 +70,10 @@ namespace Core::Timing {
class CoreTiming;
}
namespace Core::Hardware {
class InterruptManager;
}
namespace Core {
class ARM_Interface;
@@ -230,6 +238,12 @@ public:
/// Provides a constant reference to the core timing instance.
const Timing::CoreTiming& CoreTiming() const;
/// Provides a reference to the interrupt manager instance.
Core::Hardware::InterruptManager& InterruptManager();
/// Provides a constant reference to the interrupt manager instance.
const Core::Hardware::InterruptManager& InterruptManager() const;
/// Provides a reference to the kernel instance.
Kernel::KernelCore& Kernel();
@@ -296,6 +310,10 @@ public:
const Service::Glue::ARPManager& GetARPManager() const;
Service::APM::Controller& GetAPMController();
const Service::APM::Controller& GetAPMController() const;
private:
System();
@@ -319,10 +337,6 @@ private:
static System s_instance;
};
inline ARM_Interface& CurrentArmInterface() {
return System::GetInstance().CurrentArmInterface();
}
inline Kernel::Process* CurrentProcess() {
return System::GetInstance().CurrentProcess();
}

View File

@@ -53,16 +53,12 @@ bool CpuBarrier::Rendezvous() {
Cpu::Cpu(System& system, ExclusiveMonitor& exclusive_monitor, CpuBarrier& cpu_barrier,
std::size_t core_index)
: cpu_barrier{cpu_barrier}, core_timing{system.CoreTiming()}, core_index{core_index} {
if (Settings::values.cpu_jit_enabled) {
#ifdef ARCHITECTURE_x86_64
arm_interface = std::make_unique<ARM_Dynarmic>(system, exclusive_monitor, core_index);
arm_interface = std::make_unique<ARM_Dynarmic>(system, exclusive_monitor, core_index);
#else
arm_interface = std::make_unique<ARM_Unicorn>(system);
LOG_WARNING(Core, "CPU JIT requested, but Dynarmic not available");
arm_interface = std::make_unique<ARM_Unicorn>(system);
LOG_WARNING(Core, "CPU JIT requested, but Dynarmic not available");
#endif
} else {
arm_interface = std::make_unique<ARM_Unicorn>(system);
}
scheduler = std::make_unique<Kernel::Scheduler>(system, *arm_interface);
}
@@ -70,15 +66,12 @@ Cpu::Cpu(System& system, ExclusiveMonitor& exclusive_monitor, CpuBarrier& cpu_ba
Cpu::~Cpu() = default;
std::unique_ptr<ExclusiveMonitor> Cpu::MakeExclusiveMonitor(std::size_t num_cores) {
if (Settings::values.cpu_jit_enabled) {
#ifdef ARCHITECTURE_x86_64
return std::make_unique<DynarmicExclusiveMonitor>(num_cores);
return std::make_unique<DynarmicExclusiveMonitor>(num_cores);
#else
return nullptr; // TODO(merry): Passthrough exclusive monitor
// TODO(merry): Passthrough exclusive monitor
return nullptr;
#endif
} else {
return nullptr; // TODO(merry): Passthrough exclusive monitor
}
}
void Cpu::RunLoop(bool tight_loop) {

View File

@@ -94,6 +94,10 @@ u64 ProgramMetadata::GetFilesystemPermissions() const {
return aci_file_access.permissions;
}
u32 ProgramMetadata::GetSystemResourceSize() const {
return npdm_header.system_resource_size;
}
const ProgramMetadata::KernelCapabilityDescriptors& ProgramMetadata::GetKernelCapabilities() const {
return aci_kernel_capabilities;
}

View File

@@ -58,6 +58,7 @@ public:
u32 GetMainThreadStackSize() const;
u64 GetTitleID() const;
u64 GetFilesystemPermissions() const;
u32 GetSystemResourceSize() const;
const KernelCapabilityDescriptors& GetKernelCapabilities() const;
void Print() const;
@@ -76,7 +77,8 @@ private:
u8 reserved_3;
u8 main_thread_priority;
u8 main_thread_cpu;
std::array<u8, 8> reserved_4;
std::array<u8, 4> reserved_4;
u32_le system_resource_size;
u32_le process_category;
u32_le main_stack_size;
std::array<u8, 0x10> application_name;

View File

@@ -0,0 +1,30 @@
// Copyright 2019 Yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/core.h"
#include "core/core_timing.h"
#include "core/hardware_interrupt_manager.h"
#include "core/hle/service/nvdrv/interface.h"
#include "core/hle/service/sm/sm.h"
namespace Core::Hardware {
InterruptManager::InterruptManager(Core::System& system_in) : system(system_in) {
gpu_interrupt_event =
system.CoreTiming().RegisterEvent("GPUInterrupt", [this](u64 message, s64) {
auto nvdrv = system.ServiceManager().GetService<Service::Nvidia::NVDRV>("nvdrv");
const u32 syncpt = static_cast<u32>(message >> 32);
const u32 value = static_cast<u32>(message);
nvdrv->SignalGPUInterruptSyncpt(syncpt, value);
});
}
InterruptManager::~InterruptManager() = default;
void InterruptManager::GPUInterruptSyncpt(const u32 syncpoint_id, const u32 value) {
const u64 msg = (static_cast<u64>(syncpoint_id) << 32ULL) | value;
system.CoreTiming().ScheduleEvent(10, gpu_interrupt_event, msg);
}
} // namespace Core::Hardware
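
The event message above packs the syncpoint id into the upper 32 bits and the value into the lower 32 bits of a single u64. A small standalone sketch of that encoding and the matching decode performed in the scheduled callback, using plain standard types rather than yuzu's aliases:

```cpp
#include <cassert>
#include <cstdint>

int main() {
    const std::uint32_t syncpoint_id = 7;
    const std::uint32_t value = 0xDEADBEEF;

    // Encode: syncpoint id in the upper 32 bits, value in the lower 32 bits.
    const std::uint64_t message =
        (static_cast<std::uint64_t>(syncpoint_id) << 32) | value;

    // Decode, as the callback does before signalling nvdrv.
    const auto decoded_syncpt = static_cast<std::uint32_t>(message >> 32);
    const auto decoded_value = static_cast<std::uint32_t>(message);

    assert(decoded_syncpt == syncpoint_id);
    assert(decoded_value == value);
    return 0;
}
```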

View File

@@ -0,0 +1,31 @@
// Copyright 2019 Yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "common/common_types.h"
namespace Core {
class System;
}
namespace Core::Timing {
struct EventType;
}
namespace Core::Hardware {
class InterruptManager {
public:
explicit InterruptManager(Core::System& system);
~InterruptManager();
void GPUInterruptSyncpt(u32 syncpoint_id, u32 value);
private:
Core::System& system;
Core::Timing::EventType* gpu_interrupt_event{};
};
} // namespace Core::Hardware

View File

@@ -8,6 +8,7 @@
#include <vector>
#include "common/common_types.h"
#include "core/hle/kernel/physical_memory.h"
namespace Kernel {
@@ -77,7 +78,7 @@ struct CodeSet final {
}
/// The overall data that backs this code set.
std::vector<u8> memory;
Kernel::PhysicalMemory memory;
/// The segments that comprise this code set.
std::array<Segment, 3> segments;

View File

@@ -0,0 +1,19 @@
// Copyright 2019 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <vector>
#include "common/alignment.h"
#include "common/common_types.h"
namespace Kernel {
// This encapsulation serves two purposes:
// - First, to encapsulate host physical memory under a single type and set a
// standard for managing it.
// - Second, to ensure all host backing memory used is aligned to 256 bytes due
// to strict alignment restrictions on GPU memory.
using PhysicalMemory = std::vector<u8, Common::AlignmentAllocator<u8, 256>>;
} // namespace Kernel
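
A sketch of the pattern the rest of this change uses whenever a new backing block is needed; only the helper name below is illustrative.

```cpp
#include <cstddef>
#include <memory>

#include "core/hle/kernel/physical_memory.h"

std::shared_ptr<Kernel::PhysicalMemory> MakeBackingBlock(std::size_t size) {
    // PhysicalMemory is still a std::vector<u8> underneath, so existing vector-based
    // call sites keep working; the custom allocator only changes storage alignment.
    return std::make_shared<Kernel::PhysicalMemory>(size);
}
```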

View File

@@ -129,20 +129,17 @@ u64 Process::GetTotalPhysicalMemoryAvailable() const {
return vm_manager.GetTotalPhysicalMemoryAvailable();
}
u64 Process::GetTotalPhysicalMemoryAvailableWithoutMmHeap() const {
// TODO: Subtract the personal heap size from this when the
// personal heap is implemented.
return GetTotalPhysicalMemoryAvailable();
u64 Process::GetTotalPhysicalMemoryAvailableWithoutSystemResource() const {
return GetTotalPhysicalMemoryAvailable() - GetSystemResourceSize();
}
u64 Process::GetTotalPhysicalMemoryUsed() const {
return vm_manager.GetCurrentHeapSize() + main_thread_stack_size + code_memory_size;
return vm_manager.GetCurrentHeapSize() + main_thread_stack_size + code_memory_size +
GetSystemResourceUsage();
}
u64 Process::GetTotalPhysicalMemoryUsedWithoutMmHeap() const {
// TODO: Subtract the personal heap size from this when the
// personal heap is implemented.
return GetTotalPhysicalMemoryUsed();
u64 Process::GetTotalPhysicalMemoryUsedWithoutSystemResource() const {
return GetTotalPhysicalMemoryUsed() - GetSystemResourceUsage();
}
void Process::RegisterThread(const Thread* thread) {
@@ -172,6 +169,7 @@ ResultCode Process::LoadFromMetadata(const FileSys::ProgramMetadata& metadata) {
program_id = metadata.GetTitleID();
ideal_core = metadata.GetMainThreadCore();
is_64bit_process = metadata.Is64BitProgram();
system_resource_size = metadata.GetSystemResourceSize();
vm_manager.Reset(metadata.GetAddressSpaceType());
@@ -186,19 +184,11 @@ ResultCode Process::LoadFromMetadata(const FileSys::ProgramMetadata& metadata) {
}
void Process::Run(s32 main_thread_priority, u64 stack_size) {
// The kernel always ensures that the given stack size is page aligned.
main_thread_stack_size = Common::AlignUp(stack_size, Memory::PAGE_SIZE);
// Allocate and map the main thread stack
// TODO(bunnei): This is heap area that should be allocated by the kernel and not mapped as part
// of the user address space.
const VAddr mapping_address = vm_manager.GetTLSIORegionEndAddress() - main_thread_stack_size;
vm_manager
.MapMemoryBlock(mapping_address, std::make_shared<std::vector<u8>>(main_thread_stack_size),
0, main_thread_stack_size, MemoryState::Stack)
.Unwrap();
AllocateMainThreadStack(stack_size);
tls_region_address = CreateTLSRegion();
vm_manager.LogLayout();
ChangeStatus(ProcessStatus::Running);
SetupMainThread(*this, kernel, main_thread_priority);
@@ -228,6 +218,9 @@ void Process::PrepareForTermination() {
stop_threads(system.Scheduler(2).GetThreadList());
stop_threads(system.Scheduler(3).GetThreadList());
FreeTLSRegion(tls_region_address);
tls_region_address = 0;
ChangeStatus(ProcessStatus::Exited);
}
@@ -254,7 +247,7 @@ VAddr Process::CreateTLSRegion() {
ASSERT(region_address.Succeeded());
const auto map_result = vm_manager.MapMemoryBlock(
*region_address, std::make_shared<std::vector<u8>>(Memory::PAGE_SIZE), 0,
*region_address, std::make_shared<PhysicalMemory>(Memory::PAGE_SIZE), 0,
Memory::PAGE_SIZE, MemoryState::ThreadLocal);
ASSERT(map_result.Succeeded());
@@ -284,7 +277,7 @@ void Process::FreeTLSRegion(VAddr tls_address) {
}
void Process::LoadModule(CodeSet module_, VAddr base_addr) {
const auto memory = std::make_shared<std::vector<u8>>(std::move(module_.memory));
const auto memory = std::make_shared<PhysicalMemory>(std::move(module_.memory));
const auto MapSegment = [&](const CodeSet::Segment& segment, VMAPermission permissions,
MemoryState memory_state) {
@@ -327,4 +320,16 @@ void Process::ChangeStatus(ProcessStatus new_status) {
WakeupAllWaitingThreads();
}
void Process::AllocateMainThreadStack(u64 stack_size) {
// The kernel always ensures that the given stack size is page aligned.
main_thread_stack_size = Common::AlignUp(stack_size, Memory::PAGE_SIZE);
// Allocate and map the main thread stack
const VAddr mapping_address = vm_manager.GetTLSIORegionEndAddress() - main_thread_stack_size;
vm_manager
.MapMemoryBlock(mapping_address, std::make_shared<PhysicalMemory>(main_thread_stack_size),
0, main_thread_stack_size, MemoryState::Stack)
.Unwrap();
}
} // namespace Kernel
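
AllocateMainThreadStack rounds the requested size up to a page multiple and places the stack so that it ends exactly at the TLS/IO region end. A standalone arithmetic sketch of that placement; the page size matches Memory::PAGE_SIZE, the region end value is made up for illustration, and AlignUp is a local power-of-two helper standing in for Common::AlignUp:

```cpp
#include <cassert>
#include <cstdint>

constexpr std::uint64_t AlignUp(std::uint64_t value, std::uint64_t align) {
    return (value + align - 1) & ~(align - 1);
}

int main() {
    constexpr std::uint64_t page_size = 0x1000;
    constexpr std::uint64_t tls_io_region_end = 0x0000'0080'0000'0000ULL; // illustrative

    const std::uint64_t requested_stack = 0x10'0123; // not page aligned
    const std::uint64_t stack_size = AlignUp(requested_stack, page_size);
    const std::uint64_t mapping_address = tls_io_region_end - stack_size;

    assert(stack_size == 0x10'1000);
    assert(mapping_address + stack_size == tls_io_region_end);
    return 0;
}
```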

View File

@@ -135,6 +135,11 @@ public:
return mutex;
}
/// Gets the address to the process' dedicated TLS region.
VAddr GetTLSRegionAddress() const {
return tls_region_address;
}
/// Gets the current status of the process
ProcessStatus GetStatus() const {
return status;
@@ -168,8 +173,24 @@ public:
return capabilities.GetPriorityMask();
}
u32 IsVirtualMemoryEnabled() const {
return is_virtual_address_memory_enabled;
/// Gets the amount of secure memory to allocate for memory management.
u32 GetSystemResourceSize() const {
return system_resource_size;
}
/// Gets the amount of secure memory currently in use for memory management.
u32 GetSystemResourceUsage() const {
// On hardware, this returns the amount of system resource memory that has
// been used by the kernel. This is problematic for yuzu to emulate, because
// system resource memory is used for page tables -- and yuzu doesn't really
// have a way to calculate how much memory is required for page tables for
// the current process at any given time.
// TODO: Is this even worth implementing? Games may retrieve this value via
// an SDK function that gets used + available system resource size for debug
// or diagnostic purposes. However, it seems unlikely that a game would make
// decisions based on how much system memory is dedicated to its page tables.
// Is returning a value other than zero wise?
return 0;
}
/// Whether this process is an AArch64 or AArch32 process.
@@ -196,15 +217,15 @@ public:
u64 GetTotalPhysicalMemoryAvailable() const;
/// Retrieves the total physical memory available to this process in bytes,
/// without the size of the personal heap added to it.
u64 GetTotalPhysicalMemoryAvailableWithoutMmHeap() const;
/// without the size of the personal system resource heap added to it.
u64 GetTotalPhysicalMemoryAvailableWithoutSystemResource() const;
/// Retrieves the total physical memory used by this process in bytes.
u64 GetTotalPhysicalMemoryUsed() const;
/// Retrieves the total physical memory used by this process in bytes,
/// without the size of the personal heap added to it.
u64 GetTotalPhysicalMemoryUsedWithoutMmHeap() const;
/// without the size of the personal system resource heap added to it.
u64 GetTotalPhysicalMemoryUsedWithoutSystemResource() const;
/// Gets the list of all threads created with this process as their owner.
const std::list<const Thread*>& GetThreadList() const {
@@ -280,6 +301,9 @@ private:
/// a process signal.
void ChangeStatus(ProcessStatus new_status);
/// Allocates the main thread stack for the process, given the stack size in bytes.
void AllocateMainThreadStack(u64 stack_size);
/// Memory manager for this process.
Kernel::VMManager vm_manager;
@@ -298,12 +322,16 @@ private:
/// Title ID corresponding to the process
u64 program_id = 0;
/// Specifies additional memory to be reserved for the process's memory management by the
/// system. When this is non-zero, secure memory is allocated and used for page table allocation
/// instead of using the normal global page tables/memory block management.
u32 system_resource_size = 0;
/// Resource limit descriptor for this process
SharedPtr<ResourceLimit> resource_limit;
/// The ideal CPU core for this process, threads are scheduled on this core by default.
u8 ideal_core = 0;
u32 is_virtual_address_memory_enabled = 0;
/// The Thread Local Storage area is allocated as processes create threads,
/// each TLS area is 0x200 bytes, so one page (0x1000) is split up in 8 parts, and each part
@@ -338,6 +366,9 @@ private:
/// variable related facilities.
Mutex mutex;
/// Address indicating the location of the process' dedicated TLS region.
VAddr tls_region_address = 0;
/// Random values for svcGetInfo RandomEntropy
std::array<u64, RANDOM_ENTROPY_SIZE> random_entropy{};

View File

@@ -28,7 +28,7 @@ SharedPtr<SharedMemory> SharedMemory::Create(KernelCore& kernel, Process* owner_
shared_memory->other_permissions = other_permissions;
if (address == 0) {
shared_memory->backing_block = std::make_shared<std::vector<u8>>(size);
shared_memory->backing_block = std::make_shared<Kernel::PhysicalMemory>(size);
shared_memory->backing_block_offset = 0;
// Refresh the address mappings for the current process.
@@ -59,8 +59,8 @@ SharedPtr<SharedMemory> SharedMemory::Create(KernelCore& kernel, Process* owner_
}
SharedPtr<SharedMemory> SharedMemory::CreateForApplet(
KernelCore& kernel, std::shared_ptr<std::vector<u8>> heap_block, std::size_t offset, u64 size,
MemoryPermission permissions, MemoryPermission other_permissions, std::string name) {
KernelCore& kernel, std::shared_ptr<Kernel::PhysicalMemory> heap_block, std::size_t offset,
u64 size, MemoryPermission permissions, MemoryPermission other_permissions, std::string name) {
SharedPtr<SharedMemory> shared_memory(new SharedMemory(kernel));
shared_memory->owner_process = nullptr;

View File

@@ -10,6 +10,7 @@
#include "common/common_types.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/physical_memory.h"
#include "core/hle/kernel/process.h"
#include "core/hle/result.h"
@@ -62,12 +63,10 @@ public:
* block.
* @param name Optional object name, used for debugging purposes.
*/
static SharedPtr<SharedMemory> CreateForApplet(KernelCore& kernel,
std::shared_ptr<std::vector<u8>> heap_block,
std::size_t offset, u64 size,
MemoryPermission permissions,
MemoryPermission other_permissions,
std::string name = "Unknown Applet");
static SharedPtr<SharedMemory> CreateForApplet(
KernelCore& kernel, std::shared_ptr<Kernel::PhysicalMemory> heap_block, std::size_t offset,
u64 size, MemoryPermission permissions, MemoryPermission other_permissions,
std::string name = "Unknown Applet");
std::string GetTypeName() const override {
return "SharedMemory";
@@ -135,7 +134,7 @@ private:
~SharedMemory() override;
/// Backing memory for this shared memory block.
std::shared_ptr<std::vector<u8>> backing_block;
std::shared_ptr<PhysicalMemory> backing_block;
/// Offset into the backing block for this shared memory.
std::size_t backing_block_offset = 0;
/// Size of the memory block. Page-aligned.

View File

@@ -318,7 +318,14 @@ static ResultCode UnmapMemory(Core::System& system, VAddr dst_addr, VAddr src_ad
return result;
}
return vm_manager.UnmapRange(dst_addr, size);
const auto unmap_res = vm_manager.UnmapRange(dst_addr, size);
// Reprotect the source mapping on success
if (unmap_res.IsSuccess()) {
ASSERT(vm_manager.ReprotectRange(src_addr, size, VMAPermission::ReadWrite).IsSuccess());
}
return unmap_res;
}
/// Connect to an OS service given the port name, returns the handle to the port to out
@@ -729,16 +736,16 @@ static ResultCode GetInfo(Core::System& system, u64* result, u64 info_id, u64 ha
StackRegionBaseAddr = 14,
StackRegionSize = 15,
// 3.0.0+
IsVirtualAddressMemoryEnabled = 16,
PersonalMmHeapUsage = 17,
SystemResourceSize = 16,
SystemResourceUsage = 17,
TitleId = 18,
// 4.0.0+
PrivilegedProcessId = 19,
// 5.0.0+
UserExceptionContextAddr = 20,
// 6.0.0+
TotalPhysicalMemoryAvailableWithoutMmHeap = 21,
TotalPhysicalMemoryUsedWithoutMmHeap = 22,
TotalPhysicalMemoryAvailableWithoutSystemResource = 21,
TotalPhysicalMemoryUsedWithoutSystemResource = 22,
};
const auto info_id_type = static_cast<GetInfoType>(info_id);
@@ -756,12 +763,12 @@ static ResultCode GetInfo(Core::System& system, u64* result, u64 info_id, u64 ha
case GetInfoType::StackRegionSize:
case GetInfoType::TotalPhysicalMemoryAvailable:
case GetInfoType::TotalPhysicalMemoryUsed:
case GetInfoType::IsVirtualAddressMemoryEnabled:
case GetInfoType::PersonalMmHeapUsage:
case GetInfoType::SystemResourceSize:
case GetInfoType::SystemResourceUsage:
case GetInfoType::TitleId:
case GetInfoType::UserExceptionContextAddr:
case GetInfoType::TotalPhysicalMemoryAvailableWithoutMmHeap:
case GetInfoType::TotalPhysicalMemoryUsedWithoutMmHeap: {
case GetInfoType::TotalPhysicalMemoryAvailableWithoutSystemResource:
case GetInfoType::TotalPhysicalMemoryUsedWithoutSystemResource: {
if (info_sub_id != 0) {
return ERR_INVALID_ENUM_VALUE;
}
@@ -822,8 +829,13 @@ static ResultCode GetInfo(Core::System& system, u64* result, u64 info_id, u64 ha
*result = process->GetTotalPhysicalMemoryUsed();
return RESULT_SUCCESS;
case GetInfoType::IsVirtualAddressMemoryEnabled:
*result = process->IsVirtualMemoryEnabled();
case GetInfoType::SystemResourceSize:
*result = process->GetSystemResourceSize();
return RESULT_SUCCESS;
case GetInfoType::SystemResourceUsage:
LOG_WARNING(Kernel_SVC, "(STUBBED) Attempted to query system resource usage");
*result = process->GetSystemResourceUsage();
return RESULT_SUCCESS;
case GetInfoType::TitleId:
@@ -831,17 +843,15 @@ static ResultCode GetInfo(Core::System& system, u64* result, u64 info_id, u64 ha
return RESULT_SUCCESS;
case GetInfoType::UserExceptionContextAddr:
LOG_WARNING(Kernel_SVC,
"(STUBBED) Attempted to query user exception context address, returned 0");
*result = 0;
*result = process->GetTLSRegionAddress();
return RESULT_SUCCESS;
case GetInfoType::TotalPhysicalMemoryAvailableWithoutMmHeap:
*result = process->GetTotalPhysicalMemoryAvailable();
case GetInfoType::TotalPhysicalMemoryAvailableWithoutSystemResource:
*result = process->GetTotalPhysicalMemoryAvailableWithoutSystemResource();
return RESULT_SUCCESS;
case GetInfoType::TotalPhysicalMemoryUsedWithoutMmHeap:
*result = process->GetTotalPhysicalMemoryUsedWithoutMmHeap();
case GetInfoType::TotalPhysicalMemoryUsedWithoutSystemResource:
*result = process->GetTotalPhysicalMemoryUsedWithoutSystemResource();
return RESULT_SUCCESS;
default:
@@ -946,6 +956,86 @@ static ResultCode GetInfo(Core::System& system, u64* result, u64 info_id, u64 ha
}
}
/// Maps memory at a desired address
static ResultCode MapPhysicalMemory(Core::System& system, VAddr addr, u64 size) {
LOG_DEBUG(Kernel_SVC, "called, addr=0x{:016X}, size=0x{:X}", addr, size);
if (!Common::Is4KBAligned(addr)) {
LOG_ERROR(Kernel_SVC, "Address is not aligned to 4KB, 0x{:016X}", addr);
return ERR_INVALID_ADDRESS;
}
if (!Common::Is4KBAligned(size)) {
LOG_ERROR(Kernel_SVC, "Size is not aligned to 4KB, 0x{:X}", size);
return ERR_INVALID_SIZE;
}
if (size == 0) {
LOG_ERROR(Kernel_SVC, "Size is zero");
return ERR_INVALID_SIZE;
}
if (!(addr < addr + size)) {
LOG_ERROR(Kernel_SVC, "Size causes 64-bit overflow of address");
return ERR_INVALID_MEMORY_RANGE;
}
Process* const current_process = system.Kernel().CurrentProcess();
auto& vm_manager = current_process->VMManager();
if (current_process->GetSystemResourceSize() == 0) {
LOG_ERROR(Kernel_SVC, "System Resource Size is zero");
return ERR_INVALID_STATE;
}
if (!vm_manager.IsWithinMapRegion(addr, size)) {
LOG_ERROR(Kernel_SVC, "Range not within map region");
return ERR_INVALID_MEMORY_RANGE;
}
return vm_manager.MapPhysicalMemory(addr, size);
}
/// Unmaps memory previously mapped via MapPhysicalMemory
static ResultCode UnmapPhysicalMemory(Core::System& system, VAddr addr, u64 size) {
LOG_DEBUG(Kernel_SVC, "called, addr=0x{:016X}, size=0x{:X}", addr, size);
if (!Common::Is4KBAligned(addr)) {
LOG_ERROR(Kernel_SVC, "Address is not aligned to 4KB, 0x{:016X}", addr);
return ERR_INVALID_ADDRESS;
}
if (!Common::Is4KBAligned(size)) {
LOG_ERROR(Kernel_SVC, "Size is not aligned to 4KB, 0x{:X}", size);
return ERR_INVALID_SIZE;
}
if (size == 0) {
LOG_ERROR(Kernel_SVC, "Size is zero");
return ERR_INVALID_SIZE;
}
if (!(addr < addr + size)) {
LOG_ERROR(Kernel_SVC, "Size causes 64-bit overflow of address");
return ERR_INVALID_MEMORY_RANGE;
}
Process* const current_process = system.Kernel().CurrentProcess();
auto& vm_manager = current_process->VMManager();
if (current_process->GetSystemResourceSize() == 0) {
LOG_ERROR(Kernel_SVC, "System Resource Size is zero");
return ERR_INVALID_STATE;
}
if (!vm_manager.IsWithinMapRegion(addr, size)) {
LOG_ERROR(Kernel_SVC, "Range not within map region");
return ERR_INVALID_MEMORY_RANGE;
}
return vm_manager.UnmapPhysicalMemory(addr, size);
}
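
Both SVCs above share the same front-line validation: 4 KiB alignment of address and size, a non-zero size, and a guard against the range wrapping the 64-bit address space. A self-contained sketch of those checks; `Is4KBAligned` here is a local stand-in assumed to behave like `Common::Is4KBAligned`:

```cpp
#include <cassert>
#include <cstdint>

constexpr bool Is4KBAligned(std::uint64_t value) {
    return (value & 0xFFF) == 0;
}

int main() {
    const std::uint64_t addr = 0x0000'0010'0000'0000ULL;
    const std::uint64_t size = 0x0000'0000'0020'0000ULL;

    assert(Is4KBAligned(addr) && Is4KBAligned(size));
    // Rejects ranges that would wrap the 64-bit address space.
    assert(addr < addr + size);

    // Example of a range that must be rejected: it wraps past 2^64.
    const std::uint64_t bad_addr = ~std::uint64_t{0} - 0x1000 + 1;
    assert(!(bad_addr < bad_addr + 0x2000));
    return 0;
}
```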
/// Sets the thread activity
static ResultCode SetThreadActivity(Core::System& system, Handle handle, u32 activity) {
LOG_DEBUG(Kernel_SVC, "called, handle=0x{:08X}, activity=0x{:08X}", handle, activity);
@@ -1647,8 +1737,8 @@ static ResultCode SignalProcessWideKey(Core::System& system, VAddr condition_var
// Wait for an address (via Address Arbiter)
static ResultCode WaitForAddress(Core::System& system, VAddr address, u32 type, s32 value,
s64 timeout) {
LOG_WARNING(Kernel_SVC, "called, address=0x{:X}, type=0x{:X}, value=0x{:X}, timeout={}",
address, type, value, timeout);
LOG_TRACE(Kernel_SVC, "called, address=0x{:X}, type=0x{:X}, value=0x{:X}, timeout={}", address,
type, value, timeout);
// If the passed address is a kernel virtual address, return invalid memory state.
if (Memory::IsKernelVirtualAddress(address)) {
@@ -1670,8 +1760,8 @@ static ResultCode WaitForAddress(Core::System& system, VAddr address, u32 type,
// Signals to an address (via Address Arbiter)
static ResultCode SignalToAddress(Core::System& system, VAddr address, u32 type, s32 value,
s32 num_to_wake) {
LOG_WARNING(Kernel_SVC, "called, address=0x{:X}, type=0x{:X}, value=0x{:X}, num_to_wake=0x{:X}",
address, type, value, num_to_wake);
LOG_TRACE(Kernel_SVC, "called, address=0x{:X}, type=0x{:X}, value=0x{:X}, num_to_wake=0x{:X}",
address, type, value, num_to_wake);
// If the passed address is a kernel virtual address, return invalid memory state.
if (Memory::IsKernelVirtualAddress(address)) {
@@ -2303,8 +2393,8 @@ static const FunctionDef SVC_Table[] = {
{0x29, SvcWrap<GetInfo>, "GetInfo"},
{0x2A, nullptr, "FlushEntireDataCache"},
{0x2B, nullptr, "FlushDataCache"},
{0x2C, nullptr, "MapPhysicalMemory"},
{0x2D, nullptr, "UnmapPhysicalMemory"},
{0x2C, SvcWrap<MapPhysicalMemory>, "MapPhysicalMemory"},
{0x2D, SvcWrap<UnmapPhysicalMemory>, "UnmapPhysicalMemory"},
{0x2E, nullptr, "GetFutureThreadInfo"},
{0x2F, nullptr, "GetLastThreadInfo"},
{0x30, SvcWrap<GetResourceLimitLimitValue>, "GetResourceLimitLimitValue"},

View File

@@ -32,6 +32,11 @@ void SvcWrap(Core::System& system) {
FuncReturn(system, func(system, Param(system, 0)).raw);
}
template <ResultCode func(Core::System&, u64, u64)>
void SvcWrap(Core::System& system) {
FuncReturn(system, func(system, Param(system, 0), Param(system, 1)).raw);
}
template <ResultCode func(Core::System&, u32)>
void SvcWrap(Core::System& system) {
FuncReturn(system, func(system, static_cast<u32>(Param(system, 0))).raw);
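
A reduced, self-contained model of what the new two-argument overload does: it pulls the first two SVC parameters out of the guest register context and forwards them to a `ResultCode(Core::System&, u64, u64)` handler such as MapPhysicalMemory. Everything below is illustrative scaffolding, not yuzu's actual types.

```cpp
#include <array>
#include <cstdint>
#include <iostream>

// Toy stand-ins for Core::System and the SVC helpers.
struct System {
    std::array<std::uint64_t, 8> regs{}; // X0..X7 as the SVC handler sees them
};

using ResultCode = std::uint32_t;

std::uint64_t Param(const System& system, int n) {
    return system.regs[n];
}

void FuncReturn(System& system, ResultCode result) {
    system.regs[0] = result; // the result travels back to the guest in X0
}

// Same shape as the overload added above: ResultCode func(Core::System&, u64, u64).
template <ResultCode func(System&, std::uint64_t, std::uint64_t)>
void SvcWrap(System& system) {
    FuncReturn(system, func(system, Param(system, 0), Param(system, 1)));
}

// Stand-in with the same parameter layout as MapPhysicalMemory(system, addr, size).
ResultCode MapPhysicalMemory(System&, std::uint64_t addr, std::uint64_t size) {
    std::cout << "MapPhysicalMemory(addr=0x" << std::hex << addr
              << ", size=0x" << size << ")\n";
    return 0;
}

int main() {
    System system;
    system.regs[0] = 0x10000000; // addr in X0
    system.regs[1] = 0x200000;   // size in X1
    SvcWrap<MapPhysicalMemory>(system); // how the SVC table entry dispatches it
    return static_cast<int>(system.regs[0]);
}
```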

View File

@@ -47,7 +47,7 @@ ResultCode TransferMemory::MapMemory(VAddr address, u64 size, MemoryPermission p
return ERR_INVALID_STATE;
}
backing_block = std::make_shared<std::vector<u8>>(size);
backing_block = std::make_shared<PhysicalMemory>(size);
const auto map_state = owner_permissions == MemoryPermission::None
? MemoryState::TransferMemoryIsolated

View File

@@ -8,6 +8,7 @@
#include <vector>
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/physical_memory.h"
union ResultCode;
@@ -82,7 +83,7 @@ private:
~TransferMemory() override;
/// Memory block backing this instance.
std::shared_ptr<std::vector<u8>> backing_block;
std::shared_ptr<PhysicalMemory> backing_block;
/// The base address for the memory managed by this instance.
VAddr base_address = 0;

View File

@@ -5,13 +5,15 @@
#include <algorithm>
#include <iterator>
#include <utility>
#include "common/alignment.h"
#include "common/assert.h"
#include "common/logging/log.h"
#include "common/memory_hook.h"
#include "core/arm/arm_interface.h"
#include "core/core.h"
#include "core/file_sys/program_metadata.h"
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/resource_limit.h"
#include "core/hle/kernel/vm_manager.h"
#include "core/memory.h"
#include "core/memory_setup.h"
@@ -49,10 +51,14 @@ bool VirtualMemoryArea::CanBeMergedWith(const VirtualMemoryArea& next) const {
type != next.type) {
return false;
}
if (type == VMAType::AllocatedMemoryBlock &&
(backing_block != next.backing_block || offset + size != next.offset)) {
if ((attribute & MemoryAttribute::DeviceMapped) == MemoryAttribute::DeviceMapped) {
// TODO: Can device mapped memory be merged sanely?
// Not merging it may cause inaccuracies versus hardware when memory layout is queried.
return false;
}
if (type == VMAType::AllocatedMemoryBlock) {
return true;
}
if (type == VMAType::BackingMemory && backing_memory + size != next.backing_memory) {
return false;
}
@@ -98,9 +104,9 @@ bool VMManager::IsValidHandle(VMAHandle handle) const {
}
ResultVal<VMManager::VMAHandle> VMManager::MapMemoryBlock(VAddr target,
std::shared_ptr<std::vector<u8>> block,
std::shared_ptr<PhysicalMemory> block,
std::size_t offset, u64 size,
MemoryState state) {
MemoryState state, VMAPermission perm) {
ASSERT(block != nullptr);
ASSERT(offset + size <= block->size());
@@ -109,17 +115,8 @@ ResultVal<VMManager::VMAHandle> VMManager::MapMemoryBlock(VAddr target,
VirtualMemoryArea& final_vma = vma_handle->second;
ASSERT(final_vma.size == size);
system.ArmInterface(0).MapBackingMemory(target, size, block->data() + offset,
VMAPermission::ReadWriteExecute);
system.ArmInterface(1).MapBackingMemory(target, size, block->data() + offset,
VMAPermission::ReadWriteExecute);
system.ArmInterface(2).MapBackingMemory(target, size, block->data() + offset,
VMAPermission::ReadWriteExecute);
system.ArmInterface(3).MapBackingMemory(target, size, block->data() + offset,
VMAPermission::ReadWriteExecute);
final_vma.type = VMAType::AllocatedMemoryBlock;
final_vma.permissions = VMAPermission::ReadWrite;
final_vma.permissions = perm;
final_vma.state = state;
final_vma.backing_block = std::move(block);
final_vma.offset = offset;
@@ -137,11 +134,6 @@ ResultVal<VMManager::VMAHandle> VMManager::MapBackingMemory(VAddr target, u8* me
VirtualMemoryArea& final_vma = vma_handle->second;
ASSERT(final_vma.size == size);
system.ArmInterface(0).MapBackingMemory(target, size, memory, VMAPermission::ReadWriteExecute);
system.ArmInterface(1).MapBackingMemory(target, size, memory, VMAPermission::ReadWriteExecute);
system.ArmInterface(2).MapBackingMemory(target, size, memory, VMAPermission::ReadWriteExecute);
system.ArmInterface(3).MapBackingMemory(target, size, memory, VMAPermission::ReadWriteExecute);
final_vma.type = VMAType::BackingMemory;
final_vma.permissions = VMAPermission::ReadWrite;
final_vma.state = state;
@@ -230,11 +222,6 @@ ResultCode VMManager::UnmapRange(VAddr target, u64 size) {
ASSERT(FindVMA(target)->second.size >= size);
system.ArmInterface(0).UnmapMemory(target, size);
system.ArmInterface(1).UnmapMemory(target, size);
system.ArmInterface(2).UnmapMemory(target, size);
system.ArmInterface(3).UnmapMemory(target, size);
return RESULT_SUCCESS;
}
@@ -274,7 +261,7 @@ ResultVal<VAddr> VMManager::SetHeapSize(u64 size) {
if (heap_memory == nullptr) {
// Initialize heap
heap_memory = std::make_shared<std::vector<u8>>(size);
heap_memory = std::make_shared<PhysicalMemory>(size);
heap_end = heap_region_base + size;
} else {
UnmapRange(heap_region_base, GetCurrentHeapSize());
@@ -308,6 +295,166 @@ ResultVal<VAddr> VMManager::SetHeapSize(u64 size) {
return MakeResult<VAddr>(heap_region_base);
}
ResultCode VMManager::MapPhysicalMemory(VAddr target, u64 size) {
const auto end_addr = target + size;
const auto last_addr = end_addr - 1;
VAddr cur_addr = target;
ResultCode result = RESULT_SUCCESS;
// Check how much memory we've already mapped.
const auto mapped_size_result = SizeOfAllocatedVMAsInRange(target, size);
if (mapped_size_result.Failed()) {
return mapped_size_result.Code();
}
// If we've already mapped the desired amount, return early.
const std::size_t mapped_size = *mapped_size_result;
if (mapped_size == size) {
return RESULT_SUCCESS;
}
// Check that we can map the memory we want.
const auto res_limit = system.CurrentProcess()->GetResourceLimit();
const u64 physmem_remaining = res_limit->GetMaxResourceValue(ResourceType::PhysicalMemory) -
res_limit->GetCurrentResourceValue(ResourceType::PhysicalMemory);
if (physmem_remaining < (size - mapped_size)) {
return ERR_RESOURCE_LIMIT_EXCEEDED;
}
// Keep track of the memory regions we unmap.
std::vector<std::pair<u64, u64>> mapped_regions;
// Iterate, trying to map memory.
{
cur_addr = target;
auto iter = FindVMA(target);
ASSERT_MSG(iter != vma_map.end(), "MapPhysicalMemory iter != end");
while (true) {
const auto& vma = iter->second;
const auto vma_start = vma.base;
const auto vma_end = vma_start + vma.size;
const auto vma_last = vma_end - 1;
// Map the memory block
const auto map_size = std::min(end_addr - cur_addr, vma_end - cur_addr);
if (vma.state == MemoryState::Unmapped) {
const auto map_res =
MapMemoryBlock(cur_addr, std::make_shared<PhysicalMemory>(map_size, 0), 0,
map_size, MemoryState::Heap, VMAPermission::ReadWrite);
result = map_res.Code();
if (result.IsError()) {
break;
}
mapped_regions.emplace_back(cur_addr, map_size);
}
// Break once we hit the end of the range.
if (last_addr <= vma_last) {
break;
}
// Advance to the next block.
cur_addr = vma_end;
iter = FindVMA(cur_addr);
ASSERT_MSG(iter != vma_map.end(), "MapPhysicalMemory iter != end");
}
}
// If we failed, unmap memory.
if (result.IsError()) {
for (const auto [unmap_address, unmap_size] : mapped_regions) {
ASSERT_MSG(UnmapRange(unmap_address, unmap_size).IsSuccess(),
"MapPhysicalMemory un-map on error");
}
return result;
}
// Update amount of mapped physical memory.
physical_memory_mapped += size - mapped_size;
return RESULT_SUCCESS;
}
ResultCode VMManager::UnmapPhysicalMemory(VAddr target, u64 size) {
const auto end_addr = target + size;
const auto last_addr = end_addr - 1;
VAddr cur_addr = target;
ResultCode result = RESULT_SUCCESS;
// Check how much memory is currently mapped.
const auto mapped_size_result = SizeOfUnmappablePhysicalMemoryInRange(target, size);
if (mapped_size_result.Failed()) {
return mapped_size_result.Code();
}
// If we've already unmapped all the memory, return early.
const std::size_t mapped_size = *mapped_size_result;
if (mapped_size == 0) {
return RESULT_SUCCESS;
}
// Keep track of the memory regions we unmap.
std::vector<std::pair<u64, u64>> unmapped_regions;
// Try to unmap regions.
{
cur_addr = target;
auto iter = FindVMA(target);
ASSERT_MSG(iter != vma_map.end(), "UnmapPhysicalMemory iter != end");
while (true) {
const auto& vma = iter->second;
const auto vma_start = vma.base;
const auto vma_end = vma_start + vma.size;
const auto vma_last = vma_end - 1;
// Unmap the memory block
const auto unmap_size = std::min(end_addr - cur_addr, vma_end - cur_addr);
if (vma.state == MemoryState::Heap) {
result = UnmapRange(cur_addr, unmap_size);
if (result.IsError()) {
break;
}
unmapped_regions.emplace_back(cur_addr, unmap_size);
}
// Break once we hit the end of the range.
if (last_addr <= vma_last) {
break;
}
// Advance to the next block.
cur_addr = vma_end;
iter = FindVMA(cur_addr);
ASSERT_MSG(iter != vma_map.end(), "UnmapPhysicalMemory iter != end");
}
}
// If we failed, re-map regions.
// TODO: Preserve memory contents?
if (result.IsError()) {
for (const auto [map_address, map_size] : unmapped_regions) {
const auto remap_res =
MapMemoryBlock(map_address, std::make_shared<PhysicalMemory>(map_size, 0), 0,
map_size, MemoryState::Heap, VMAPermission::None);
ASSERT_MSG(remap_res.Succeeded(), "UnmapPhysicalMemory re-map on error");
}
}
// Update mapped amount
physical_memory_mapped -= mapped_size;
return RESULT_SUCCESS;
}
ResultCode VMManager::MapCodeMemory(VAddr dst_address, VAddr src_address, u64 size) {
constexpr auto ignore_attribute = MemoryAttribute::LockedForIPC | MemoryAttribute::DeviceMapped;
const auto src_check_result = CheckRangeState(
@@ -447,7 +594,7 @@ ResultCode VMManager::MirrorMemory(VAddr dst_addr, VAddr src_addr, u64 size, Mem
ASSERT_MSG(vma_offset + size <= vma->second.size,
"Shared memory exceeds bounds of mapped block");
const std::shared_ptr<std::vector<u8>>& backing_block = vma->second.backing_block;
const std::shared_ptr<PhysicalMemory>& backing_block = vma->second.backing_block;
const std::size_t backing_block_offset = vma->second.offset + vma_offset;
CASCADE_RESULT(auto new_vma,
@@ -455,12 +602,12 @@ ResultCode VMManager::MirrorMemory(VAddr dst_addr, VAddr src_addr, u64 size, Mem
// Protect mirror with permissions from old region
Reprotect(new_vma, vma->second.permissions);
// Remove permissions from old region
Reprotect(vma, VMAPermission::None);
ReprotectRange(src_addr, size, VMAPermission::None);
return RESULT_SUCCESS;
}
void VMManager::RefreshMemoryBlockMappings(const std::vector<u8>* block) {
void VMManager::RefreshMemoryBlockMappings(const PhysicalMemory* block) {
// If this ever proves to have a noticeable performance impact, allow users of the function to
// specify a specific range of addresses to limit the scan to.
for (const auto& p : vma_map) {
@@ -588,14 +735,14 @@ VMManager::VMAIter VMManager::SplitVMA(VMAIter vma_handle, u64 offset_in_vma) {
VMManager::VMAIter VMManager::MergeAdjacent(VMAIter iter) {
const VMAIter next_vma = std::next(iter);
if (next_vma != vma_map.end() && iter->second.CanBeMergedWith(next_vma->second)) {
iter->second.size += next_vma->second.size;
MergeAdjacentVMA(iter->second, next_vma->second);
vma_map.erase(next_vma);
}
if (iter != vma_map.begin()) {
VMAIter prev_vma = std::prev(iter);
if (prev_vma->second.CanBeMergedWith(iter->second)) {
prev_vma->second.size += iter->second.size;
MergeAdjacentVMA(prev_vma->second, iter->second);
vma_map.erase(iter);
iter = prev_vma;
}
@@ -604,6 +751,38 @@ VMManager::VMAIter VMManager::MergeAdjacent(VMAIter iter) {
return iter;
}
void VMManager::MergeAdjacentVMA(VirtualMemoryArea& left, const VirtualMemoryArea& right) {
ASSERT(left.CanBeMergedWith(right));
// Always merge allocated memory blocks, even when they don't share the same backing block.
if (left.type == VMAType::AllocatedMemoryBlock &&
(left.backing_block != right.backing_block || left.offset + left.size != right.offset)) {
// Check if we can save work.
if (left.offset == 0 && left.size == left.backing_block->size()) {
// Fast case: left is an entire backing block.
left.backing_block->insert(left.backing_block->end(),
right.backing_block->begin() + right.offset,
right.backing_block->begin() + right.offset + right.size);
} else {
// Slow case: make a new memory block for left and right.
auto new_memory = std::make_shared<PhysicalMemory>();
new_memory->insert(new_memory->end(), left.backing_block->begin() + left.offset,
left.backing_block->begin() + left.offset + left.size);
new_memory->insert(new_memory->end(), right.backing_block->begin() + right.offset,
right.backing_block->begin() + right.offset + right.size);
left.backing_block = new_memory;
left.offset = 0;
}
// Page table update is needed, because backing memory changed.
left.size += right.size;
UpdatePageTableForVMA(left);
} else {
// Just update the size.
left.size += right.size;
}
}
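
When the two regions do not already share one contiguous backing block, the slow path above builds a fresh block containing both byte ranges in order, so the merged VMA keeps a single block with offset 0. A reduced, illustrative model of that concatenation with plain std::vector:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

using Block = std::vector<std::uint8_t>;

// Build one backing block covering the byte ranges of two adjacent regions.
std::shared_ptr<Block> MergeBackingBlocks(const Block& left, std::size_t left_offset,
                                          std::size_t left_size, const Block& right,
                                          std::size_t right_offset, std::size_t right_size) {
    auto merged = std::make_shared<Block>();
    merged->insert(merged->end(), left.begin() + left_offset,
                   left.begin() + left_offset + left_size);
    merged->insert(merged->end(), right.begin() + right_offset,
                   right.begin() + right_offset + right_size);
    return merged;
}

int main() {
    const Block left{1, 2, 3, 4};
    const Block right{5, 6, 7, 8};
    // Merge the last two bytes of 'left' with the first two bytes of 'right'.
    const auto merged = MergeBackingBlocks(left, 2, 2, right, 0, 2);
    assert((*merged == Block{3, 4, 5, 6}));
    return 0;
}
```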
void VMManager::UpdatePageTableForVMA(const VirtualMemoryArea& vma) {
switch (vma.type) {
case VMAType::Free:
@@ -778,6 +957,84 @@ VMManager::CheckResults VMManager::CheckRangeState(VAddr address, u64 size, Memo
std::make_tuple(initial_state, initial_permissions, initial_attributes & ~ignore_mask));
}
ResultVal<std::size_t> VMManager::SizeOfAllocatedVMAsInRange(VAddr address,
std::size_t size) const {
const VAddr end_addr = address + size;
const VAddr last_addr = end_addr - 1;
std::size_t mapped_size = 0;
VAddr cur_addr = address;
auto iter = FindVMA(cur_addr);
ASSERT_MSG(iter != vma_map.end(), "SizeOfAllocatedVMAsInRange iter != end");
while (true) {
const auto& vma = iter->second;
const VAddr vma_start = vma.base;
const VAddr vma_end = vma_start + vma.size;
const VAddr vma_last = vma_end - 1;
// Add size if relevant.
if (vma.state != MemoryState::Unmapped) {
mapped_size += std::min(end_addr - cur_addr, vma_end - cur_addr);
}
// Break once we hit the end of the range.
if (last_addr <= vma_last) {
break;
}
// Advance to the next block.
cur_addr = vma_end;
iter = std::next(iter);
ASSERT_MSG(iter != vma_map.end(), "SizeOfAllocatedVMAsInRange iter != end");
}
return MakeResult(mapped_size);
}
ResultVal<std::size_t> VMManager::SizeOfUnmappablePhysicalMemoryInRange(VAddr address,
std::size_t size) const {
const VAddr end_addr = address + size;
const VAddr last_addr = end_addr - 1;
std::size_t mapped_size = 0;
VAddr cur_addr = address;
auto iter = FindVMA(cur_addr);
ASSERT_MSG(iter != vma_map.end(), "SizeOfUnmappablePhysicalMemoryInRange iter != end");
while (true) {
const auto& vma = iter->second;
const auto vma_start = vma.base;
const auto vma_end = vma_start + vma.size;
const auto vma_last = vma_end - 1;
const auto state = vma.state;
const auto attr = vma.attribute;
// Memory within region must be free or mapped heap.
if (!((state == MemoryState::Heap && attr == MemoryAttribute::None) ||
(state == MemoryState::Unmapped))) {
return ERR_INVALID_ADDRESS_STATE;
}
// Add size if relevant.
if (state != MemoryState::Unmapped) {
mapped_size += std::min(end_addr - cur_addr, vma_end - cur_addr);
}
// Break once we hit the end of the range.
if (last_addr <= vma_last) {
break;
}
// Advance to the next block.
cur_addr = vma_end;
iter = std::next(iter);
ASSERT_MSG(iter != vma_map.end(), "SizeOfUnmappablePhysicalMemoryInRange iter != end");
}
return MakeResult(mapped_size);
}
u64 VMManager::GetTotalPhysicalMemoryAvailable() const {
LOG_WARNING(Kernel, "(STUBBED) called");
return 0xF8000000;

View File

@@ -11,6 +11,7 @@
#include "common/common_types.h"
#include "common/memory_hook.h"
#include "common/page_table.h"
#include "core/hle/kernel/physical_memory.h"
#include "core/hle/result.h"
#include "core/memory.h"
@@ -290,7 +291,7 @@ struct VirtualMemoryArea {
// Settings for type = AllocatedMemoryBlock
/// Memory block backing this VMA.
std::shared_ptr<std::vector<u8>> backing_block = nullptr;
std::shared_ptr<PhysicalMemory> backing_block = nullptr;
/// Offset into the backing_memory the mapping starts from.
std::size_t offset = 0;
@@ -348,8 +349,9 @@ public:
* @param size Size of the mapping.
* @param state MemoryState tag to attach to the VMA.
*/
ResultVal<VMAHandle> MapMemoryBlock(VAddr target, std::shared_ptr<std::vector<u8>> block,
std::size_t offset, u64 size, MemoryState state);
ResultVal<VMAHandle> MapMemoryBlock(VAddr target, std::shared_ptr<PhysicalMemory> block,
std::size_t offset, u64 size, MemoryState state,
VMAPermission perm = VMAPermission::ReadWrite);
/**
* Maps an unmanaged host memory pointer at a given address.
@@ -450,6 +452,34 @@ public:
///
ResultVal<VAddr> SetHeapSize(u64 size);
/// Maps memory at a given address.
///
/// @param target The virtual address to map memory at.
/// @param size The amount of memory to map.
///
/// @note The destination address must lie within the Map region.
///
/// @note This function requires that SystemResourceSize be non-zero,
/// but only because, were it zero, the resulting page tables could be
/// exploited on hardware by a malicious program. SystemResource usage
/// does not need to be explicitly checked or updated here.
ResultCode MapPhysicalMemory(VAddr target, u64 size);
/// Unmaps memory at a given address.
///
/// @param addr The virtual address to unmap memory at.
/// @param size The amount of memory to unmap.
///
/// @note The destination address must lie within the Map region.
///
/// @note This function requires that SystemResourceSize be non-zero,
/// however, this is just because if it were not then the
/// resulting page tables could be exploited on hardware by
/// a malicious program. SystemResource usage does not need
/// to be explicitly checked or updated here.
ResultCode UnmapPhysicalMemory(VAddr target, u64 size);
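The pair above is presumably driven from the corresponding supervisor-call handlers; a hedged sketch of the intended call pattern (addresses and sizes are made up):

```cpp
// Map 8 MiB of physical memory somewhere inside the Map region, then undo it.
constexpr VAddr target = 0x20000000; // assumed to lie within the Map region
constexpr u64 size = 8 * 1024 * 1024;
if (vm_manager.MapPhysicalMemory(target, size).IsSuccess()) {
    // ... the guest uses the mapping ...
    ASSERT(vm_manager.UnmapPhysicalMemory(target, size).IsSuccess());
}
```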
/// Maps a region of memory as code memory.
///
/// @param dst_address The base address of the region to create the aliasing memory region.
@@ -518,7 +548,7 @@ public:
* Scans all VMAs and updates the page table range of any that use the given vector as backing
* memory. This should be called after any operation that causes reallocation of the vector.
*/
void RefreshMemoryBlockMappings(const std::vector<u8>* block);
void RefreshMemoryBlockMappings(const PhysicalMemory* block);
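The comment above implies a resize-then-refresh pattern; a sketch under that assumption (heap_memory is the member declared further down in this header, the exact call site is not shown here):

```cpp
// Growing the backing vector may reallocate it, invalidating every pointer the
// page table held into the old storage, so the mappings are re-pointed afterwards.
heap_memory->resize(new_heap_size);
RefreshMemoryBlockMappings(heap_memory.get());
```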
/// Dumps the address space layout to the log, for debugging
void LogLayout() const;
@@ -657,6 +687,11 @@ private:
*/
VMAIter MergeAdjacent(VMAIter vma);
/**
* Merges two adjacent VMAs.
*/
void MergeAdjacentVMA(VirtualMemoryArea& left, const VirtualMemoryArea& right);
/// Updates the pages corresponding to this VMA so they match the VMA's attributes.
void UpdatePageTableForVMA(const VirtualMemoryArea& vma);
@@ -701,6 +736,13 @@ private:
MemoryAttribute attribute_mask, MemoryAttribute attribute,
MemoryAttribute ignore_mask) const;
/// Gets the amount of memory currently mapped (state != Unmapped) in a range.
ResultVal<std::size_t> SizeOfAllocatedVMAsInRange(VAddr address, std::size_t size) const;
/// Gets the amount of memory unmappable by UnmapPhysicalMemory in a range.
ResultVal<std::size_t> SizeOfUnmappablePhysicalMemoryInRange(VAddr address,
std::size_t size) const;
/**
* A map covering the entirety of the managed address space, keyed by the `base` field of each
* VMA. It must always be modified by splitting or merging VMAs, so that the invariant
@@ -736,12 +778,17 @@ private:
// the entire virtual address space extents that bound the allocations, including any holes.
// This makes deallocation and reallocation of holes fast and keeps process memory contiguous
// in the emulator address space, allowing Memory::GetPointer to be reasonably safe.
std::shared_ptr<std::vector<u8>> heap_memory;
std::shared_ptr<PhysicalMemory> heap_memory;
// The end of the currently allocated heap. This is not an inclusive
// end of the range. This is essentially 'base_address + current_size'.
VAddr heap_end = 0;
// The current amount of memory mapped via MapPhysicalMemory.
// This is used here (and in Nintendo's kernel) only for debugging, and does not impact
// any behavior.
u64 physical_memory_mapped = 0;
Core::System& system;
};
} // namespace Kernel


@@ -29,7 +29,8 @@
#include "core/hle/service/am/omm.h"
#include "core/hle/service/am/spsm.h"
#include "core/hle/service/am/tcap.h"
#include "core/hle/service/apm/apm.h"
#include "core/hle/service/apm/controller.h"
#include "core/hle/service/apm/interface.h"
#include "core/hle/service/filesystem/filesystem.h"
#include "core/hle/service/ns/ns.h"
#include "core/hle/service/nvflinger/nvflinger.h"
@@ -265,8 +266,8 @@ ISelfController::ISelfController(std::shared_ptr<NVFlinger::NVFlinger> nvflinger
{65, nullptr, "ReportUserIsActive"},
{66, nullptr, "GetCurrentIlluminance"},
{67, nullptr, "IsIlluminanceAvailable"},
{68, nullptr, "SetAutoSleepDisabled"},
{69, nullptr, "IsAutoSleepDisabled"},
{68, &ISelfController::SetAutoSleepDisabled, "SetAutoSleepDisabled"},
{69, &ISelfController::IsAutoSleepDisabled, "IsAutoSleepDisabled"},
{70, nullptr, "ReportMultimediaError"},
{71, nullptr, "GetCurrentIlluminanceEx"},
{80, nullptr, "SetWirelessPriorityMode"},
@@ -453,6 +454,34 @@ void ISelfController::GetIdleTimeDetectionExtension(Kernel::HLERequestContext& c
rb.Push<u32>(idle_time_detection_extension);
}
void ISelfController::SetAutoSleepDisabled(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
is_auto_sleep_disabled = rp.Pop<bool>();
// On the real system, if the previous state of is_auto_sleep_disabled
// differed from the value passed in, this would signal the internal
// window manager to update (and also increment some statistics, such as update counts).
//
// It would also notify an idle handling context of this change.
//
// However, given we're emulating this behavior, most of this can be ignored
// and it's sufficient to simply set the member variable for querying via
// IsAutoSleepDisabled().
LOG_DEBUG(Service_AM, "called. is_auto_sleep_disabled={}", is_auto_sleep_disabled);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
}
void ISelfController::IsAutoSleepDisabled(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_AM, "called.");
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.Push(is_auto_sleep_disabled);
}
void ISelfController::GetAccumulatedSuspendedTickValue(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_AM, "called.");
@@ -520,8 +549,9 @@ void AppletMessageQueue::OperationModeChanged() {
on_operation_mode_changed.writable->Signal();
}
ICommonStateGetter::ICommonStateGetter(std::shared_ptr<AppletMessageQueue> msg_queue)
: ServiceFramework("ICommonStateGetter"), msg_queue(std::move(msg_queue)) {
ICommonStateGetter::ICommonStateGetter(Core::System& system,
std::shared_ptr<AppletMessageQueue> msg_queue)
: ServiceFramework("ICommonStateGetter"), system(system), msg_queue(std::move(msg_queue)) {
// clang-format off
static const FunctionInfo functions[] = {
{0, &ICommonStateGetter::GetEventHandle, "GetEventHandle"},
@@ -554,7 +584,7 @@ ICommonStateGetter::ICommonStateGetter(std::shared_ptr<AppletMessageQueue> msg_q
{63, nullptr, "GetHdcpAuthenticationStateChangeEvent"},
{64, nullptr, "SetTvPowerStateMatchingMode"},
{65, nullptr, "GetApplicationIdByContentActionName"},
{66, nullptr, "SetCpuBoostMode"},
{66, &ICommonStateGetter::SetCpuBoostMode, "SetCpuBoostMode"},
{80, nullptr, "PerformSystemButtonPressingIfInFocus"},
{90, nullptr, "SetPerformanceConfigurationChangedNotification"},
{91, nullptr, "GetCurrentPerformanceConfiguration"},
@@ -635,6 +665,16 @@ void ICommonStateGetter::GetDefaultDisplayResolution(Kernel::HLERequestContext&
}
}
void ICommonStateGetter::SetCpuBoostMode(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_AM, "called, forwarding to APM:SYS");
const auto& sm = system.ServiceManager();
const auto apm_sys = sm.GetService<APM::APM_Sys>("apm:sys");
ASSERT(apm_sys != nullptr);
apm_sys->SetCpuBoostMode(ctx);
}
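The handler above forwards the untouched IPC context to apm:sys; what happens on the receiving end is shown later in the apm diff and amounts to roughly this sketch:

```cpp
// Inside APM_Sys::SetCpuBoostMode (see the apm/interface diff below);
// `controller` is the Controller& member bound at construction.
IPC::RequestParser rp{ctx};
const auto mode = rp.PopEnum<APM::CpuBoostMode>();
controller.SetFromCpuBoostMode(mode);

IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
```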
IStorage::IStorage(std::vector<u8> buffer)
: ServiceFramework("IStorage"), buffer(std::move(buffer)) {
// clang-format off
@@ -663,13 +703,11 @@ void ICommonStateGetter::GetOperationMode(Kernel::HLERequestContext& ctx) {
}
void ICommonStateGetter::GetPerformanceMode(Kernel::HLERequestContext& ctx) {
const bool use_docked_mode{Settings::values.use_docked_mode};
LOG_DEBUG(Service_AM, "called, use_docked_mode={}", use_docked_mode);
LOG_DEBUG(Service_AM, "called");
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.Push(static_cast<u32>(use_docked_mode ? APM::PerformanceMode::Docked
: APM::PerformanceMode::Handheld));
rb.PushEnum(system.GetAPMController().GetCurrentPerformanceMode());
}
class ILibraryAppletAccessor final : public ServiceFramework<ILibraryAppletAccessor> {


@@ -133,6 +133,8 @@ private:
void SetHandlesRequestToDisplay(Kernel::HLERequestContext& ctx);
void SetIdleTimeDetectionExtension(Kernel::HLERequestContext& ctx);
void GetIdleTimeDetectionExtension(Kernel::HLERequestContext& ctx);
void SetAutoSleepDisabled(Kernel::HLERequestContext& ctx);
void IsAutoSleepDisabled(Kernel::HLERequestContext& ctx);
void GetAccumulatedSuspendedTickValue(Kernel::HLERequestContext& ctx);
void GetAccumulatedSuspendedTickChangedEvent(Kernel::HLERequestContext& ctx);
@@ -142,11 +144,13 @@ private:
u32 idle_time_detection_extension = 0;
u64 num_fatal_sections_entered = 0;
bool is_auto_sleep_disabled = false;
};
class ICommonStateGetter final : public ServiceFramework<ICommonStateGetter> {
public:
explicit ICommonStateGetter(std::shared_ptr<AppletMessageQueue> msg_queue);
explicit ICommonStateGetter(Core::System& system,
std::shared_ptr<AppletMessageQueue> msg_queue);
~ICommonStateGetter() override;
private:
@@ -168,7 +172,9 @@ private:
void GetPerformanceMode(Kernel::HLERequestContext& ctx);
void GetBootMode(Kernel::HLERequestContext& ctx);
void GetDefaultDisplayResolution(Kernel::HLERequestContext& ctx);
void SetCpuBoostMode(Kernel::HLERequestContext& ctx);
Core::System& system;
std::shared_ptr<AppletMessageQueue> msg_queue;
};


@@ -42,7 +42,7 @@ private:
IPC::ResponseBuilder rb{ctx, 2, 0, 1};
rb.Push(RESULT_SUCCESS);
rb.PushIpcInterface<ICommonStateGetter>(msg_queue);
rb.PushIpcInterface<ICommonStateGetter>(system, msg_queue);
}
void GetSelfController(Kernel::HLERequestContext& ctx) {
@@ -146,7 +146,7 @@ private:
IPC::ResponseBuilder rb{ctx, 2, 0, 1};
rb.Push(RESULT_SUCCESS);
rb.PushIpcInterface<ICommonStateGetter>(msg_queue);
rb.PushIpcInterface<ICommonStateGetter>(system, msg_queue);
}
void GetSelfController(Kernel::HLERequestContext& ctx) {


@@ -80,7 +80,7 @@ private:
IPC::ResponseBuilder rb{ctx, 2, 0, 1};
rb.Push(RESULT_SUCCESS);
rb.PushIpcInterface<ICommonStateGetter>(msg_queue);
rb.PushIpcInterface<ICommonStateGetter>(system, msg_queue);
}
void GetLibraryAppletCreator(Kernel::HLERequestContext& ctx) {


@@ -2,7 +2,6 @@
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/logging/log.h"
#include "core/hle/ipc_helpers.h"
#include "core/hle/service/apm/apm.h"
#include "core/hle/service/apm/interface.h"
@@ -12,11 +11,15 @@ namespace Service::APM {
Module::Module() = default;
Module::~Module() = default;
void InstallInterfaces(SM::ServiceManager& service_manager) {
void InstallInterfaces(Core::System& system) {
auto module_ = std::make_shared<Module>();
std::make_shared<APM>(module_, "apm")->InstallAsService(service_manager);
std::make_shared<APM>(module_, "apm:p")->InstallAsService(service_manager);
std::make_shared<APM_Sys>()->InstallAsService(service_manager);
std::make_shared<APM>(module_, system.GetAPMController(), "apm")
->InstallAsService(system.ServiceManager());
std::make_shared<APM>(module_, system.GetAPMController(), "apm:p")
->InstallAsService(system.ServiceManager());
std::make_shared<APM>(module_, system.GetAPMController(), "apm:am")
->InstallAsService(system.ServiceManager());
std::make_shared<APM_Sys>(system.GetAPMController())->InstallAsService(system.ServiceManager());
}
} // namespace Service::APM


@@ -8,11 +8,6 @@
namespace Service::APM {
enum class PerformanceMode : u8 {
Handheld = 0,
Docked = 1,
};
class Module final {
public:
Module();
@@ -20,6 +15,6 @@ public:
};
/// Registers all AM services with the specified service manager.
void InstallInterfaces(SM::ServiceManager& service_manager);
void InstallInterfaces(Core::System& system);
} // namespace Service::APM


@@ -0,0 +1,68 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/logging/log.h"
#include "core/core_timing.h"
#include "core/hle/service/apm/controller.h"
#include "core/settings.h"
namespace Service::APM {
constexpr PerformanceConfiguration DEFAULT_PERFORMANCE_CONFIGURATION =
PerformanceConfiguration::Config7;
Controller::Controller(Core::Timing::CoreTiming& core_timing)
: core_timing(core_timing), configs{
{PerformanceMode::Handheld, DEFAULT_PERFORMANCE_CONFIGURATION},
{PerformanceMode::Docked, DEFAULT_PERFORMANCE_CONFIGURATION},
} {}
Controller::~Controller() = default;
void Controller::SetPerformanceConfiguration(PerformanceMode mode,
PerformanceConfiguration config) {
static const std::map<PerformanceConfiguration, u32> PCONFIG_TO_SPEED_MAP{
{PerformanceConfiguration::Config1, 1020}, {PerformanceConfiguration::Config2, 1020},
{PerformanceConfiguration::Config3, 1224}, {PerformanceConfiguration::Config4, 1020},
{PerformanceConfiguration::Config5, 1020}, {PerformanceConfiguration::Config6, 1224},
{PerformanceConfiguration::Config7, 1020}, {PerformanceConfiguration::Config8, 1020},
{PerformanceConfiguration::Config9, 1020}, {PerformanceConfiguration::Config10, 1020},
{PerformanceConfiguration::Config11, 1020}, {PerformanceConfiguration::Config12, 1020},
{PerformanceConfiguration::Config13, 1785}, {PerformanceConfiguration::Config14, 1785},
{PerformanceConfiguration::Config15, 1020}, {PerformanceConfiguration::Config16, 1020},
};
SetClockSpeed(PCONFIG_TO_SPEED_MAP.find(config)->second);
configs.insert_or_assign(mode, config);
}
void Controller::SetFromCpuBoostMode(CpuBoostMode mode) {
constexpr std::array<PerformanceConfiguration, 3> BOOST_MODE_TO_CONFIG_MAP{{
PerformanceConfiguration::Config7,
PerformanceConfiguration::Config13,
PerformanceConfiguration::Config15,
}};
SetPerformanceConfiguration(PerformanceMode::Docked,
BOOST_MODE_TO_CONFIG_MAP.at(static_cast<u32>(mode)));
}
PerformanceMode Controller::GetCurrentPerformanceMode() {
return Settings::values.use_docked_mode ? PerformanceMode::Docked : PerformanceMode::Handheld;
}
PerformanceConfiguration Controller::GetCurrentPerformanceConfiguration(PerformanceMode mode) {
if (configs.find(mode) == configs.end()) {
configs.insert_or_assign(mode, DEFAULT_PERFORMANCE_CONFIGURATION);
}
return configs[mode];
}
void Controller::SetClockSpeed(u32 mhz) {
LOG_INFO(Service_APM, "called, mhz={:08X}", mhz);
// TODO(DarkLordZach): Actually signal core_timing to change clock speed.
}
} // namespace Service::APM


@@ -0,0 +1,70 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <map>
#include "common/common_types.h"
namespace Core::Timing {
class CoreTiming;
}
namespace Service::APM {
enum class PerformanceConfiguration : u32 {
Config1 = 0x00010000,
Config2 = 0x00010001,
Config3 = 0x00010002,
Config4 = 0x00020000,
Config5 = 0x00020001,
Config6 = 0x00020002,
Config7 = 0x00020003,
Config8 = 0x00020004,
Config9 = 0x00020005,
Config10 = 0x00020006,
Config11 = 0x92220007,
Config12 = 0x92220008,
Config13 = 0x92220009,
Config14 = 0x9222000A,
Config15 = 0x9222000B,
Config16 = 0x9222000C,
};
enum class CpuBoostMode : u32 {
Disabled = 0,
Full = 1, // CPU + GPU -> Config 13, 14, 15, or 16
Partial = 2, // GPU Only -> Config 15 or 16
};
enum class PerformanceMode : u8 {
Handheld = 0,
Docked = 1,
};
// Class to manage the state of, and changes to, the emulated system's performance.
// Specifically, this deals with PerformanceMode, which corresponds to the system being docked or
// undocked, and PerformanceConfiguration, which specifies the exact CPU, GPU, and memory clocks to
// operate at. Additionally, this manages 'Boost Mode', which allows games to temporarily overclock
// the system during times of high load -- this simply maps to different PerformanceConfigs to use.
class Controller {
public:
Controller(Core::Timing::CoreTiming& core_timing);
~Controller();
void SetPerformanceConfiguration(PerformanceMode mode, PerformanceConfiguration config);
void SetFromCpuBoostMode(CpuBoostMode mode);
PerformanceMode GetCurrentPerformanceMode();
PerformanceConfiguration GetCurrentPerformanceConfiguration(PerformanceMode mode);
private:
void SetClockSpeed(u32 mhz);
std::map<PerformanceMode, PerformanceConfiguration> configs;
Core::Timing::CoreTiming& core_timing;
};
} // namespace Service::APM
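A minimal usage sketch of the Controller interface declared above (assumes a Core::System reference named system; the calls mirror what the services in this change do):

```cpp
Service::APM::Controller controller{system.CoreTiming()};
controller.SetFromCpuBoostMode(Service::APM::CpuBoostMode::Full); // selects Config13 for docked mode
const auto mode = controller.GetCurrentPerformanceMode();         // Handheld or Docked
const auto config = controller.GetCurrentPerformanceConfiguration(mode); // Config13 when docked,
                                                                         // otherwise the Config7 default
```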


@@ -5,43 +5,32 @@
#include "common/logging/log.h"
#include "core/hle/ipc_helpers.h"
#include "core/hle/service/apm/apm.h"
#include "core/hle/service/apm/controller.h"
#include "core/hle/service/apm/interface.h"
namespace Service::APM {
class ISession final : public ServiceFramework<ISession> {
public:
ISession() : ServiceFramework("ISession") {
ISession(Controller& controller) : ServiceFramework("ISession"), controller(controller) {
static const FunctionInfo functions[] = {
{0, &ISession::SetPerformanceConfiguration, "SetPerformanceConfiguration"},
{1, &ISession::GetPerformanceConfiguration, "GetPerformanceConfiguration"},
{2, nullptr, "SetCpuOverclockEnabled"},
};
RegisterHandlers(functions);
}
private:
enum class PerformanceConfiguration : u32 {
Config1 = 0x00010000,
Config2 = 0x00010001,
Config3 = 0x00010002,
Config4 = 0x00020000,
Config5 = 0x00020001,
Config6 = 0x00020002,
Config7 = 0x00020003,
Config8 = 0x00020004,
Config9 = 0x00020005,
Config10 = 0x00020006,
Config11 = 0x92220007,
Config12 = 0x92220008,
};
void SetPerformanceConfiguration(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
auto mode = static_cast<PerformanceMode>(rp.Pop<u32>());
u32 config = rp.Pop<u32>();
LOG_WARNING(Service_APM, "(STUBBED) called mode={} config={}", static_cast<u32>(mode),
config);
const auto mode = rp.PopEnum<PerformanceMode>();
const auto config = rp.PopEnum<PerformanceConfiguration>();
LOG_DEBUG(Service_APM, "called mode={} config={}", static_cast<u32>(mode),
static_cast<u32>(config));
controller.SetPerformanceConfiguration(mode, config);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
@@ -50,20 +39,23 @@ private:
void GetPerformanceConfiguration(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
auto mode = static_cast<PerformanceMode>(rp.Pop<u32>());
LOG_WARNING(Service_APM, "(STUBBED) called mode={}", static_cast<u32>(mode));
const auto mode = rp.PopEnum<PerformanceMode>();
LOG_DEBUG(Service_APM, "called mode={}", static_cast<u32>(mode));
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.Push<u32>(static_cast<u32>(PerformanceConfiguration::Config1));
rb.PushEnum(controller.GetCurrentPerformanceConfiguration(mode));
}
Controller& controller;
};
APM::APM(std::shared_ptr<Module> apm, const char* name)
: ServiceFramework(name), apm(std::move(apm)) {
APM::APM(std::shared_ptr<Module> apm, Controller& controller, const char* name)
: ServiceFramework(name), apm(std::move(apm)), controller(controller) {
static const FunctionInfo functions[] = {
{0, &APM::OpenSession, "OpenSession"},
{1, nullptr, "GetPerformanceMode"},
{1, &APM::GetPerformanceMode, "GetPerformanceMode"},
{6, nullptr, "IsCpuOverclockEnabled"},
};
RegisterHandlers(functions);
}
@@ -75,10 +67,17 @@ void APM::OpenSession(Kernel::HLERequestContext& ctx) {
IPC::ResponseBuilder rb{ctx, 2, 0, 1};
rb.Push(RESULT_SUCCESS);
rb.PushIpcInterface<ISession>();
rb.PushIpcInterface<ISession>(controller);
}
APM_Sys::APM_Sys() : ServiceFramework{"apm:sys"} {
void APM::GetPerformanceMode(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_APM, "called");
IPC::ResponseBuilder rb{ctx, 2};
rb.PushEnum(controller.GetCurrentPerformanceMode());
}
APM_Sys::APM_Sys(Controller& controller) : ServiceFramework{"apm:sys"}, controller(controller) {
// clang-format off
static const FunctionInfo functions[] = {
{0, nullptr, "RequestPerformanceMode"},
@@ -87,8 +86,8 @@ APM_Sys::APM_Sys() : ServiceFramework{"apm:sys"} {
{3, nullptr, "GetLastThrottlingState"},
{4, nullptr, "ClearLastThrottlingState"},
{5, nullptr, "LoadAndApplySettings"},
{6, nullptr, "SetCpuBoostMode"},
{7, nullptr, "GetCurrentPerformanceConfiguration"},
{6, &APM_Sys::SetCpuBoostMode, "SetCpuBoostMode"},
{7, &APM_Sys::GetCurrentPerformanceConfiguration, "GetCurrentPerformanceConfiguration"},
};
// clang-format on
@@ -102,7 +101,28 @@ void APM_Sys::GetPerformanceEvent(Kernel::HLERequestContext& ctx) {
IPC::ResponseBuilder rb{ctx, 2, 0, 1};
rb.Push(RESULT_SUCCESS);
rb.PushIpcInterface<ISession>();
rb.PushIpcInterface<ISession>(controller);
}
void APM_Sys::SetCpuBoostMode(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
const auto mode = rp.PopEnum<CpuBoostMode>();
LOG_DEBUG(Service_APM, "called, mode={:08X}", static_cast<u32>(mode));
controller.SetFromCpuBoostMode(mode);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
}
void APM_Sys::GetCurrentPerformanceConfiguration(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_APM, "called");
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.PushEnum(
controller.GetCurrentPerformanceConfiguration(controller.GetCurrentPerformanceMode()));
}
} // namespace Service::APM


@@ -8,24 +8,34 @@
namespace Service::APM {
class Controller;
class Module;
class APM final : public ServiceFramework<APM> {
public:
explicit APM(std::shared_ptr<Module> apm, const char* name);
explicit APM(std::shared_ptr<Module> apm, Controller& controller, const char* name);
~APM() override;
private:
void OpenSession(Kernel::HLERequestContext& ctx);
void GetPerformanceMode(Kernel::HLERequestContext& ctx);
std::shared_ptr<Module> apm;
Controller& controller;
};
class APM_Sys final : public ServiceFramework<APM_Sys> {
public:
explicit APM_Sys();
explicit APM_Sys(Controller& controller);
~APM_Sys() override;
void SetCpuBoostMode(Kernel::HLERequestContext& ctx);
private:
void GetPerformanceEvent(Kernel::HLERequestContext& ctx);
void GetCurrentPerformanceConfiguration(Kernel::HLERequestContext& ctx);
Controller& controller;
};
} // namespace Service::APM


@@ -19,16 +19,16 @@
namespace Service::Audio {
void InstallInterfaces(SM::ServiceManager& service_manager) {
void InstallInterfaces(SM::ServiceManager& service_manager, Core::System& system) {
std::make_shared<AudCtl>()->InstallAsService(service_manager);
std::make_shared<AudOutA>()->InstallAsService(service_manager);
std::make_shared<AudOutU>()->InstallAsService(service_manager);
std::make_shared<AudOutU>(system)->InstallAsService(service_manager);
std::make_shared<AudInA>()->InstallAsService(service_manager);
std::make_shared<AudInU>()->InstallAsService(service_manager);
std::make_shared<AudRecA>()->InstallAsService(service_manager);
std::make_shared<AudRecU>()->InstallAsService(service_manager);
std::make_shared<AudRenA>()->InstallAsService(service_manager);
std::make_shared<AudRenU>()->InstallAsService(service_manager);
std::make_shared<AudRenU>(system)->InstallAsService(service_manager);
std::make_shared<CodecCtl>()->InstallAsService(service_manager);
std::make_shared<HwOpus>()->InstallAsService(service_manager);


@@ -4,6 +4,10 @@
#pragma once
namespace Core {
class System;
}
namespace Service::SM {
class ServiceManager;
}
@@ -11,6 +15,6 @@ class ServiceManager;
namespace Service::Audio {
/// Registers all Audio services with the specified service manager.
void InstallInterfaces(SM::ServiceManager& service_manager);
void InstallInterfaces(SM::ServiceManager& service_manager, Core::System& system);
} // namespace Service::Audio


@@ -40,8 +40,8 @@ enum class AudioState : u32 {
class IAudioOut final : public ServiceFramework<IAudioOut> {
public:
IAudioOut(AudoutParams audio_params, AudioCore::AudioOut& audio_core, std::string&& device_name,
std::string&& unique_name)
IAudioOut(Core::System& system, AudoutParams audio_params, AudioCore::AudioOut& audio_core,
std::string&& device_name, std::string&& unique_name)
: ServiceFramework("IAudioOut"), audio_core(audio_core),
device_name(std::move(device_name)), audio_params(audio_params) {
// clang-format off
@@ -65,7 +65,6 @@ public:
RegisterHandlers(functions);
// This is the event handle used to check if the audio buffer was released
auto& system = Core::System::GetInstance();
buffer_event = Kernel::WritableEvent::CreateEventPair(
system.Kernel(), Kernel::ResetType::Manual, "IAudioOutBufferReleased");
@@ -212,6 +211,22 @@ private:
Kernel::EventPair buffer_event;
};
AudOutU::AudOutU(Core::System& system_) : ServiceFramework("audout:u"), system{system_} {
// clang-format off
static const FunctionInfo functions[] = {
{0, &AudOutU::ListAudioOutsImpl, "ListAudioOuts"},
{1, &AudOutU::OpenAudioOutImpl, "OpenAudioOut"},
{2, &AudOutU::ListAudioOutsImpl, "ListAudioOutsAuto"},
{3, &AudOutU::OpenAudioOutImpl, "OpenAudioOutAuto"},
};
// clang-format on
RegisterHandlers(functions);
audio_core = std::make_unique<AudioCore::AudioOut>();
}
AudOutU::~AudOutU() = default;
void AudOutU::ListAudioOutsImpl(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_Audio, "called");
@@ -248,7 +263,7 @@ void AudOutU::OpenAudioOutImpl(Kernel::HLERequestContext& ctx) {
std::string unique_name{fmt::format("{}-{}", device_name, audio_out_interfaces.size())};
auto audio_out_interface = std::make_shared<IAudioOut>(
params, *audio_core, std::move(device_name), std::move(unique_name));
system, params, *audio_core, std::move(device_name), std::move(unique_name));
IPC::ResponseBuilder rb{ctx, 6, 0, 1};
rb.Push(RESULT_SUCCESS);
@@ -256,20 +271,9 @@ void AudOutU::OpenAudioOutImpl(Kernel::HLERequestContext& ctx) {
rb.Push<u32>(params.channel_count);
rb.Push<u32>(static_cast<u32>(AudioCore::Codec::PcmFormat::Int16));
rb.Push<u32>(static_cast<u32>(AudioState::Stopped));
rb.PushIpcInterface<Audio::IAudioOut>(audio_out_interface);
rb.PushIpcInterface<IAudioOut>(audio_out_interface);
audio_out_interfaces.push_back(std::move(audio_out_interface));
}
AudOutU::AudOutU() : ServiceFramework("audout:u") {
static const FunctionInfo functions[] = {{0, &AudOutU::ListAudioOutsImpl, "ListAudioOuts"},
{1, &AudOutU::OpenAudioOutImpl, "OpenAudioOut"},
{2, &AudOutU::ListAudioOutsImpl, "ListAudioOutsAuto"},
{3, &AudOutU::OpenAudioOutImpl, "OpenAudioOutAuto"}};
RegisterHandlers(functions);
audio_core = std::make_unique<AudioCore::AudioOut>();
}
AudOutU::~AudOutU() = default;
} // namespace Service::Audio


@@ -11,6 +11,10 @@ namespace AudioCore {
class AudioOut;
}
namespace Core {
class System;
}
namespace Kernel {
class HLERequestContext;
}
@@ -21,15 +25,17 @@ class IAudioOut;
class AudOutU final : public ServiceFramework<AudOutU> {
public:
AudOutU();
explicit AudOutU(Core::System& system_);
~AudOutU() override;
private:
void ListAudioOutsImpl(Kernel::HLERequestContext& ctx);
void OpenAudioOutImpl(Kernel::HLERequestContext& ctx);
std::vector<std::shared_ptr<IAudioOut>> audio_out_interfaces;
std::unique_ptr<AudioCore::AudioOut> audio_core;
void ListAudioOutsImpl(Kernel::HLERequestContext& ctx);
void OpenAudioOutImpl(Kernel::HLERequestContext& ctx);
Core::System& system;
};
} // namespace Service::Audio


@@ -5,6 +5,7 @@
#include <algorithm>
#include <array>
#include <memory>
#include <string_view>
#include "audio_core/audio_renderer.h"
#include "common/alignment.h"
@@ -25,7 +26,8 @@ namespace Service::Audio {
class IAudioRenderer final : public ServiceFramework<IAudioRenderer> {
public:
explicit IAudioRenderer(AudioCore::AudioRendererParameter audren_params)
explicit IAudioRenderer(Core::System& system, AudioCore::AudioRendererParameter audren_params,
const std::size_t instance_number)
: ServiceFramework("IAudioRenderer") {
// clang-format off
static const FunctionInfo functions[] = {
@@ -45,11 +47,10 @@ public:
// clang-format on
RegisterHandlers(functions);
auto& system = Core::System::GetInstance();
system_event = Kernel::WritableEvent::CreateEventPair(
system.Kernel(), Kernel::ResetType::Manual, "IAudioRenderer:SystemEvent");
renderer = std::make_unique<AudioCore::AudioRenderer>(system.CoreTiming(), audren_params,
system_event.writable);
renderer = std::make_unique<AudioCore::AudioRenderer>(
system.CoreTiming(), audren_params, system_event.writable, instance_number);
}
private:
@@ -159,17 +160,18 @@ private:
class IAudioDevice final : public ServiceFramework<IAudioDevice> {
public:
IAudioDevice() : ServiceFramework("IAudioDevice") {
explicit IAudioDevice(Core::System& system, u32_le revision_num)
: ServiceFramework("IAudioDevice"), revision{revision_num} {
static const FunctionInfo functions[] = {
{0, &IAudioDevice::ListAudioDeviceName, "ListAudioDeviceName"},
{1, &IAudioDevice::SetAudioDeviceOutputVolume, "SetAudioDeviceOutputVolume"},
{2, nullptr, "GetAudioDeviceOutputVolume"},
{2, &IAudioDevice::GetAudioDeviceOutputVolume, "GetAudioDeviceOutputVolume"},
{3, &IAudioDevice::GetActiveAudioDeviceName, "GetActiveAudioDeviceName"},
{4, &IAudioDevice::QueryAudioDeviceSystemEvent, "QueryAudioDeviceSystemEvent"},
{5, &IAudioDevice::GetActiveChannelCount, "GetActiveChannelCount"},
{6, &IAudioDevice::ListAudioDeviceName, "ListAudioDeviceNameAuto"},
{7, &IAudioDevice::SetAudioDeviceOutputVolume, "SetAudioDeviceOutputVolumeAuto"},
{8, nullptr, "GetAudioDeviceOutputVolumeAuto"},
{8, &IAudioDevice::GetAudioDeviceOutputVolume, "GetAudioDeviceOutputVolumeAuto"},
{10, &IAudioDevice::GetActiveAudioDeviceName, "GetActiveAudioDeviceNameAuto"},
{11, nullptr, "QueryAudioDeviceInputEvent"},
{12, &IAudioDevice::QueryAudioDeviceOutputEvent, "QueryAudioDeviceOutputEvent"},
@@ -177,7 +179,7 @@ public:
};
RegisterHandlers(functions);
auto& kernel = Core::System::GetInstance().Kernel();
auto& kernel = system.Kernel();
buffer_event = Kernel::WritableEvent::CreateEventPair(kernel, Kernel::ResetType::Automatic,
"IAudioOutBufferReleasedEvent");
@@ -188,15 +190,47 @@ public:
}
private:
void ListAudioDeviceName(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service_Audio, "(STUBBED) called");
using AudioDeviceName = std::array<char, 256>;
static constexpr std::array<std::string_view, 4> audio_device_names{{
"AudioStereoJackOutput",
"AudioBuiltInSpeakerOutput",
"AudioTvOutput",
"AudioUsbDeviceOutput",
}};
enum class DeviceType {
AHUBHeadphones,
AHUBSpeakers,
HDA,
USBOutput,
};
constexpr std::array<char, 15> audio_interface{{"AudioInterface"}};
ctx.WriteBuffer(audio_interface);
void ListAudioDeviceName(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_Audio, "called");
const bool usb_output_supported =
IsFeatureSupported(AudioFeatures::AudioUSBDeviceOutput, revision);
const std::size_t count = ctx.GetWriteBufferSize() / sizeof(AudioDeviceName);
std::vector<AudioDeviceName> name_buffer;
name_buffer.reserve(audio_device_names.size());
for (std::size_t i = 0; i < count && i < audio_device_names.size(); i++) {
const auto type = static_cast<DeviceType>(i);
if (!usb_output_supported && type == DeviceType::USBOutput) {
continue;
}
const auto& device_name = audio_device_names[i];
auto& entry = name_buffer.emplace_back();
device_name.copy(entry.data(), device_name.size());
}
ctx.WriteBuffer(name_buffer);
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.Push<u32>(1);
rb.Push(static_cast<u32>(name_buffer.size()));
}
void SetAudioDeviceOutputVolume(Kernel::HLERequestContext& ctx) {
@@ -212,15 +246,32 @@ private:
rb.Push(RESULT_SUCCESS);
}
void GetActiveAudioDeviceName(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service_Audio, "(STUBBED) called");
void GetAudioDeviceOutputVolume(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
constexpr std::array<char, 12> audio_interface{{"AudioDevice"}};
ctx.WriteBuffer(audio_interface);
const auto device_name_buffer = ctx.ReadBuffer();
const std::string name = Common::StringFromBuffer(device_name_buffer);
LOG_WARNING(Service_Audio, "(STUBBED) called. name={}", name);
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.Push<u32>(1);
rb.Push(1.0f);
}
void GetActiveAudioDeviceName(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service_Audio, "(STUBBED) called");
// Currently set to always be TV audio output.
const auto& device_name = audio_device_names[2];
AudioDeviceName out_device_name{};
device_name.copy(out_device_name.data(), device_name.size());
ctx.WriteBuffer(out_device_name);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
}
void QueryAudioDeviceSystemEvent(Kernel::HLERequestContext& ctx) {
@@ -249,12 +300,13 @@ private:
rb.PushCopyObjects(audio_output_device_switch_event.readable);
}
u32_le revision = 0;
Kernel::EventPair buffer_event;
Kernel::EventPair audio_output_device_switch_event;
};
AudRenU::AudRenU() : ServiceFramework("audren:u") {
AudRenU::AudRenU(Core::System& system_) : ServiceFramework("audren:u"), system{system_} {
// clang-format off
static const FunctionInfo functions[] = {
{0, &AudRenU::OpenAudioRenderer, "OpenAudioRenderer"},
@@ -327,7 +379,7 @@ void AudRenU::GetAudioRendererWorkBufferSize(Kernel::HLERequestContext& ctx) {
};
// Calculates the portion of the size related to the mix data (and the sorting thereof).
const auto calculate_mix_info_size = [this](const AudioCore::AudioRendererParameter& params) {
const auto calculate_mix_info_size = [](const AudioCore::AudioRendererParameter& params) {
// The size of the mixing info data structure.
constexpr u64 mix_info_size = 0x940;
@@ -399,7 +451,7 @@ void AudRenU::GetAudioRendererWorkBufferSize(Kernel::HLERequestContext& ctx) {
// Calculates the part of the size related to the splitter context.
const auto calculate_splitter_context_size =
[this](const AudioCore::AudioRendererParameter& params) -> u64 {
[](const AudioCore::AudioRendererParameter& params) -> u64 {
if (!IsFeatureSupported(AudioFeatures::Splitter, params.revision)) {
return 0;
}
@@ -446,7 +498,7 @@ void AudRenU::GetAudioRendererWorkBufferSize(Kernel::HLERequestContext& ctx) {
};
// Calculates the part of the size related to performance statistics.
const auto calculate_perf_size = [this](const AudioCore::AudioRendererParameter& params) {
const auto calculate_perf_size = [](const AudioCore::AudioRendererParameter& params) {
// Extra size value appended to the end of the calculation.
constexpr u64 appended = 128;
@@ -473,78 +525,76 @@ void AudRenU::GetAudioRendererWorkBufferSize(Kernel::HLERequestContext& ctx) {
};
// Calculates the part of the size that relates to the audio command buffer.
const auto calculate_command_buffer_size =
[this](const AudioCore::AudioRendererParameter& params) {
constexpr u64 alignment = (buffer_alignment_size - 1) * 2;
const auto calculate_command_buffer_size = [](const AudioCore::AudioRendererParameter& params) {
constexpr u64 alignment = (buffer_alignment_size - 1) * 2;
if (!IsFeatureSupported(AudioFeatures::VariadicCommandBuffer, params.revision)) {
constexpr u64 command_buffer_size = 0x18000;
if (!IsFeatureSupported(AudioFeatures::VariadicCommandBuffer, params.revision)) {
constexpr u64 command_buffer_size = 0x18000;
return command_buffer_size + alignment;
}
return command_buffer_size + alignment;
}
// When the variadic command buffer is supported, this means
// the command generator for the audio renderer can issue commands
// that are (as one would expect) variable in size. So what we need to do
// is determine the maximum possible size for a few command data structures,
// then multiply them by the number of commands present, as indicated by the
// given audio parameters.
// When the variadic command buffer is supported, this means
// the command generator for the audio renderer can issue commands
// that are (as one would expect) variable in size. So what we need to do
// is determine the maximum possible size for a few command data structures,
// then multiply them by the number of commands present, as indicated by the
// given audio parameters.
constexpr u64 max_biquad_filters = 2;
constexpr u64 max_mix_buffers = 24;
constexpr u64 max_biquad_filters = 2;
constexpr u64 max_mix_buffers = 24;
constexpr u64 biquad_filter_command_size = 0x2C;
constexpr u64 biquad_filter_command_size = 0x2C;
constexpr u64 depop_mix_command_size = 0x24;
constexpr u64 depop_setup_command_size = 0x50;
constexpr u64 depop_mix_command_size = 0x24;
constexpr u64 depop_setup_command_size = 0x50;
constexpr u64 effect_command_max_size = 0x540;
constexpr u64 effect_command_max_size = 0x540;
constexpr u64 mix_command_size = 0x1C;
constexpr u64 mix_ramp_command_size = 0x24;
constexpr u64 mix_ramp_grouped_command_size = 0x13C;
constexpr u64 mix_command_size = 0x1C;
constexpr u64 mix_ramp_command_size = 0x24;
constexpr u64 mix_ramp_grouped_command_size = 0x13C;
constexpr u64 perf_command_size = 0x28;
constexpr u64 perf_command_size = 0x28;
constexpr u64 sink_command_size = 0x130;
constexpr u64 sink_command_size = 0x130;
constexpr u64 submix_command_max_size =
depop_mix_command_size + (mix_command_size * max_mix_buffers) * max_mix_buffers;
constexpr u64 submix_command_max_size =
depop_mix_command_size + (mix_command_size * max_mix_buffers) * max_mix_buffers;
constexpr u64 volume_command_size = 0x1C;
constexpr u64 volume_ramp_command_size = 0x20;
constexpr u64 volume_command_size = 0x1C;
constexpr u64 volume_ramp_command_size = 0x20;
constexpr u64 voice_biquad_filter_command_size =
biquad_filter_command_size * max_biquad_filters;
constexpr u64 voice_data_command_size = 0x9C;
const u64 voice_command_max_size =
(params.splitter_count * depop_setup_command_size) +
(voice_data_command_size + voice_biquad_filter_command_size +
volume_ramp_command_size + mix_ramp_grouped_command_size);
constexpr u64 voice_biquad_filter_command_size =
biquad_filter_command_size * max_biquad_filters;
constexpr u64 voice_data_command_size = 0x9C;
const u64 voice_command_max_size =
(params.splitter_count * depop_setup_command_size) +
(voice_data_command_size + voice_biquad_filter_command_size + volume_ramp_command_size +
mix_ramp_grouped_command_size);
// Now calculate the individual elements that comprise the size and add them together.
const u64 effect_commands_size = params.effect_count * effect_command_max_size;
// Now calculate the individual elements that comprise the size and add them together.
const u64 effect_commands_size = params.effect_count * effect_command_max_size;
const u64 final_mix_commands_size =
depop_mix_command_size + volume_command_size * max_mix_buffers;
const u64 final_mix_commands_size =
depop_mix_command_size + volume_command_size * max_mix_buffers;
const u64 perf_commands_size =
perf_command_size *
(CalculateNumPerformanceEntries(params) + max_perf_detail_entries);
const u64 perf_commands_size =
perf_command_size * (CalculateNumPerformanceEntries(params) + max_perf_detail_entries);
const u64 sink_commands_size = params.sink_count * sink_command_size;
const u64 sink_commands_size = params.sink_count * sink_command_size;
const u64 splitter_commands_size =
params.num_splitter_send_channels * max_mix_buffers * mix_ramp_command_size;
const u64 splitter_commands_size =
params.num_splitter_send_channels * max_mix_buffers * mix_ramp_command_size;
const u64 submix_commands_size = params.submix_count * submix_command_max_size;
const u64 submix_commands_size = params.submix_count * submix_command_max_size;
const u64 voice_commands_size = params.voice_count * voice_command_max_size;
const u64 voice_commands_size = params.voice_count * voice_command_max_size;
return effect_commands_size + final_mix_commands_size + perf_commands_size +
sink_commands_size + splitter_commands_size + submix_commands_size +
voice_commands_size + alignment;
};
return effect_commands_size + final_mix_commands_size + perf_commands_size +
sink_commands_size + splitter_commands_size + submix_commands_size +
voice_commands_size + alignment;
};
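For a sense of scale, a worked example of the variadic branch above with assumed parameters (effect_count = 4, sink_count = 1, submix_count = 2, voice_count = 24, splitter_count = 0):

```cpp
// Illustrative arithmetic only; the constants are the ones defined in the lambda above.
// effect_commands_size      = 4 * 0x540                         = 0x1500
// final_mix_commands_size   = 0x24 + 0x1C * 24                  = 0x2C4
// sink_commands_size        = 1 * 0x130                         = 0x130
// submix_commands_size      = 2 * (0x24 + (0x1C * 24) * 24)     = 0x7E48
// voice_commands_size       = 24 * (0x9C + 0x58 + 0x20 + 0x13C) = 0x3780
// The perf and splitter terms plus the alignment padding are then added on top.
```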
IPC::RequestParser rp{ctx};
const auto params = rp.PopRaw<AudioCore::AudioRendererParameter>();
@@ -577,12 +627,16 @@ void AudRenU::GetAudioRendererWorkBufferSize(Kernel::HLERequestContext& ctx) {
}
void AudRenU::GetAudioDeviceService(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_Audio, "called");
IPC::RequestParser rp{ctx};
const u64 aruid = rp.Pop<u64>();
LOG_DEBUG(Service_Audio, "called. aruid={:016X}", aruid);
// Revisionless variant of GetAudioDeviceServiceWithRevisionInfo that
// always assumes the initial release revision (REV1).
IPC::ResponseBuilder rb{ctx, 2, 0, 1};
rb.Push(RESULT_SUCCESS);
rb.PushIpcInterface<Audio::IAudioDevice>();
rb.PushIpcInterface<IAudioDevice>(system, Common::MakeMagic('R', 'E', 'V', '1'));
}
void AudRenU::OpenAudioRendererAuto(Kernel::HLERequestContext& ctx) {
@@ -592,13 +646,19 @@ void AudRenU::OpenAudioRendererAuto(Kernel::HLERequestContext& ctx) {
}
void AudRenU::GetAudioDeviceServiceWithRevisionInfo(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service_Audio, "(STUBBED) called");
struct Parameters {
u32 revision;
u64 aruid;
};
IPC::RequestParser rp{ctx};
const auto [revision, aruid] = rp.PopRaw<Parameters>();
LOG_DEBUG(Service_Audio, "called. revision={:08X}, aruid={:016X}", revision, aruid);
IPC::ResponseBuilder rb{ctx, 2, 0, 1};
rb.Push(RESULT_SUCCESS);
rb.PushIpcInterface<Audio::IAudioDevice>(); // TODO(ogniK): Figure out what is different
// based on the current revision
rb.PushIpcInterface<IAudioDevice>(system, revision);
}
void AudRenU::OpenAudioRendererImpl(Kernel::HLERequestContext& ctx) {
@@ -607,14 +667,16 @@ void AudRenU::OpenAudioRendererImpl(Kernel::HLERequestContext& ctx) {
IPC::ResponseBuilder rb{ctx, 2, 0, 1};
rb.Push(RESULT_SUCCESS);
rb.PushIpcInterface<IAudioRenderer>(params);
rb.PushIpcInterface<IAudioRenderer>(system, params, audren_instance_count++);
}
bool AudRenU::IsFeatureSupported(AudioFeatures feature, u32_le revision) const {
bool IsFeatureSupported(AudioFeatures feature, u32_le revision) {
// Byte swap
const u32_be version_num = revision - Common::MakeMagic('R', 'E', 'V', '0');
switch (feature) {
case AudioFeatures::AudioUSBDeviceOutput:
return version_num >= 4U;
case AudioFeatures::Splitter:
return version_num >= 2U;
case AudioFeatures::PerformanceMetricsVersion2:


@@ -6,6 +6,10 @@
#include "core/hle/service/service.h"
namespace Core {
class System;
}
namespace Kernel {
class HLERequestContext;
}
@@ -14,7 +18,7 @@ namespace Service::Audio {
class AudRenU final : public ServiceFramework<AudRenU> {
public:
explicit AudRenU();
explicit AudRenU(Core::System& system_);
~AudRenU() override;
private:
@@ -26,13 +30,19 @@ private:
void OpenAudioRendererImpl(Kernel::HLERequestContext& ctx);
enum class AudioFeatures : u32 {
Splitter,
PerformanceMetricsVersion2,
VariadicCommandBuffer,
};
bool IsFeatureSupported(AudioFeatures feature, u32_le revision) const;
std::size_t audren_instance_count = 0;
Core::System& system;
};
// Describes a particular audio feature that may be supported in a particular revision.
enum class AudioFeatures : u32 {
AudioUSBDeviceOutput,
Splitter,
PerformanceMetricsVersion2,
VariadicCommandBuffer,
};
// Tests if a particular audio feature is supported with a given audio revision.
bool IsFeatureSupported(AudioFeatures feature, u32_le revision);
} // namespace Service::Audio
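A quick sketch of how the REV magic interacts with the free function declared above (behaviour inferred from the byte-swap line in the .cpp diff; the revision value is illustrative):

```cpp
// REV5 differs from REV0 only in its final character, so the swapped version number is 5.
const u32_le revision = Common::MakeMagic('R', 'E', 'V', '5');
const bool usb_out  = IsFeatureSupported(AudioFeatures::AudioUSBDeviceOutput, revision); // needs REV4+
const bool splitter = IsFeatureSupported(AudioFeatures::Splitter, revision);             // needs REV2+
```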


@@ -472,12 +472,12 @@ void CreateFactories(FileSys::VfsFilesystem& vfs, bool overwrite) {
}
}
void InstallInterfaces(SM::ServiceManager& service_manager, FileSys::VfsFilesystem& vfs) {
void InstallInterfaces(Core::System& system) {
romfs_factory = nullptr;
CreateFactories(vfs, false);
std::make_shared<FSP_LDR>()->InstallAsService(service_manager);
std::make_shared<FSP_PR>()->InstallAsService(service_manager);
std::make_shared<FSP_SRV>()->InstallAsService(service_manager);
CreateFactories(*system.GetFilesystem(), false);
std::make_shared<FSP_LDR>()->InstallAsService(system.ServiceManager());
std::make_shared<FSP_PR>()->InstallAsService(system.ServiceManager());
std::make_shared<FSP_SRV>(system.GetReporter())->InstallAsService(system.ServiceManager());
}
} // namespace Service::FileSystem


@@ -65,7 +65,7 @@ FileSys::VirtualDir GetModificationDumpRoot(u64 title_id);
// above is called.
void CreateFactories(FileSys::VfsFilesystem& vfs, bool overwrite = true);
void InstallInterfaces(SM::ServiceManager& service_manager, FileSys::VfsFilesystem& vfs);
void InstallInterfaces(Core::System& system);
// A class that wraps a VfsDirectory with methods that return ResultVal and ResultCode instead of
// pointers and booleans. This makes using a VfsDirectory with switch services much easier and


@@ -26,6 +26,7 @@
#include "core/hle/kernel/process.h"
#include "core/hle/service/filesystem/filesystem.h"
#include "core/hle/service/filesystem/fsp_srv.h"
#include "core/reporter.h"
namespace Service::FileSystem {
@@ -613,7 +614,7 @@ private:
u64 next_entry_index = 0;
};
FSP_SRV::FSP_SRV() : ServiceFramework("fsp-srv") {
FSP_SRV::FSP_SRV(const Core::Reporter& reporter) : ServiceFramework("fsp-srv"), reporter(reporter) {
// clang-format off
static const FunctionInfo functions[] = {
{0, nullptr, "OpenFileSystem"},
@@ -710,14 +711,14 @@ FSP_SRV::FSP_SRV() : ServiceFramework("fsp-srv") {
{1001, nullptr, "SetSaveDataSize"},
{1002, nullptr, "SetSaveDataRootPath"},
{1003, nullptr, "DisableAutoSaveDataCreation"},
{1004, nullptr, "SetGlobalAccessLogMode"},
{1004, &FSP_SRV::SetGlobalAccessLogMode, "SetGlobalAccessLogMode"},
{1005, &FSP_SRV::GetGlobalAccessLogMode, "GetGlobalAccessLogMode"},
{1006, nullptr, "OutputAccessLogToSdCard"},
{1006, &FSP_SRV::OutputAccessLogToSdCard, "OutputAccessLogToSdCard"},
{1007, nullptr, "RegisterUpdatePartition"},
{1008, nullptr, "OpenRegisteredUpdatePartition"},
{1009, nullptr, "GetAndClearMemoryReportInfo"},
{1010, nullptr, "SetDataStorageRedirectTarget"},
{1011, nullptr, "OutputAccessLogToSdCard2"},
{1011, &FSP_SRV::GetAccessLogVersionInfo, "GetAccessLogVersionInfo"},
{1100, nullptr, "OverrideSaveDataTransferTokenSignVerificationKey"},
{1110, nullptr, "CorruptSaveDataFileSystemBySaveDataSpaceId2"},
{1200, nullptr, "OpenMultiCommitManager"},
@@ -814,21 +815,22 @@ void FSP_SRV::OpenSaveDataInfoReaderBySaveDataSpaceId(Kernel::HLERequestContext&
rb.PushIpcInterface<ISaveDataInfoReader>(std::make_shared<ISaveDataInfoReader>(space));
}
void FSP_SRV::SetGlobalAccessLogMode(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
log_mode = rp.PopEnum<LogMode>();
LOG_DEBUG(Service_FS, "called, log_mode={:08X}", static_cast<u32>(log_mode));
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
}
void FSP_SRV::GetGlobalAccessLogMode(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service_FS, "(STUBBED) called");
LOG_DEBUG(Service_FS, "called");
enum class LogMode : u32 {
Off,
Log,
RedirectToSdCard,
LogToSdCard = Log | RedirectToSdCard,
};
// Given we always want to receive logging information,
// we always specify logging as enabled.
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.PushEnum(LogMode::Log);
rb.PushEnum(log_mode);
}
void FSP_SRV::OpenDataStorageByCurrentProcess(Kernel::HLERequestContext& ctx) {
@@ -902,4 +904,26 @@ void FSP_SRV::OpenPatchDataStorageByCurrentProcess(Kernel::HLERequestContext& ct
rb.Push(FileSys::ERROR_ENTITY_NOT_FOUND);
}
void FSP_SRV::OutputAccessLogToSdCard(Kernel::HLERequestContext& ctx) {
const auto raw = ctx.ReadBuffer();
auto log = Common::StringFromFixedZeroTerminatedBuffer(
reinterpret_cast<const char*>(raw.data()), raw.size());
LOG_DEBUG(Service_FS, "called, log='{}'", log);
reporter.SaveFilesystemAccessReport(log_mode, std::move(log));
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
}
void FSP_SRV::GetAccessLogVersionInfo(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_FS, "called");
IPC::ResponseBuilder rb{ctx, 4};
rb.Push(RESULT_SUCCESS);
rb.PushEnum(AccessLogVersion::Latest);
rb.Push(access_log_program_index);
}
} // namespace Service::FileSystem


@@ -7,15 +7,32 @@
#include <memory>
#include "core/hle/service/service.h"
namespace Core {
class Reporter;
}
namespace FileSys {
class FileSystemBackend;
}
namespace Service::FileSystem {
enum class AccessLogVersion : u32 {
V7_0_0 = 2,
Latest = V7_0_0,
};
enum class LogMode : u32 {
Off,
Log,
RedirectToSdCard,
LogToSdCard = Log | RedirectToSdCard,
};
class FSP_SRV final : public ServiceFramework<FSP_SRV> {
public:
explicit FSP_SRV();
explicit FSP_SRV(const Core::Reporter& reporter);
~FSP_SRV() override;
private:
@@ -26,13 +43,20 @@ private:
void OpenSaveDataFileSystem(Kernel::HLERequestContext& ctx);
void OpenReadOnlySaveDataFileSystem(Kernel::HLERequestContext& ctx);
void OpenSaveDataInfoReaderBySaveDataSpaceId(Kernel::HLERequestContext& ctx);
void SetGlobalAccessLogMode(Kernel::HLERequestContext& ctx);
void GetGlobalAccessLogMode(Kernel::HLERequestContext& ctx);
void OpenDataStorageByCurrentProcess(Kernel::HLERequestContext& ctx);
void OpenDataStorageByDataId(Kernel::HLERequestContext& ctx);
void OpenPatchDataStorageByCurrentProcess(Kernel::HLERequestContext& ctx);
void OutputAccessLogToSdCard(Kernel::HLERequestContext& ctx);
void GetAccessLogVersionInfo(Kernel::HLERequestContext& ctx);
FileSys::VirtualFile romfs;
u64 current_process_id = 0;
u32 access_log_program_index = 0;
LogMode log_mode = LogMode::LogToSdCard;
const Core::Reporter& reporter;
};
} // namespace Service::FileSystem


@@ -22,7 +22,7 @@ public:
{0, nullptr, "GetCompletionEvent"},
{1, nullptr, "Cancel"},
{10100, nullptr, "GetFriendListIds"},
{10101, nullptr, "GetFriendList"},
{10101, &IFriendService::GetFriendList, "GetFriendList"},
{10102, nullptr, "UpdateFriendInfo"},
{10110, nullptr, "GetFriendProfileImage"},
{10200, nullptr, "SendFriendRequestForApplication"},
@@ -99,6 +99,23 @@ public:
}
private:
enum class PresenceFilter : u32 {
None = 0,
Online = 1,
OnlinePlay = 2,
OnlineOrOnlinePlay = 3,
};
struct SizedFriendFilter {
PresenceFilter presence;
u8 is_favorite;
u8 same_app;
u8 same_app_played;
u8 arbitary_app_played;
u64 group_id;
};
static_assert(sizeof(SizedFriendFilter) == 0x10, "SizedFriendFilter is an invalid size");
void DeclareCloseOnlinePlaySession(Kernel::HLERequestContext& ctx) {
// Stub used by Splatoon 2
LOG_WARNING(Service_ACC, "(STUBBED) called");
@@ -112,6 +129,22 @@ private:
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
}
void GetFriendList(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
const auto friend_offset = rp.Pop<u32>();
const auto uuid = rp.PopRaw<Common::UUID>();
[[maybe_unused]] const auto filter = rp.PopRaw<SizedFriendFilter>();
const auto pid = rp.Pop<u64>();
LOG_WARNING(Service_ACC, "(STUBBED) called, offset={}, uuid={}, pid={}", friend_offset,
uuid.Format(), pid);
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.Push<u32>(0); // Friend count
// TODO(ogniK): Return a buffer of u64s which are the "NetworkServiceAccountId"
}
};
class INotificationService final : public ServiceFramework<INotificationService> {


@@ -548,6 +548,37 @@ void Controller_NPad::DisconnectNPad(u32 npad_id) {
connected_controllers[NPadIdToIndex(npad_id)].is_connected = false;
}
void Controller_NPad::StartLRAssignmentMode() {
// Nothing is used internally for LR assignment mode. Since we can already set the
// controller types from boot, there is no need to show a selection screen.
is_in_lr_assignment_mode = true;
}
void Controller_NPad::StopLRAssignmentMode() {
is_in_lr_assignment_mode = false;
}
bool Controller_NPad::SwapNpadAssignment(u32 npad_id_1, u32 npad_id_2) {
if (npad_id_1 == NPAD_HANDHELD || npad_id_2 == NPAD_HANDHELD || npad_id_1 == NPAD_UNKNOWN ||
npad_id_2 == NPAD_UNKNOWN) {
return true;
}
const auto npad_index_1 = NPadIdToIndex(npad_id_1);
const auto npad_index_2 = NPadIdToIndex(npad_id_2);
if (!IsControllerSupported(connected_controllers[npad_index_1].type) ||
!IsControllerSupported(connected_controllers[npad_index_2].type)) {
return false;
}
std::swap(connected_controllers[npad_index_1].type, connected_controllers[npad_index_2].type);
InitNewlyAddedControler(npad_index_1);
InitNewlyAddedControler(npad_index_2);
return true;
}
bool Controller_NPad::IsControllerSupported(NPadControllerType controller) {
if (controller == NPadControllerType::Handheld) {
// Handheld is not even a supported type, so let's stop here


@@ -124,6 +124,10 @@ public:
void ConnectAllDisconnectedControllers();
void ClearAllControllers();
void StartLRAssignmentMode();
void StopLRAssignmentMode();
bool SwapNpadAssignment(u32 npad_id_1, u32 npad_id_2);
// Logical OR for all buttons presses on all controllers
// Specifically for cheat engine and other features.
u32 GetAndResetPressState();
@@ -321,5 +325,6 @@ private:
void RequestPadStateUpdate(u32 npad_id);
std::array<ControllerPad, 10> npad_pad_states{};
bool IsControllerSupported(NPadControllerType controller);
bool is_in_lr_assignment_mode{false};
};
} // namespace Service::HID


@@ -0,0 +1,13 @@
// Copyright 2019 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "core/hle/result.h"
namespace Service::HID {
constexpr ResultCode ERR_NPAD_NOT_CONNECTED{ErrorModule::HID, 710};
} // namespace Service::HID


@@ -16,6 +16,7 @@
#include "core/hle/kernel/readable_event.h"
#include "core/hle/kernel/shared_memory.h"
#include "core/hle/kernel/writable_event.h"
#include "core/hle/service/hid/errors.h"
#include "core/hle/service/hid/hid.h"
#include "core/hle/service/hid/irs.h"
#include "core/hle/service/hid/xcd.h"
@@ -202,11 +203,11 @@ Hid::Hid() : ServiceFramework("hid") {
{123, nullptr, "SetNpadJoyAssignmentModeSingleByDefault"},
{124, &Hid::SetNpadJoyAssignmentModeDual, "SetNpadJoyAssignmentModeDual"},
{125, &Hid::MergeSingleJoyAsDualJoy, "MergeSingleJoyAsDualJoy"},
{126, nullptr, "StartLrAssignmentMode"},
{127, nullptr, "StopLrAssignmentMode"},
{126, &Hid::StartLrAssignmentMode, "StartLrAssignmentMode"},
{127, &Hid::StopLrAssignmentMode, "StopLrAssignmentMode"},
{128, &Hid::SetNpadHandheldActivationMode, "SetNpadHandheldActivationMode"},
{129, nullptr, "GetNpadHandheldActivationMode"},
{130, nullptr, "SwapNpadAssignment"},
{130, &Hid::SwapNpadAssignment, "SwapNpadAssignment"},
{131, nullptr, "IsUnintendedHomeButtonInputProtectionEnabled"},
{132, nullptr, "EnableUnintendedHomeButtonInputProtection"},
{133, nullptr, "SetNpadJoyAssignmentModeSingleWithDestination"},
@@ -733,6 +734,49 @@ void Hid::SetPalmaBoostMode(Kernel::HLERequestContext& ctx) {
rb.Push(RESULT_SUCCESS);
}
void Hid::StartLrAssignmentMode(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
const auto applet_resource_user_id{rp.Pop<u64>()};
LOG_DEBUG(Service_HID, "called, applet_resource_user_id={}", applet_resource_user_id);
auto& controller = applet_resource->GetController<Controller_NPad>(HidController::NPad);
controller.StartLRAssignmentMode();
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
}
void Hid::StopLrAssignmentMode(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
const auto applet_resource_user_id{rp.Pop<u64>()};
LOG_DEBUG(Service_HID, "called, applet_resource_user_id={}", applet_resource_user_id);
auto& controller = applet_resource->GetController<Controller_NPad>(HidController::NPad);
controller.StopLRAssignmentMode();
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
}
void Hid::SwapNpadAssignment(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
const auto npad_1{rp.Pop<u32>()};
const auto npad_2{rp.Pop<u32>()};
const auto applet_resource_user_id{rp.Pop<u64>()};
LOG_DEBUG(Service_HID, "called, applet_resource_user_id={}, npad_1={}, npad_2={}",
applet_resource_user_id, npad_1, npad_2);
auto& controller = applet_resource->GetController<Controller_NPad>(HidController::NPad);
IPC::ResponseBuilder rb{ctx, 2};
if (controller.SwapNpadAssignment(npad_1, npad_2)) {
rb.Push(RESULT_SUCCESS);
} else {
LOG_ERROR(Service_HID, "Npads are not connected!");
rb.Push(ERR_NPAD_NOT_CONNECTED);
}
}
class HidDbg final : public ServiceFramework<HidDbg> {
public:
explicit HidDbg() : ServiceFramework{"hid:dbg"} {


@@ -119,6 +119,9 @@ private:
void StopSixAxisSensor(Kernel::HLERequestContext& ctx);
void SetIsPalmaAllConnectable(Kernel::HLERequestContext& ctx);
void SetPalmaBoostMode(Kernel::HLERequestContext& ctx);
void StartLrAssignmentMode(Kernel::HLERequestContext& ctx);
void StopLrAssignmentMode(Kernel::HLERequestContext& ctx);
void SwapNpadAssignment(Kernel::HLERequestContext& ctx);
std::shared_ptr<IAppletResource> applet_resource;
};


@@ -345,14 +345,16 @@ public:
vm_manager
.MirrorMemory(*map_address, nro_address, nro_size, Kernel::MemoryState::ModuleCode)
.IsSuccess());
ASSERT(vm_manager.UnmapRange(nro_address, nro_size).IsSuccess());
ASSERT(vm_manager.ReprotectRange(nro_address, nro_size, Kernel::VMAPermission::None)
.IsSuccess());
if (bss_size > 0) {
ASSERT(vm_manager
.MirrorMemory(*map_address + nro_size, bss_address, bss_size,
Kernel::MemoryState::ModuleCode)
.IsSuccess());
ASSERT(vm_manager.UnmapRange(bss_address, bss_size).IsSuccess());
ASSERT(vm_manager.ReprotectRange(bss_address, bss_size, Kernel::VMAPermission::None)
.IsSuccess());
}
vm_manager.ReprotectRange(*map_address, header.text_size,
@@ -364,7 +366,8 @@ public:
Core::System::GetInstance().InvalidateCpuInstructionCaches();
nro.insert_or_assign(*map_address, NROInfo{hash, nro_size + bss_size});
nro.insert_or_assign(*map_address,
NROInfo{hash, nro_address, nro_size, bss_address, bss_size});
IPC::ResponseBuilder rb{ctx, 4};
rb.Push(RESULT_SUCCESS);
@@ -409,9 +412,23 @@ public:
}
auto& vm_manager = Core::CurrentProcess()->VMManager();
const auto& nro_size = iter->second.size;
const auto& nro_info = iter->second;
ASSERT(vm_manager.UnmapRange(nro_address, nro_size).IsSuccess());
// Unmap the mirrored memory
ASSERT(
vm_manager.UnmapRange(nro_address, nro_info.nro_size + nro_info.bss_size).IsSuccess());
// Reprotect the source memory
ASSERT(vm_manager
.ReprotectRange(nro_info.nro_address, nro_info.nro_size,
Kernel::VMAPermission::ReadWrite)
.IsSuccess());
if (nro_info.bss_size > 0) {
ASSERT(vm_manager
.ReprotectRange(nro_info.bss_address, nro_info.bss_size,
Kernel::VMAPermission::ReadWrite)
.IsSuccess());
}
Core::System::GetInstance().InvalidateCpuInstructionCaches();
@@ -473,7 +490,10 @@ private:
struct NROInfo {
SHA256Hash hash;
u64 size;
VAddr nro_address;
u64 nro_size;
VAddr bss_address;
u64 bss_size;
};
bool initialized = false;

View File

@@ -48,7 +48,7 @@ public:
{19, nullptr, "Export"},
{20, nullptr, "IsBrokenDatabaseWithClearFlag"},
{21, &IDatabaseService::GetIndex, "GetIndex"},
{22, nullptr, "SetInterfaceVersion"},
{22, &IDatabaseService::SetInterfaceVersion, "SetInterfaceVersion"},
{23, nullptr, "Convert"},
};
// clang-format on
@@ -350,8 +350,22 @@ private:
rb.Push(index);
}
void SetInterfaceVersion(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
current_interface_version = rp.PopRaw<u32>();
LOG_DEBUG(Service_Mii, "called, interface_version={:08X}", current_interface_version);
UNIMPLEMENTED_IF(current_interface_version != 1);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
}
MiiManager db;
u32 current_interface_version = 0;
// Last read offsets of Get functions
std::array<u32, 4> offsets{};
};

View File

@@ -77,7 +77,7 @@ enum class LoadState : u32 {
Done = 1,
};
static void DecryptSharedFont(const std::vector<u32>& input, std::vector<u8>& output,
static void DecryptSharedFont(const std::vector<u32>& input, Kernel::PhysicalMemory& output,
std::size_t& offset) {
ASSERT_MSG(offset + (input.size() * sizeof(u32)) < SHARED_FONT_MEM_SIZE,
"Shared fonts exceed 17 MB!");
@@ -94,7 +94,7 @@ static void DecryptSharedFont(const std::vector<u32>& input, std::vector<u8>& ou
offset += transformed_font.size() * sizeof(u32);
}
static void EncryptSharedFont(const std::vector<u8>& input, std::vector<u8>& output,
static void EncryptSharedFont(const std::vector<u8>& input, Kernel::PhysicalMemory& output,
std::size_t& offset) {
ASSERT_MSG(offset + input.size() + 8 < SHARED_FONT_MEM_SIZE, "Shared fonts exceed 17 MB!");
const u32 KEY = EXPECTED_MAGIC ^ EXPECTED_RESULT;
@@ -121,7 +121,7 @@ struct PL_U::Impl {
return shared_font_regions.at(index);
}
void BuildSharedFontsRawRegions(const std::vector<u8>& input) {
void BuildSharedFontsRawRegions(const Kernel::PhysicalMemory& input) {
// Since we can derive the XOR key, we can populate the offsets
// directly from the shared memory dump
unsigned cur_offset = 0;
@@ -144,7 +144,7 @@ struct PL_U::Impl {
Kernel::SharedPtr<Kernel::SharedMemory> shared_font_mem;
/// Backing memory for the shared font data
std::shared_ptr<std::vector<u8>> shared_font;
std::shared_ptr<Kernel::PhysicalMemory> shared_font;
// Automatically populated based on shared_fonts dump or system archives.
std::vector<FontRegion> shared_font_regions;
@@ -166,7 +166,7 @@ PL_U::PL_U() : ServiceFramework("pl:u"), impl{std::make_unique<Impl>()} {
// Rebuild shared fonts from data ncas
if (nand->HasEntry(static_cast<u64>(FontArchives::Standard),
FileSys::ContentRecordType::Data)) {
impl->shared_font = std::make_shared<std::vector<u8>>(SHARED_FONT_MEM_SIZE);
impl->shared_font = std::make_shared<Kernel::PhysicalMemory>(SHARED_FONT_MEM_SIZE);
for (auto font : SHARED_FONTS) {
const auto nca =
nand->GetEntry(static_cast<u64>(font.first), FileSys::ContentRecordType::Data);
@@ -207,7 +207,7 @@ PL_U::PL_U() : ServiceFramework("pl:u"), impl{std::make_unique<Impl>()} {
}
} else {
impl->shared_font = std::make_shared<std::vector<u8>>(
impl->shared_font = std::make_shared<Kernel::PhysicalMemory>(
SHARED_FONT_MEM_SIZE); // Shared memory always needs to be allocated and kept at a fixed size
const std::string user_path = FileUtil::GetUserPath(FileUtil::UserPath::SysDataDir);

View File

@@ -8,6 +8,11 @@
#include "common/bit_field.h"
#include "common/common_types.h"
#include "common/swap.h"
#include "core/hle/service/nvdrv/nvdata.h"
namespace Core {
class System;
}
namespace Service::Nvidia::Devices {
@@ -15,7 +20,7 @@ namespace Service::Nvidia::Devices {
/// implement the ioctl interface.
class nvdevice {
public:
nvdevice() = default;
explicit nvdevice(Core::System& system) : system{system} {};
virtual ~nvdevice() = default;
union Ioctl {
u32_le raw;
@@ -33,7 +38,11 @@ public:
* @param output A buffer where the output data will be written to.
* @returns The result code of the ioctl.
*/
virtual u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) = 0;
virtual u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
IoctlCtrl& ctrl) = 0;
protected:
Core::System& system;
};
} // namespace Service::Nvidia::Devices
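
The reworked base class threads a Core::System reference and an IoctlCtrl through every device. As a hedged sketch (not part of this diff), a device that never defers work now looks roughly like this; the "nvdummy" name is made up for illustration:

```cpp
// Sketch only: a hypothetical device adapted to the new nvdevice interface.
// Devices that never defer simply ignore the IoctlCtrl parameter; only
// nvhost_ctrl's event wait uses it to ask the service layer to retry later.
class nvdummy final : public nvdevice {
public:
    explicit nvdummy(Core::System& system) : nvdevice(system) {}
    ~nvdummy() override = default;

    u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
              IoctlCtrl& ctrl) override {
        LOG_DEBUG(Service_NVDRV, "called, command=0x{:08X}", command.raw);
        return 0;
    }
};
```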

View File

@@ -13,10 +13,12 @@
namespace Service::Nvidia::Devices {
nvdisp_disp0::nvdisp_disp0(std::shared_ptr<nvmap> nvmap_dev) : nvmap_dev(std::move(nvmap_dev)) {}
nvdisp_disp0::nvdisp_disp0(Core::System& system, std::shared_ptr<nvmap> nvmap_dev)
: nvdevice(system), nvmap_dev(std::move(nvmap_dev)) {}
nvdisp_disp0::~nvdisp_disp0() = default;
u32 nvdisp_disp0::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) {
u32 nvdisp_disp0::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
IoctlCtrl& ctrl) {
UNIMPLEMENTED_MSG("Unimplemented ioctl");
return 0;
}
@@ -34,9 +36,8 @@ void nvdisp_disp0::flip(u32 buffer_handle, u32 offset, u32 format, u32 width, u3
addr, offset, width, height, stride, static_cast<PixelFormat>(format),
transform, crop_rect};
auto& instance = Core::System::GetInstance();
instance.GetPerfStats().EndGameFrame();
instance.GPU().SwapBuffers(framebuffer);
system.GetPerfStats().EndGameFrame();
system.GPU().SwapBuffers(framebuffer);
}
} // namespace Service::Nvidia::Devices

View File

@@ -17,10 +17,11 @@ class nvmap;
class nvdisp_disp0 final : public nvdevice {
public:
explicit nvdisp_disp0(std::shared_ptr<nvmap> nvmap_dev);
explicit nvdisp_disp0(Core::System& system, std::shared_ptr<nvmap> nvmap_dev);
~nvdisp_disp0() override;
u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) override;
u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
IoctlCtrl& ctrl) override;
/// Performs a screen flip, drawing the buffer pointed to by the handle.
void flip(u32 buffer_handle, u32 offset, u32 format, u32 width, u32 height, u32 stride,

View File

@@ -22,10 +22,12 @@ enum {
};
}
nvhost_as_gpu::nvhost_as_gpu(std::shared_ptr<nvmap> nvmap_dev) : nvmap_dev(std::move(nvmap_dev)) {}
nvhost_as_gpu::nvhost_as_gpu(Core::System& system, std::shared_ptr<nvmap> nvmap_dev)
: nvdevice(system), nvmap_dev(std::move(nvmap_dev)) {}
nvhost_as_gpu::~nvhost_as_gpu() = default;
u32 nvhost_as_gpu::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) {
u32 nvhost_as_gpu::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
IoctlCtrl& ctrl) {
LOG_DEBUG(Service_NVDRV, "called, command=0x{:08X}, input_size=0x{:X}, output_size=0x{:X}",
command.raw, input.size(), output.size());
@@ -65,7 +67,7 @@ u32 nvhost_as_gpu::AllocateSpace(const std::vector<u8>& input, std::vector<u8>&
LOG_DEBUG(Service_NVDRV, "called, pages={:X}, page_size={:X}, flags={:X}", params.pages,
params.page_size, params.flags);
auto& gpu = Core::System::GetInstance().GPU();
auto& gpu = system.GPU();
const u64 size{static_cast<u64>(params.pages) * static_cast<u64>(params.page_size)};
if (params.flags & 1) {
params.offset = gpu.MemoryManager().AllocateSpace(params.offset, size, 1);
@@ -85,7 +87,7 @@ u32 nvhost_as_gpu::Remap(const std::vector<u8>& input, std::vector<u8>& output)
std::vector<IoctlRemapEntry> entries(num_entries);
std::memcpy(entries.data(), input.data(), input.size());
auto& gpu = Core::System::GetInstance().GPU();
auto& gpu = system.GPU();
for (const auto& entry : entries) {
LOG_WARNING(Service_NVDRV, "remap entry, offset=0x{:X} handle=0x{:X} pages=0x{:X}",
entry.offset, entry.nvmap_handle, entry.pages);
@@ -136,7 +138,7 @@ u32 nvhost_as_gpu::MapBufferEx(const std::vector<u8>& input, std::vector<u8>& ou
// case to prevent unexpected behavior.
ASSERT(object->id == params.nvmap_handle);
auto& gpu = Core::System::GetInstance().GPU();
auto& gpu = system.GPU();
if (params.flags & 1) {
params.offset = gpu.MemoryManager().MapBufferEx(object->addr, params.offset, object->size);
@@ -173,8 +175,7 @@ u32 nvhost_as_gpu::UnmapBuffer(const std::vector<u8>& input, std::vector<u8>& ou
return 0;
}
params.offset = Core::System::GetInstance().GPU().MemoryManager().UnmapBuffer(params.offset,
itr->second.size);
params.offset = system.GPU().MemoryManager().UnmapBuffer(params.offset, itr->second.size);
buffer_mappings.erase(itr->second.offset);
std::memcpy(output.data(), &params, output.size());

View File

@@ -17,10 +17,11 @@ class nvmap;
class nvhost_as_gpu final : public nvdevice {
public:
explicit nvhost_as_gpu(std::shared_ptr<nvmap> nvmap_dev);
explicit nvhost_as_gpu(Core::System& system, std::shared_ptr<nvmap> nvmap_dev);
~nvhost_as_gpu() override;
u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) override;
u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
IoctlCtrl& ctrl) override;
private:
enum class IoctlCommand : u32_le {

View File

@@ -7,14 +7,20 @@
#include "common/assert.h"
#include "common/logging/log.h"
#include "core/core.h"
#include "core/hle/kernel/readable_event.h"
#include "core/hle/kernel/writable_event.h"
#include "core/hle/service/nvdrv/devices/nvhost_ctrl.h"
#include "video_core/gpu.h"
namespace Service::Nvidia::Devices {
nvhost_ctrl::nvhost_ctrl() = default;
nvhost_ctrl::nvhost_ctrl(Core::System& system, EventInterface& events_interface)
: nvdevice(system), events_interface{events_interface} {}
nvhost_ctrl::~nvhost_ctrl() = default;
u32 nvhost_ctrl::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) {
u32 nvhost_ctrl::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
IoctlCtrl& ctrl) {
LOG_DEBUG(Service_NVDRV, "called, command=0x{:08X}, input_size=0x{:X}, output_size=0x{:X}",
command.raw, input.size(), output.size());
@@ -22,11 +28,15 @@ u32 nvhost_ctrl::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<
case IoctlCommand::IocGetConfigCommand:
return NvOsGetConfigU32(input, output);
case IoctlCommand::IocCtrlEventWaitCommand:
return IocCtrlEventWait(input, output, false);
return IocCtrlEventWait(input, output, false, ctrl);
case IoctlCommand::IocCtrlEventWaitAsyncCommand:
return IocCtrlEventWait(input, output, true);
return IocCtrlEventWait(input, output, true, ctrl);
case IoctlCommand::IocCtrlEventRegisterCommand:
return IocCtrlEventRegister(input, output);
case IoctlCommand::IocCtrlEventUnregisterCommand:
return IocCtrlEventUnregister(input, output);
case IoctlCommand::IocCtrlEventSignalCommand:
return IocCtrlEventSignal(input, output);
}
UNIMPLEMENTED_MSG("Unimplemented ioctl");
return 0;
@@ -41,23 +51,137 @@ u32 nvhost_ctrl::NvOsGetConfigU32(const std::vector<u8>& input, std::vector<u8>&
}
u32 nvhost_ctrl::IocCtrlEventWait(const std::vector<u8>& input, std::vector<u8>& output,
bool is_async) {
bool is_async, IoctlCtrl& ctrl) {
IocCtrlEventWaitParams params{};
std::memcpy(&params, input.data(), sizeof(params));
LOG_WARNING(Service_NVDRV,
"(STUBBED) called, syncpt_id={}, threshold={}, timeout={}, is_async={}",
params.syncpt_id, params.threshold, params.timeout, is_async);
LOG_DEBUG(Service_NVDRV, "syncpt_id={}, threshold={}, timeout={}, is_async={}",
params.syncpt_id, params.threshold, params.timeout, is_async);
// TODO(Subv): Implement actual syncpt waiting.
params.value = 0;
if (params.syncpt_id >= MaxSyncPoints) {
return NvResult::BadParameter;
}
auto& gpu = system.GPU();
// This is mostly to take unimplemented features into account: a synchronous
// GPU is always synced, so the wait can be satisfied immediately.
if (!gpu.IsAsync()) {
return NvResult::Success;
}
auto lock = gpu.LockSync();
const u32 current_syncpoint_value = gpu.GetSyncpointValue(params.syncpt_id);
const s32 diff = current_syncpoint_value - params.threshold;
if (diff >= 0) {
params.value = current_syncpoint_value;
std::memcpy(output.data(), &params, sizeof(params));
return NvResult::Success;
}
const u32 target_value = current_syncpoint_value - diff;
if (!is_async) {
params.value = 0;
}
if (params.timeout == 0) {
std::memcpy(output.data(), &params, sizeof(params));
return NvResult::Timeout;
}
u32 event_id;
if (is_async) {
event_id = params.value & 0x00FF;
if (event_id >= MaxNvEvents) {
std::memcpy(output.data(), &params, sizeof(params));
return NvResult::BadParameter;
}
} else {
if (ctrl.fresh_call) {
const auto result = events_interface.GetFreeEvent();
if (result) {
event_id = *result;
} else {
LOG_CRITICAL(Service_NVDRV, "No Free Events available!");
event_id = params.value & 0x00FF;
}
} else {
event_id = ctrl.event_id;
}
}
EventState status = events_interface.status[event_id];
if (event_id < MaxNvEvents || status == EventState::Free || status == EventState::Registered) {
events_interface.SetEventStatus(event_id, EventState::Waiting);
events_interface.assigned_syncpt[event_id] = params.syncpt_id;
events_interface.assigned_value[event_id] = target_value;
if (is_async) {
params.value = params.syncpt_id << 4;
} else {
params.value = ((params.syncpt_id & 0xfff) << 16) | 0x10000000;
}
params.value |= event_id;
events_interface.events[event_id].writable->Clear();
gpu.RegisterSyncptInterrupt(params.syncpt_id, target_value);
if (!is_async && ctrl.fresh_call) {
ctrl.must_delay = true;
ctrl.timeout = params.timeout;
ctrl.event_id = event_id;
return NvResult::Timeout;
}
std::memcpy(output.data(), &params, sizeof(params));
return NvResult::Timeout;
}
std::memcpy(output.data(), &params, sizeof(params));
return 0;
return NvResult::BadParameter;
}
u32 nvhost_ctrl::IocCtrlEventRegister(const std::vector<u8>& input, std::vector<u8>& output) {
LOG_WARNING(Service_NVDRV, "(STUBBED) called");
// TODO(bunnei): Implement this.
return 0;
IocCtrlEventRegisterParams params{};
std::memcpy(&params, input.data(), sizeof(params));
const u32 event_id = params.user_event_id & 0x00FF;
LOG_DEBUG(Service_NVDRV, " called, user_event_id: {:X}", event_id);
if (event_id >= MaxNvEvents) {
return NvResult::BadParameter;
}
if (events_interface.registered[event_id]) {
return NvResult::BadParameter;
}
events_interface.RegisterEvent(event_id);
return NvResult::Success;
}
u32 nvhost_ctrl::IocCtrlEventUnregister(const std::vector<u8>& input, std::vector<u8>& output) {
IocCtrlEventUnregisterParams params{};
std::memcpy(&params, input.data(), sizeof(params));
const u32 event_id = params.user_event_id & 0x00FF;
LOG_DEBUG(Service_NVDRV, " called, user_event_id: {:X}", event_id);
if (event_id >= MaxNvEvents) {
return NvResult::BadParameter;
}
if (!events_interface.registered[event_id]) {
return NvResult::BadParameter;
}
events_interface.UnregisterEvent(event_id);
return NvResult::Success;
}
u32 nvhost_ctrl::IocCtrlEventSignal(const std::vector<u8>& input, std::vector<u8>& output) {
IocCtrlEventSignalParams params{};
std::memcpy(&params, input.data(), sizeof(params));
// TODO(Blinkhawk): This is normally called when an NvEvent times out on WaitSynchronization.
// Reverse engineering suggests it cancels the pending GPU event, but better research is required.
u32 event_id = params.user_event_id & 0x00FF;
LOG_WARNING(Service_NVDRV, "(STUBBED) called, user_event_id: {:X}", event_id);
if (event_id >= MaxNvEvents) {
return NvResult::BadParameter;
}
if (events_interface.status[event_id] == EventState::Waiting) {
auto& gpu = system.GPU();
if (gpu.CancelSyncptInterrupt(events_interface.assigned_syncpt[event_id],
events_interface.assigned_value[event_id])) {
events_interface.LiberateEvent(event_id);
events_interface.events[event_id].writable->Signal();
}
}
return NvResult::Success;
}
} // namespace Service::Nvidia::Devices
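
IocCtrlEventWait compares syncpoints through a signed difference (`const s32 diff = current_syncpoint_value - params.threshold;`) rather than a plain `>=`, which keeps the test correct when the 32-bit counter wraps. A minimal, standalone sketch of that rule with made-up values:

```cpp
#include <cassert>
#include <cstdint>

// Sketch only: the wraparound-safe "has the fence been reached?" test behind
// IocCtrlEventWait's signed-difference check. It keeps working when the u32
// syncpoint counter overflows.
static bool SyncpointReached(std::uint32_t current, std::uint32_t threshold) {
    return static_cast<std::int32_t>(current - threshold) >= 0;
}

int main() {
    assert(SyncpointReached(100, 90));        // already past the threshold
    assert(!SyncpointReached(90, 100));       // still ten increments short
    assert(SyncpointReached(5, 0xFFFFFFF0u)); // counter wrapped, but past the fence
    return 0;
}
```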

View File

@@ -8,15 +8,17 @@
#include <vector>
#include "common/common_types.h"
#include "core/hle/service/nvdrv/devices/nvdevice.h"
#include "core/hle/service/nvdrv/nvdrv.h"
namespace Service::Nvidia::Devices {
class nvhost_ctrl final : public nvdevice {
public:
nvhost_ctrl();
explicit nvhost_ctrl(Core::System& system, EventInterface& events_interface);
~nvhost_ctrl() override;
u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) override;
u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
IoctlCtrl& ctrl) override;
private:
enum class IoctlCommand : u32_le {
@@ -132,9 +134,16 @@ private:
u32 NvOsGetConfigU32(const std::vector<u8>& input, std::vector<u8>& output);
u32 IocCtrlEventWait(const std::vector<u8>& input, std::vector<u8>& output, bool is_async);
u32 IocCtrlEventWait(const std::vector<u8>& input, std::vector<u8>& output, bool is_async,
IoctlCtrl& ctrl);
u32 IocCtrlEventRegister(const std::vector<u8>& input, std::vector<u8>& output);
u32 IocCtrlEventUnregister(const std::vector<u8>& input, std::vector<u8>& output);
u32 IocCtrlEventSignal(const std::vector<u8>& input, std::vector<u8>& output);
EventInterface& events_interface;
};
} // namespace Service::Nvidia::Devices
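
nvhost_ctrl now holds an EventInterface& declared in nvdrv.h, which this diff does not show. Pieced together from the calls in nvhost_ctrl.cpp, it needs at least the members below; treat this as a reading aid under those assumptions, not the actual declaration:

```cpp
// Sketch only: the subset of EventInterface that nvhost_ctrl.cpp relies on,
// inferred from its usage in this diff. Member types are assumptions.
struct EventInterface {
    struct Event {
        Kernel::SharedPtr<Kernel::WritableEvent> writable;
        Kernel::SharedPtr<Kernel::ReadableEvent> readable;
    };
    std::array<Event, MaxNvEvents> events;         // one kernel event pair per slot
    std::array<EventState, MaxNvEvents> status;    // Free / Registered / Waiting
    std::array<bool, MaxNvEvents> registered;
    std::array<u32, MaxNvEvents> assigned_syncpt;  // syncpoint a waiting slot tracks
    std::array<u32, MaxNvEvents> assigned_value;   // target value for that syncpoint

    std::optional<u32> GetFreeEvent() const;       // first slot in the Free state
    void SetEventStatus(u32 event_id, EventState state);
    void RegisterEvent(u32 event_id);
    void UnregisterEvent(u32 event_id);
    void LiberateEvent(u32 event_id);              // return a slot to Free
};
```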

View File

@@ -12,10 +12,11 @@
namespace Service::Nvidia::Devices {
nvhost_ctrl_gpu::nvhost_ctrl_gpu() = default;
nvhost_ctrl_gpu::nvhost_ctrl_gpu(Core::System& system) : nvdevice(system) {}
nvhost_ctrl_gpu::~nvhost_ctrl_gpu() = default;
u32 nvhost_ctrl_gpu::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) {
u32 nvhost_ctrl_gpu::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
IoctlCtrl& ctrl) {
LOG_DEBUG(Service_NVDRV, "called, command=0x{:08X}, input_size=0x{:X}, output_size=0x{:X}",
command.raw, input.size(), output.size());
@@ -185,7 +186,7 @@ u32 nvhost_ctrl_gpu::GetGpuTime(const std::vector<u8>& input, std::vector<u8>& o
IoctlGetGpuTime params{};
std::memcpy(&params, input.data(), input.size());
const auto ns = Core::Timing::CyclesToNs(Core::System::GetInstance().CoreTiming().GetTicks());
const auto ns = Core::Timing::CyclesToNs(system.CoreTiming().GetTicks());
params.gpu_time = static_cast<u64_le>(ns.count());
std::memcpy(output.data(), &params, output.size());
return 0;

View File

@@ -13,10 +13,11 @@ namespace Service::Nvidia::Devices {
class nvhost_ctrl_gpu final : public nvdevice {
public:
nvhost_ctrl_gpu();
explicit nvhost_ctrl_gpu(Core::System& system);
~nvhost_ctrl_gpu() override;
u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) override;
u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
IoctlCtrl& ctrl) override;
private:
enum class IoctlCommand : u32_le {

View File

@@ -13,10 +13,12 @@
namespace Service::Nvidia::Devices {
nvhost_gpu::nvhost_gpu(std::shared_ptr<nvmap> nvmap_dev) : nvmap_dev(std::move(nvmap_dev)) {}
nvhost_gpu::nvhost_gpu(Core::System& system, std::shared_ptr<nvmap> nvmap_dev)
: nvdevice(system), nvmap_dev(std::move(nvmap_dev)) {}
nvhost_gpu::~nvhost_gpu() = default;
u32 nvhost_gpu::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) {
u32 nvhost_gpu::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
IoctlCtrl& ctrl) {
LOG_DEBUG(Service_NVDRV, "called, command=0x{:08X}, input_size=0x{:X}, output_size=0x{:X}",
command.raw, input.size(), output.size());
@@ -119,8 +121,10 @@ u32 nvhost_gpu::AllocGPFIFOEx2(const std::vector<u8>& input, std::vector<u8>& ou
params.num_entries, params.flags, params.unk0, params.unk1, params.unk2,
params.unk3);
params.fence_out.id = 0;
params.fence_out.value = 0;
auto& gpu = system.GPU();
params.fence_out.id = assigned_syncpoints;
params.fence_out.value = gpu.GetSyncpointValue(assigned_syncpoints);
assigned_syncpoints++;
std::memcpy(output.data(), &params, output.size());
return 0;
}
@@ -143,7 +147,7 @@ u32 nvhost_gpu::SubmitGPFIFO(const std::vector<u8>& input, std::vector<u8>& outp
IoctlSubmitGpfifo params{};
std::memcpy(&params, input.data(), sizeof(IoctlSubmitGpfifo));
LOG_WARNING(Service_NVDRV, "(STUBBED) called, gpfifo={:X}, num_entries={:X}, flags={:X}",
params.address, params.num_entries, params.flags);
params.address, params.num_entries, params.flags.raw);
ASSERT_MSG(input.size() == sizeof(IoctlSubmitGpfifo) +
params.num_entries * sizeof(Tegra::CommandListHeader),
@@ -153,10 +157,18 @@ u32 nvhost_gpu::SubmitGPFIFO(const std::vector<u8>& input, std::vector<u8>& outp
std::memcpy(entries.data(), &input[sizeof(IoctlSubmitGpfifo)],
params.num_entries * sizeof(Tegra::CommandListHeader));
Core::System::GetInstance().GPU().PushGPUEntries(std::move(entries));
UNIMPLEMENTED_IF(params.flags.add_wait.Value() != 0);
UNIMPLEMENTED_IF(params.flags.add_increment.Value() != 0);
auto& gpu = system.GPU();
u32 current_syncpoint_value = gpu.GetSyncpointValue(params.fence_out.id);
if (params.flags.increment.Value()) {
params.fence_out.value += current_syncpoint_value;
} else {
params.fence_out.value = current_syncpoint_value;
}
gpu.PushGPUEntries(std::move(entries));
params.fence_out.id = 0;
params.fence_out.value = 0;
std::memcpy(output.data(), &params, sizeof(IoctlSubmitGpfifo));
return 0;
}
@@ -168,16 +180,24 @@ u32 nvhost_gpu::KickoffPB(const std::vector<u8>& input, std::vector<u8>& output)
IoctlSubmitGpfifo params{};
std::memcpy(&params, input.data(), sizeof(IoctlSubmitGpfifo));
LOG_WARNING(Service_NVDRV, "(STUBBED) called, gpfifo={:X}, num_entries={:X}, flags={:X}",
params.address, params.num_entries, params.flags);
params.address, params.num_entries, params.flags.raw);
Tegra::CommandList entries(params.num_entries);
Memory::ReadBlock(params.address, entries.data(),
params.num_entries * sizeof(Tegra::CommandListHeader));
Core::System::GetInstance().GPU().PushGPUEntries(std::move(entries));
UNIMPLEMENTED_IF(params.flags.add_wait.Value() != 0);
UNIMPLEMENTED_IF(params.flags.add_increment.Value() != 0);
auto& gpu = system.GPU();
u32 current_syncpoint_value = gpu.GetSyncpointValue(params.fence_out.id);
if (params.flags.increment.Value()) {
params.fence_out.value += current_syncpoint_value;
} else {
params.fence_out.value = current_syncpoint_value;
}
gpu.PushGPUEntries(std::move(entries));
params.fence_out.id = 0;
params.fence_out.value = 0;
std::memcpy(output.data(), &params, output.size());
return 0;
}
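
Both submission paths now derive fence_out from the channel's current syncpoint value instead of zeroing it: with the increment flag, the guest-supplied value is treated as a delta on top of the current counter, otherwise the current counter is returned as-is. A small worked sketch of that rule (numbers are illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Sketch only: the fence_out.value rule applied by SubmitGPFIFO and KickoffPB.
// 'increment' mirrors params.flags.increment from the diff above.
static std::uint32_t UpdateFenceValue(std::uint32_t requested, std::uint32_t current_syncpoint,
                                      bool increment) {
    // With the increment flag, the request is a delta on top of the current
    // counter; without it, the caller just gets the current counter back.
    return increment ? requested + current_syncpoint : current_syncpoint;
}

int main() {
    assert(UpdateFenceValue(4, 100, true) == 104);  // wait until four more increments land
    assert(UpdateFenceValue(4, 100, false) == 100); // report the counter as it stands
    return 0;
}
```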

View File

@@ -10,6 +10,7 @@
#include "common/common_types.h"
#include "common/swap.h"
#include "core/hle/service/nvdrv/devices/nvdevice.h"
#include "core/hle/service/nvdrv/nvdata.h"
namespace Service::Nvidia::Devices {
@@ -20,10 +21,11 @@ constexpr u32 NVGPU_IOCTL_CHANNEL_KICKOFF_PB(0x1b);
class nvhost_gpu final : public nvdevice {
public:
explicit nvhost_gpu(std::shared_ptr<nvmap> nvmap_dev);
explicit nvhost_gpu(Core::System& system, std::shared_ptr<nvmap> nvmap_dev);
~nvhost_gpu() override;
u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) override;
u32 ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
IoctlCtrl& ctrl) override;
private:
enum class IoctlCommand : u32_le {
@@ -113,11 +115,7 @@ private:
static_assert(sizeof(IoctlGetErrorNotification) == 16,
"IoctlGetErrorNotification is incorrect size");
struct IoctlFence {
u32_le id;
u32_le value;
};
static_assert(sizeof(IoctlFence) == 8, "IoctlFence is incorrect size");
static_assert(sizeof(Fence) == 8, "Fence is incorrect size");
struct IoctlAllocGpfifoEx {
u32_le num_entries;
@@ -132,13 +130,13 @@ private:
static_assert(sizeof(IoctlAllocGpfifoEx) == 32, "IoctlAllocGpfifoEx is incorrect size");
struct IoctlAllocGpfifoEx2 {
u32_le num_entries; // in
u32_le flags; // in
u32_le unk0; // in (1 works)
IoctlFence fence_out; // out
u32_le unk1; // in
u32_le unk2; // in
u32_le unk3; // in
u32_le num_entries; // in
u32_le flags; // in
u32_le unk0; // in (1 works)
Fence fence_out; // out
u32_le unk1; // in
u32_le unk2; // in
u32_le unk3; // in
};
static_assert(sizeof(IoctlAllocGpfifoEx2) == 32, "IoctlAllocGpfifoEx2 is incorrect size");
@@ -153,10 +151,16 @@ private:
struct IoctlSubmitGpfifo {
u64_le address; // pointer to gpfifo entry structs
u32_le num_entries; // number of fence objects being submitted
u32_le flags;
IoctlFence fence_out; // returned new fence object for others to wait on
union {
u32_le raw;
BitField<0, 1, u32_le> add_wait; // append a wait sync_point to the list
BitField<1, 1, u32_le> add_increment; // append an increment to the list
BitField<2, 1, u32_le> new_hw_format; // Mostly ignored
BitField<8, 1, u32_le> increment; // increment the returned fence
} flags;
Fence fence_out; // returned new fence object for others to wait on
};
static_assert(sizeof(IoctlSubmitGpfifo) == 16 + sizeof(IoctlFence),
static_assert(sizeof(IoctlSubmitGpfifo) == 16 + sizeof(Fence),
"IoctlSubmitGpfifo is incorrect size");
struct IoctlGetWaitbase {
@@ -184,6 +188,7 @@ private:
u32 ChannelSetTimeout(const std::vector<u8>& input, std::vector<u8>& output);
std::shared_ptr<nvmap> nvmap_dev;
u32 assigned_syncpoints{};
};
} // namespace Service::Nvidia::Devices
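
The flags word on IoctlSubmitGpfifo is now a BitField union rather than a bare u32. For readers, here is the same layout spelled out with plain masks in a self-contained sketch (bit positions match the union above; the constant names are made up):

```cpp
#include <cassert>
#include <cstdint>

// Sketch only: the bit layout behind the IoctlSubmitGpfifo flags union,
// written with plain masks so the example stands alone.
constexpr std::uint32_t kAddWait      = 1u << 0; // append a wait syncpoint to the list
constexpr std::uint32_t kAddIncrement = 1u << 1; // append an increment to the list
constexpr std::uint32_t kNewHwFormat  = 1u << 2; // mostly ignored
constexpr std::uint32_t kIncrement    = 1u << 8; // fence_out.value is a delta, not absolute

int main() {
    const std::uint32_t raw = 0x101; // add_wait and increment set
    assert((raw & kAddWait) != 0);
    assert((raw & kAddIncrement) == 0);
    assert((raw & kIncrement) != 0);
    return 0;
}
```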

View File

@@ -10,10 +10,11 @@
namespace Service::Nvidia::Devices {
nvhost_nvdec::nvhost_nvdec() = default;
nvhost_nvdec::nvhost_nvdec(Core::System& system) : nvdevice(system) {}
nvhost_nvdec::~nvhost_nvdec() = default;
u32 nvhost_nvdec::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output) {
u32 nvhost_nvdec::ioctl(Ioctl command, const std::vector<u8>& input, std::vector<u8>& output,
IoctlCtrl& ctrl) {
LOG_DEBUG(Service_NVDRV, "called, command=0x{:08X}, input_size=0x{:X}, output_size=0x{:X}",
command.raw, input.size(), output.size());

Some files were not shown because too many files have changed in this diff.