Compare commits

...

198 Commits

Author SHA1 Message Date
ReinUsesLisp
223a89a19f shader: Remove curly braces initializers on shared pointers 2020-02-01 22:52:10 -03:00
bunnei
b5bbe7e752 Merge pull request #3282 from FernandoS27/indexed-samplers
Partially implement Indexed samplers in general and specific code in GLSL
2020-02-01 20:41:40 -05:00
bunnei
2916c1bc25 Merge pull request #3268 from CJBok/deadzone
GUI: Deadzone controls for sdl engine at configuration input
2020-02-01 16:35:15 -05:00
bunnei
69a6796de1 Merge pull request #3284 from CJBok/hid-fix
hid: Fix analog sticks directional states
2020-02-01 14:02:41 -05:00
bunnei
c18f9898d9 Merge pull request #3364 from lioncash/thread
core/arm: Remove usage of global GetCurrentThread()
2020-01-31 11:13:24 -05:00
bunnei
6b5b01b29f Merge pull request #3363 from lioncash/unique_ptr
kernel/physical_core: Make use of std::unique_ptr instead of std::shared_ptr
2020-01-30 23:33:02 -05:00
bunnei
1948fc0858 Merge pull request #3365 from yuzu-emu/revert-3151-fix-korean
Revert "system_archive: Fix Korean and Chinese fonts"
2020-01-30 22:03:47 -05:00
bunnei
91b0a3f799 Revert "system_archive: Fix Korean and Chinese fonts" 2020-01-30 22:02:15 -05:00
Lioncash
472319e573 core/arm: Remove usage of global GetCurrentThread()
Now both CPU backends go through their referenced system instance to
obtain the current thread.
2020-01-30 18:52:25 -05:00
Lioncash
2de2bb980e kernel/physical_core: Make use of std::unique_ptr
shared_ptr was used in 2d1984c20c due to a
misunderstanding of how the language generates move constructors and
move assignment operators.

If a destructor is user-provided, then the compiler won't generate the
move constructor and move assignment operators by default--they must be
explicitly opted into.

The reason for the compilation errors is due to the fact that the
language will fall back to attempting to use the copy constructor/copy
assignment operators if the respective move constructor or move
assignment operator is unavailable.

Given that we explicitly opt into them now, the move constructor and
move assignment operators will be generated as expected.
2020-01-30 18:42:40 -05:00
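A minimal C++ sketch of the rule described above; the class name and member are hypothetical, not the actual PhysicalCore:

#include <memory>
#include <utility>

// A user-provided destructor suppresses the implicitly generated move
// constructor and move assignment operator, so a move-only member such as
// std::unique_ptr would otherwise force the compiler to fall back to the
// (ill-formed) copy operations.
struct PhysicalCoreLike {
    std::unique_ptr<int> payload;

    PhysicalCoreLike() = default;
    ~PhysicalCoreLike() {} // user-provided destructor

    // Explicitly opting back in restores the expected move semantics.
    PhysicalCoreLike(PhysicalCoreLike&&) = default;
    PhysicalCoreLike& operator=(PhysicalCoreLike&&) = default;
};

int main() {
    PhysicalCoreLike a;
    PhysicalCoreLike b = std::move(a); // OK: defaulted move constructor
    b = PhysicalCoreLike{};            // OK: defaulted move assignment
    return b.payload ? 1 : 0;
}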
Lioncash
16e7b7b83d core/cpu_manager: Remove unused includes
Nothing from these headers is used within this source file, so we can
remove them.
2020-01-30 18:30:57 -05:00
Lioncash
51927bc9dc kernel/physical_core: Remove unused kernel reference member variable
This isn't used within the class, so it can be removed to simplify the
overall interface.

While we're in the same area, we can simplify a unique_ptr reset() call.
2020-01-30 18:29:57 -05:00
bunnei
985d0f35e5 Merge pull request #3353 from FernandoS27/aries
System: Refactor CPU Core management and move ARMInterface and Schedulers to Kernel
2020-01-30 18:13:59 -05:00
bunnei
8a7cdfc3ff Merge pull request #3151 from FearlessTobi/fix-korean
system_archive: Fix Korean and Chinese fonts
2020-01-30 15:09:55 -05:00
bunnei
c593e45dbd Merge pull request #3347 from ReinUsesLisp/local-mem
shader/memory: Implement LDL.S16, LDS.S16, STL.S16 and STS.S16
2020-01-30 10:59:52 -05:00
bunnei
2db7adc42a Merge pull request #3350 from ReinUsesLisp/atom
shader/memory: Implement ATOM.ADD
2020-01-29 16:49:54 -05:00
bunnei
b11aeced18 Merge pull request #3355 from ReinUsesLisp/break-down
texture_cache/surface_base: Fix layered break down
2020-01-29 12:29:56 -05:00
bunnei
91f79225e7 Merge pull request #3358 from ReinUsesLisp/implicit-texture-cache
gl_texture_cache: Silence implicit sign cast warnings
2020-01-29 11:23:50 -05:00
bunnei
c457e47297 Merge pull request #3359 from ReinUsesLisp/assert-point-size
gl_shader_decompiler: Remove UNIMPLEMENTED for gl_PointSize
2020-01-28 15:19:51 -05:00
ReinUsesLisp
8178fe8960 gl_shader_decompiler: Remove UNIMPLEMENTED for gl_PointSize
This was implemented by a previous commit and it's no longer required.
2020-01-28 16:32:30 -03:00
bunnei
283f3253bc Merge pull request #3352 from Simek/dark-theme-refinements
GUI: dark themes refinements and QSS cleanup
2020-01-28 14:05:36 -05:00
bunnei
bea6327d74 Merge pull request #3354 from ReinUsesLisp/depth-stencil
gl_texture_cache: Properly implement depth/stencil sampling
2020-01-28 12:06:11 -05:00
ReinUsesLisp
abae795986 gl_texture_cache: Silence implicit sign cast warnings 2020-01-27 20:59:11 -03:00
bunnei
acfb0b4852 Merge pull request #3346 from bunnei/bsd-stub
bsd: Stub several more functions.
2020-01-27 13:06:05 -05:00
Fernando Sahmkow
2d1984c20c System: Address Feedback 2020-01-27 09:54:11 -04:00
ReinUsesLisp
f55f6ff9bb texture_cache/surface_base: Fix layered break down
The layered break down was passing "layer" as a "depth" parameter. This
commit addresses that.
2020-01-26 21:48:07 -03:00
ReinUsesLisp
d17dfa6104 gl_texture_cache: Properly implement depth/stencil sampling
This addresses the long standing issue of compatibility vs. core
profiles on OpenGL, properly implementing depth vs. stencil sampling
depending on the texture swizzle.
2020-01-26 21:44:08 -03:00
Fernando Sahmkow
de4b01f75d System: Correct PrepareReschedule. 2020-01-26 14:32:50 -04:00
Fernando Sahmkow
a1630ab53e Kernel: Remove a few global instances from the kernel. 2020-01-26 14:23:46 -04:00
Fernando Sahmkow
e4a1ead897 Core: Refactor CpuCoreManager to CpuManager and Cpu to Core Manager.
This commit intends to better name the new purpose of these classes.
2020-01-26 14:07:22 -04:00
Fernando Sahmkow
450341b397 ArmInterface: Delegate Exclusive monitor factory to exclusive monitor interface. 2020-01-26 10:28:23 -04:00
Bartosz Kaszubowski
f68bb4f55e dark themes refinements and cleanup 2020-01-26 11:50:01 +01:00
ReinUsesLisp
d95d4ac843 shader/memory: Implement ATOM.ADD
ATOM operates atomically on global memory. For now only add ATOM.ADD
since that's what was found in commercial games.

This asserts for ATOM.ADD.S32 (handling the others as unimplemented),
although ATOM.ADD.U32 shouldn't be any different.

This change forces us to change the default type on SPIR-V storage
buffers from float to uint. We could also alias the buffers, but it's
simpler for now to just use uint. While we are at it, abstract the code
to avoid repetition.
2020-01-26 01:54:24 -03:00
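A small illustration of why a uint-typed storage buffer can still carry float data: the bits are simply reinterpreted, which is what GLSL/SPIR-V's floatBitsToUint/uintBitsToFloat do. The helpers below are hypothetical, assume C++20 for std::bit_cast, and are not the decompiler's actual code.

#include <bit>
#include <cstdint>
#include <cstdio>

// Reinterpreting the bit pattern lets a buffer declared as uint serve float
// loads and stores as well; no numeric conversion takes place.
std::uint32_t FloatBitsToUint(float value) {
    return std::bit_cast<std::uint32_t>(value);
}

float UintBitsToFloat(std::uint32_t raw) {
    return std::bit_cast<float>(raw);
}

int main() {
    const std::uint32_t word = FloatBitsToUint(1.5f);
    std::printf("bits=0x%08X value=%f\n", word, UintBitsToFloat(word));
}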
Fernando Sahmkow
4d6a86b03f Core: Refactor CPU Management.
This commit moves ARM Interface and Scheduler handling into the kernel.
2020-01-25 18:55:32 -04:00
Fernando Sahmkow
bb8eb15d39 Shader_IR: Address feedback. 2020-01-25 09:04:59 -04:00
ReinUsesLisp
d26e74f0a3 shader/memory: Implement STL.S16 and STS.S16 2020-01-25 03:16:10 -03:00
ReinUsesLisp
9a2cdf8520 shader/memory: Implement unaligned LDL.S16 and LDS.S16 2020-01-25 03:16:10 -03:00
ReinUsesLisp
531f25a037 shader/memory: Move unaligned load/store to functions 2020-01-25 03:16:10 -03:00
ReinUsesLisp
96638f57c9 shader/memory: Implement LDL.S16 and LDS.S16 2020-01-25 03:15:55 -03:00
bunnei
2a822f3378 bsd: Stub several more functions.
- Required for Little Town Hero to boot further.
2020-01-25 00:47:15 -05:00
bunnei
05df4a8c94 Merge pull request #3343 from FearlessTobi/ui-tab
yuzu/configuration: create UI tab and move gamelist settings there
2020-01-25 00:40:13 -05:00
bunnei
2b1d66eda3 Merge pull request #3326 from FearlessTobi/port-5039
Port citra-emu/citra#5039: "common/logging: don't use regex for path trimming"
2020-01-24 20:59:57 -05:00
FearlessTobi
845a5dbca9 Disable clang-format for font files 2020-01-24 23:54:19 +01:00
bunnei
dfd998216c Merge pull request #3344 from ReinUsesLisp/vk-botw
vk_shader_decompiler: Disable default values on unwritten render targets
2020-01-24 17:31:55 -05:00
Fernando Sahmkow
806f569143 Shader_IR: Change name of TrackSampler function so it does not confuse with the type. 2020-01-24 16:44:48 -04:00
Fernando Sahmkow
3919b7b8a9 Shader_IR: Corrections, styling and extras. 2020-01-24 16:44:48 -04:00
Fernando Sahmkow
37b8504faa Shader_IR: Correct Custom Variable assignment. 2020-01-24 16:44:47 -04:00
Fernando Sahmkow
7c530e0666 Shader_IR: Propagate bindless index into the GL compiler. 2020-01-24 16:44:47 -04:00
Fernando Sahmkow
3c34678627 Shader_IR: Implement Injectable Custom Variables to the IR. 2020-01-24 16:43:31 -04:00
Fernando Sahmkow
2b02f29a2d GL Backend: Introduce indexed samplers into the GL backend 2020-01-24 16:43:31 -04:00
Fernando Sahmkow
037ea431ce Shader_IR: deduce size of indexed samplers 2020-01-24 16:43:31 -04:00
Fernando Sahmkow
f4603d23c5 Shader_IR: Setup Indexed Samplers on the IR 2020-01-24 16:43:30 -04:00
Fernando Sahmkow
603c861532 Shader_IR: Implement initial code for tracking indexed samplers. 2020-01-24 16:43:30 -04:00
Fernando Sahmkow
64496f2456 Shader_IR: Address Feedback 2020-01-24 16:43:30 -04:00
Fernando Sahmkow
b97608ca64 Shader_IR: Allow constant access of guest driver. 2020-01-24 16:43:30 -04:00
Fernando Sahmkow
dc5cfa8d28 Shader_IR: Address Feedback 2020-01-24 16:43:29 -04:00
Fernando Sahmkow
74aa7de5e3 Guest_driver: Correct compiling errors in GCC. 2020-01-24 16:43:29 -04:00
Fernando Sahmkow
1e4b6bef6f Shader_IR: Store Bound buffer on Shader Usage 2020-01-24 16:43:29 -04:00
Fernando Sahmkow
c921e496eb GPU: Implement guest driver profile and deduce texture handler sizes. 2020-01-24 16:43:29 -04:00
Fernando Sahmkow
ab89ced244 Kernel: Implement Physical Core. 2020-01-24 15:38:20 -04:00
bunnei
a104b985a8 Merge pull request #3273 from FernandoS27/txd-array
Shader_IR: Implement TXD Array.
2020-01-24 14:02:40 -05:00
bunnei
f64adcfc37 Merge pull request #3340 from SciresM/pmdx
loader: provide default arguments (zero byte) to NSOs
2020-01-24 10:31:43 -05:00
ReinUsesLisp
1690f1adba vk_shader_decompiler: Disable default values on unwritten render targets
Some games like The Legend of Zelda: Breath of the Wild assign
render targets without writing them from the fragment shader. This
generates Vulkan validation errors, so to silence them I previously
introduced a commit to set "vec4(0, 0, 0, 1)" for these attachments. The
problem is that this is not what games expect. This commit reverts that
change.
2020-01-24 01:16:21 -03:00
bunnei
deb97f6a8e Merge pull request #2800 from FearlessTobi/port-4049
Port citra-emu/citra#4049: "Input: UDP Client to provide motion and touch controls"
2020-01-23 20:18:47 -05:00
FearlessTobi
d0e4f1c6f4 yuzu/configuration: create UI tab and move gamelist settings there 2020-01-24 00:15:51 +01:00
BreadFish64
a31ed02ae4 common/logging: don't use regex for path trimming 2020-01-23 23:08:05 +01:00
FearlessTobi
d01eb12f36 Replace GetString with Get function
This should hopefully fix compilation errors.
2020-01-23 20:55:26 +01:00
FearlessTobi
bbd85a495a Address second part of review comments 2020-01-23 20:55:26 +01:00
FearlessTobi
0fe11746fc Address review comments 2020-01-23 20:55:26 +01:00
fearlessTobi
ac3690f205 Input: UDP Client to provide motion and touch controls
An implementation of the cemuhook motion/touch protocol. This adds the
ability for users to connect several different devices to citra and send
direct motion and touch data to it.

Co-Authored-By: jroweboy <jroweboy@gmail.com>
2020-01-23 20:55:26 +01:00
bunnei
a167da4278 Merge pull request #3341 from bunnei/time-posix-myrule
service: time: Implement ToPosixTimeWithMyRule.
2020-01-23 12:04:01 -05:00
Fernando Sahmkow
9c6b5cae68 Merge pull request #3338 from ReinUsesLisp/no-fastmath
gl_shader_cache: Disable fastmath on Nvidia
2020-01-23 10:08:45 -04:00
bunnei
ed76c71319 service: time: Implement ToPosixTimeWithMyRule.
- Used by Pokemon Mystery Dungeon.
2020-01-22 23:20:19 -05:00
Michael Scire
5a7eecc3ad loader: provide default arguments (zero byte) to NSOs
Certain newer unity games (Terraria, Pokemon Mystery Dungeon) require
that the argument region be populated. Failure to do so results in
an integer underflow in argument count, and eventually an unmapped
read at 0x800000000. Providing this default fixes this.

Note that the behavior of official software is as yet unverified,
arguments-wise.
2020-01-22 20:14:06 -08:00
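A hedged sketch of the failure class described here (the parsing logic and names are illustrative, not the actual loader code): with an empty argument region, an unsigned count of zero wraps around to a huge value once decremented.

#include <cstdint>
#include <cstdio>
#include <vector>

// With no argument data written, a parser that computes "count - 1" on an
// unsigned zero underflows and can later index far outside the mapped region.
// Writing a single zero byte as a default keeps the count well defined.
int main() {
    const std::vector<std::uint8_t> empty_region;         // nothing provided
    const std::vector<std::uint8_t> defaulted_region{0};  // default zero byte

    const std::uint64_t empty_count = empty_region.size();         // 0
    const std::uint64_t defaulted_count = defaulted_region.size(); // 1

    std::printf("empty:     count-1 = %llu (underflow)\n",
                static_cast<unsigned long long>(empty_count - 1));
    std::printf("defaulted: count-1 = %llu\n",
                static_cast<unsigned long long>(defaulted_count - 1));
}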
bunnei
89b326e396 Merge pull request #3324 from FearlessTobi/port-5037
Port citra-emu/citra#5037: "CMake: Create thin archives on Linux"
2020-01-22 19:48:15 -05:00
Zach Hilman
d8e0d839bd Merge pull request #3339 from Simek/dark-theme-update
GUI: fix minor issues with dark themes + rename and reorder themes
2020-01-22 18:39:21 -05:00
Bartosz Kaszubowski
c7055f3670 fix qss stylesheet whitespaces 2020-01-22 22:36:42 +01:00
Bartosz Kaszubowski
9a22b6dced GUI: fix minor issues with dark themes
GUI: rename and reorder themes
2020-01-22 21:12:45 +01:00
ReinUsesLisp
3ce28342a2 gl_shader_cache: Disable fastmath on Nvidia 2020-01-21 19:08:08 -03:00
Fernando Sahmkow
79e0991d9b Merge pull request #3330 from ReinUsesLisp/vk-blit-screen
vk_blit_screen: Initial implementation
2020-01-20 22:32:16 -04:00
ReinUsesLisp
a665581684 vk_blit_screen: Address feedback 2020-01-20 18:43:11 -03:00
bunnei
bc55c05947 Merge pull request #3334 from bunnei/time-fix
time: Fix month off-by-one error.
2020-01-20 15:36:30 -05:00
bunnei
7113236b30 time: Fix month off-by-one error.
- Fixes timestamp in ZLA and Astral Chain saves.
2020-01-20 14:20:32 -05:00
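For context, this is the classic std::tm pitfall: tm_mon is 0-based while calendar months are 1-based, so a missing +1/-1 shifts every date by one month. Illustrative only; the service's actual conversion code is not shown here.

#include <cstdio>
#include <ctime>

int main() {
    std::tm date{};
    date.tm_year = 2020 - 1900; // std::tm stores years since 1900
    date.tm_mon = 1 - 1;        // calendar month 1 (January) -> tm_mon 0
    date.tm_mday = 20;

    // Forgetting the +1 when converting back yields month 0, which is
    // exactly the kind of off-by-one this commit fixes.
    std::printf("calendar month = %d (tm_mon = %d)\n", date.tm_mon + 1, date.tm_mon);
}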
bunnei
4ea073c286 Merge pull request #3332 from bunnei/config-audio-tab
yuzu_qt: config: Move audio to its own tab.
2020-01-20 04:24:51 -05:00
Bartosz Kaszubowski
4043ba5222 GUI/gamelist: add "None" as an option for second row and remove dynamically duplicate row options (#3309)
* GUI/gamelist: add "None" as an option for second row and remove duplicated row options

* fix clang-format warnings
2020-01-20 00:44:03 -05:00
bunnei
69b44392a7 Merge pull request #3328 from ReinUsesLisp/vulkan-atoms
vk_shader_decompiler: Implement UAtomicAdd (ATOMS) on SPIR-V
2020-01-20 00:01:52 -05:00
bunnei
5a077c95ce Merge pull request #3322 from ReinUsesLisp/vk-front-face
vk_graphics_pipeline: Set front facing properly
2020-01-19 23:22:34 -05:00
bunnei
690732bc0d yuzu_qt: config: Move audio to its own tab.
- We have some important audio settings; this makes them more discoverable.
2020-01-19 23:17:53 -05:00
bunnei
8b9f433d95 Merge pull request #3271 from bunnei/time-rewrite
service: time: Rewrite implementation of glue services.
2020-01-19 22:45:05 -05:00
ReinUsesLisp
f5dfe68a94 vk_blit_screen: Initial implementation
This abstraction takes care of presenting accelerated and
non-accelerated or "framebuffer" images to the Vulkan swapchain.
2020-01-19 21:12:43 -03:00
bunnei
41373d212e Merge pull request #3313 from ReinUsesLisp/vk-rasterizer
vk_rasterizer: Implement Vulkan's rasterizer
2020-01-19 18:09:01 -05:00
Bartosz Kaszubowski
c610a8ac5a GUI/gamelist: add "None" as an option for second row and remove dynamically duplicate row options (#3309)
* GUI/gamelist: add "None" as an option for second row and remove duplicated row options

* fix clang-format warnings
2020-01-19 15:58:14 -05:00
Bartosz Kaszubowski
265fe40451 GUI/gamelist: add "None" as an option for second row and remove dynamically duplicate row options (#3309)
* GUI/gamelist: add "None" as an option for second row and remove duplicated row options

* fix clang-format warnings
2020-01-19 15:57:14 -05:00
Bartosz Kaszubowski
9ac33c2620 GUI/gamelist: add "None" as an option for second row and remove dynamically duplicate row options (#3309)
* GUI/gamelist: add "None" as an option for second row and remove duplicated row options

* fix clang-format warnings
2020-01-19 15:56:49 -05:00
ReinUsesLisp
b2c976ad0e vk_shader_decompiler: Implement UAtomicAdd (ATOMS) on SPIR-V
Also updates sirit to include atomic instructions.
2020-01-19 16:40:31 -03:00
FearlessTobi
4e9331f45d system_archive: Fix Chinese font
Adds the proper OSS font for the Chinese language.
2020-01-19 15:09:53 +01:00
FearlessTobi
999e3f89b9 system_archive: Fix Korean font
Fixes Korean fonts when using Open-source system archives.
2020-01-19 15:09:50 +01:00
Léo Lam
f98cd210ab CMake: Create thin archives on Linux
This significantly reduces unnecessary disk writes and space usage
when building Citra.

libcore.a is now only ~1MB rather than several hundred megabytes.
2020-01-19 12:52:09 +01:00
Fernando Sahmkow
51c8aea979 Merge pull request #3317 from ReinUsesLisp/gl-decomp-cc-decomp
gl_shader_decompiler: Fix decompilation of condition codes
2020-01-18 19:56:55 -04:00
bunnei
94c41ab1d1 Merge pull request #3323 from ReinUsesLisp/fix-template-res
gl_state: Use bool instead of GLboolean
2020-01-18 17:37:05 -05:00
ReinUsesLisp
d110a371bb gl_state: Use bool instead of GLboolean
This fixes template resolution considering GLboolean an integer instead
of a bool.
2020-01-18 19:10:34 -03:00
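GLboolean is typically an alias for unsigned char, so type traits and overload resolution treat it as an integer rather than a bool, which is the mismatch this change works around. A minimal sketch; the GLboolean alias below is written out by hand for illustration rather than pulled from the GL headers.

#include <cstdio>
#include <type_traits>

using GLboolean = unsigned char; // matches the usual GL definition

template <typename T>
const char* Describe(T) {
    if constexpr (std::is_same_v<T, bool>) {
        return "resolved as bool";
    } else if constexpr (std::is_integral_v<T>) {
        return "resolved as integer";
    } else {
        return "resolved as something else";
    }
}

int main() {
    std::printf("GLboolean{1}: %s\n", Describe(GLboolean{1})); // integer
    std::printf("true:         %s\n", Describe(true));         // bool
}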
ReinUsesLisp
94915d4ea1 vk_graphics_pipeline: Set front facing properly
Front face was being forced to a certain value when cull face is
disabled. Set a default value on initialization and drop the forcefully
set front facing value with culling disabled.
2020-01-18 18:50:47 -03:00
bunnei
e972016456 Merge pull request #3298 from Simek/missing_hotkeys
GUI: add few missing hotkeys to main menu
2020-01-18 13:07:13 -05:00
bunnei
278264b9e5 Merge pull request #3314 from degasus/physical_mem
core/hle/kernel: Simplify PhysicalMemory usages.
2020-01-18 03:03:48 -05:00
Markus Wick
56672b8c98 core/memory: Create a special MapMemoryRegion for physical memory.
This allows us to create a fastmem arena within the memory.cpp helpers.
2020-01-18 08:38:47 +01:00
Markus Wick
55103da066 core/hle: Simplify PhysicalMemory usage in vm_manager. 2020-01-18 08:29:19 +01:00
Markus Wick
7e94e544f4 core/loaders: Simplify PhysicalMemory usage.
It is currently a std::vector, however we might want to replace it with a more fancy allocator.
So we can't use the C++ iterators any more.
2020-01-18 08:29:19 +01:00
bunnei
9bf4850f74 Merge pull request #3305 from ReinUsesLisp/point-size-program
gl_state: Implement PROGRAM_POINT_SIZE
2020-01-18 01:56:32 -05:00
bunnei
15163edaaa Merge pull request #3312 from ReinUsesLisp/atoms-u32
shader/memory: Implement ATOMS.ADD.U32
2020-01-18 00:54:07 -05:00
bunnei
3cce5056ff Merge pull request #3318 from jroweboy/remove-cpu-vendor
Remove unused CPU Vendor string and telemetry field
2020-01-17 22:24:52 -05:00
James Rowe
4512a6bbfc Remove unused CPU Vendor string and telemetry field
The information is duplicated in the brand string, and the telemetry field is unused
2020-01-17 18:41:18 -07:00
ReinUsesLisp
09b1d762d7 vk_rasterizer: Address feedback 2020-01-17 21:40:01 -03:00
ReinUsesLisp
f34e519da3 gl_shader_decompiler: Fix decompilation of condition codes
Use Visit instead of reimplementing it. Fixes unimplemented negations
for condition codes.
2020-01-17 21:23:01 -03:00
bunnei
530a761e7a Merge pull request #3316 from TotalCaesar659/linux-headbar-icon
Add headbar icon on Linux
2020-01-17 19:04:10 -05:00
TotalCaesar659
dd74fd014b Add headbar icon on Linux 2020-01-18 02:46:07 +03:00
bunnei
48863afb65 Merge pull request #3306 from ReinUsesLisp/gl-texture
gl_texture_cache: Minor fixes and style changes
2020-01-17 15:44:02 -05:00
bunnei
657b3a366e Merge pull request #3311 from ReinUsesLisp/z32fx24s8
format_lookup_table: Fix ZF32_X24S8 component types
2020-01-17 08:22:32 -05:00
ReinUsesLisp
fe5356d223 vk_rasterizer: Implement Vulkan's rasterizer
This abstraction is Vulkan's equivalent to OpenGL's rasterizer. It takes
care of joining all parts of the backend and rendering accordingly on
demand.
2020-01-16 23:05:15 -03:00
ReinUsesLisp
38e789c761 renderer_vulkan: Add header as placeholder 2020-01-16 22:54:15 -03:00
bunnei
e041f33569 Merge pull request #3300 from ReinUsesLisp/vk-texture-cache
vk_texture_cache: Implement generic texture cache on Vulkan
2020-01-16 19:19:26 -05:00
ReinUsesLisp
f09cd52980 vk_texture_cache: Address feedback 2020-01-16 18:23:10 -03:00
ReinUsesLisp
63ba41a26d shader/memory: Implement ATOMS.ADD.U32 2020-01-16 17:30:55 -03:00
ReinUsesLisp
0caab54b5d format_lookup_table: Fix ZF32_X24S8 component types
Component types for ZF32_X24S8 were using UNORM. Drivers will set FLOAT,
UINT, UNORM, UNORM; causing a format mismatch. This commit addresses
that.
2020-01-16 17:29:13 -03:00
Rodrigo Locatti
82e1285c1e vk_texture_cache: Fix typo in commentary
Co-Authored-By: MysticExile <30736337+MysticExile@users.noreply.github.com>
2020-01-16 16:59:46 -03:00
bunnei
30faf6a964 Merge pull request #3308 from lioncash/private
maxwell_3d: Make dirty_pointers private
2020-01-16 13:26:35 -05:00
bunnei
d23869811d Merge pull request #3304 from lioncash/fwd-decl
renderer_opengl/utils: Forward declare private structs
2020-01-16 11:21:18 -05:00
bunnei
a43ac8c79e Merge pull request #3307 from jroweboy/fix-git
Fix git version in scm_rev.cpp
2020-01-16 10:00:43 -05:00
Lioncash
9e874898f5 maxwell_3d: Make dirty_pointers private
This isn't used outside of the class itself, so we can make it private
for the time being.
2020-01-16 04:07:15 -05:00
James Rowe
b429095b61 Fix git version in scm_rev.cpp 2020-01-16 00:12:50 -07:00
ReinUsesLisp
c375d735e6 gl_state: Implement PROGRAM_POINT_SIZE
For gl_PointSize to have effect we have to activate
GL_PROGRAM_POINT_SIZE.
2020-01-15 16:14:17 -03:00
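In core desktop OpenGL, a gl_PointSize written by the vertex shader only takes effect when the GL_PROGRAM_POINT_SIZE capability is enabled. A minimal sketch, assuming a current GL context and a loader such as glad (the include is an assumption, not yuzu's setup):

#include <glad/glad.h> // assumed loader header; any loader exposing glEnable works

// Without this capability, the point size written by the shader is ignored
// and the fixed-function point size is used instead.
void EnableShaderControlledPointSize() {
    glEnable(GL_PROGRAM_POINT_SIZE);
}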
Lioncash
7af56dfa76 renderer_opengl/utils: Remove unused header inclusions
Nothing from these headers is used, so they can be removed.
2020-01-15 06:31:23 -05:00
Lioncash
06d30fbcca renderer_opengl/utils: Forward declare private structs
Keeps the definitions hidden and allows changes to the structs without
needing to recompile all users of classes containing said structs.
2020-01-15 06:30:01 -05:00
CJBok
635deb70d4 Moved analog direction logic to sdl_impl 2020-01-15 11:25:15 +01:00
CJBok
231d9c10f3 Corrected directional states sensitivity 2020-01-14 21:51:58 +01:00
ReinUsesLisp
66a1c777c9 gl_texture_cache: Use local variables to simplify DownloadTexture 2020-01-14 17:39:48 -03:00
ReinUsesLisp
cdb00546f0 gl_texture_cache: Fix format for RGBX16F 2020-01-14 17:38:33 -03:00
ReinUsesLisp
2d09467f6f gl_texture_cache: Use Snorm internal format for RG8S 2020-01-14 17:37:58 -03:00
ReinUsesLisp
02624c35ec gl_texture_cache: Use Snorm internal format for ABGR8S 2020-01-14 17:37:23 -03:00
Rodrigo Locatti
64cd46579b Merge pull request #3303 from lioncash/reorder
control_flow: Silence -Wreorder warning for CFGRebuildState
2020-01-14 16:15:18 -03:00
Rodrigo Locatti
81e9e229fa Merge pull request #3302 from lioncash/unused-var
gl_shader_cache: Remove unused variables
2020-01-14 16:14:47 -03:00
Lioncash
a1eee1749e control_flow: Silence -Wreorder warning for CFGRebuildState
Organizes the initializer list in the same order that the variables
would actually be initialized in.
2020-01-14 13:28:48 -05:00
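For reference, members are always initialized in declaration order, regardless of how the mem-initializer list is written; -Wreorder flags the mismatch because it can hide real initialization-order dependencies. A hypothetical example of the warning case:

// The constructor lists label_count first, but block_count is still
// initialized first because initialization follows declaration order.
// GCC/Clang emit -Wreorder here; reordering the list silences it.
struct RebuildStateLike {
    int block_count;
    int label_count;

    RebuildStateLike() : label_count(0), block_count(0) {}
};

int main() {
    RebuildStateLike state;
    return state.block_count + state.label_count;
}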
bunnei
a83e28b237 Merge pull request #3296 from Simek/hotkeys_resize
GUI/configure: resize hotkeys action column to fit content
2020-01-14 13:17:16 -05:00
Lioncash
f10ea944e0 gl_shader_cache: Remove unused STAGE_RESERVED_UBOS constant
Given this isn't used, this can be removed entirely.
2020-01-14 13:16:52 -05:00
Lioncash
4cd5ad90f3 gl_shader_cache: std::move entries in CachedShader constructor
Avoids several reallocations of std::vector instances where applicable.
2020-01-14 13:14:16 -05:00
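The pattern being applied: take the container by value and std::move it into the member, so an rvalue argument is moved twice instead of deep-copied. The class below is a hypothetical stand-in, not the actual CachedShader:

#include <cstddef>
#include <string>
#include <utility>
#include <vector>

class ShaderEntriesSketch {
public:
    // Pass by value, then move: callers with an rvalue pay for two cheap
    // moves rather than one full copy of the vector.
    explicit ShaderEntriesSketch(std::vector<std::string> entries)
        : entries_{std::move(entries)} {}

    std::size_t Count() const {
        return entries_.size();
    }

private:
    std::vector<std::string> entries_;
};

int main() {
    std::vector<std::string> names{"vertex", "fragment"};
    const ShaderEntriesSketch cached{std::move(names)};
    return static_cast<int>(cached.Count());
}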
Lioncash
15a6840e7a gl_shader_cache: Remove unused entries variable in BuildShader()
Eliminates a few unnecessary constructions of std::vectors.
2020-01-14 13:11:49 -05:00
bunnei
55f95e7f26 Merge pull request #3287 from ReinUsesLisp/ldg-stg-16
shader_ir/memory: Implement u16 and u8 for STG and LDG
2020-01-14 09:57:08 -05:00
bunnei
15788ffcde Merge pull request #3288 from ReinUsesLisp/uncurse-aoffi
shader_ir/texture: Simplify AOFFI code
2020-01-13 23:52:12 -05:00
bunnei
6985eea519 Merge pull request #3290 from ReinUsesLisp/gl-clamp
maxwell_to_vk: Implement GL_CLAMP hacking Nvidia's driver
2020-01-13 19:16:06 -05:00
bunnei
e749f17257 Merge pull request #3292 from degasus/heap_space_fix
core/kernel: Fix GetTotalPhysicalMemoryUsed.
2020-01-13 19:15:43 -05:00
ReinUsesLisp
09e17fbb0f vk_texture_cache: Implement generic texture cache on Vulkan
It currently ignores PBO linearizations since these should be dropped as
soon as possible on OpenGL.
2020-01-13 20:37:50 -03:00
ReinUsesLisp
2b2712fa95 texture_cache/surface_params: Make GetNumLayers public 2020-01-13 20:35:43 -03:00
Bartosz Kaszubowski
da3049aa74 GUI: add few missing hotkeys to main menu 2020-01-13 00:49:44 +01:00
CJBok
83be9fc96d Merge remote-tracking branch 'upstream/master' 2020-01-12 23:21:30 +01:00
Bartosz Kaszubowski
6726e8b784 GUI/configure: resize hotkeys column to content 2020-01-12 22:46:28 +01:00
Fernando Sahmkow
43fc793439 Merge pull request #3283 from ReinUsesLisp/vk-compute-pass
vk_compute_pass: Add compute passes to emulate missing Vulkan features
2020-01-12 11:14:59 -04:00
Markus Wick
c76ffa5019 core/kernel: Fix GetTotalPhysicalMemoryUsed.
module._memory was already moved over to a new shared_ptr.
So code_memory_size was not increased at all.

This lowers the heap space and so saves a bit of memory, usually between 50 and 100 MB.

This fixes a regression of c0a01f3adc
2020-01-11 14:04:44 +01:00
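A hedged sketch of the bug class this message describes (names are hypothetical): once the member has been moved into a new shared_ptr, reading through the original member observes a null pointer, so nothing is added to the running total.

#include <cstddef>
#include <cstdio>
#include <memory>
#include <utility>
#include <vector>

struct ModuleSketch {
    std::shared_ptr<std::vector<unsigned char>> memory =
        std::make_shared<std::vector<unsigned char>>(1024);
};

int main() {
    ModuleSketch module;
    // The memory is handed over to a new owner...
    std::shared_ptr<std::vector<unsigned char>> owner = std::move(module.memory);

    std::size_t total = 0;
    // ...so this accounting path sees a null shared_ptr and adds nothing.
    // The size has to be taken from the new owner (or before the move).
    if (module.memory) {
        total += module.memory->size();
    }
    std::printf("total = %zu (expected 1024)\n", total);
}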
Rodrigo Locatti
b1138e5ea1 vk_compute_pass: Address feedback
Comment hardcoded SPIR-V modules.
2020-01-10 22:46:34 -03:00
ReinUsesLisp
3d46709b7f maxwell_to_vk: Implement GL_CLAMP hacking Nvidia's driver
Nvidia's driver defaults invalid enumerations to GL_CLAMP. Vulkan
doesn't expose GL_CLAMP through its API, but we can hack it on Nvidia's
driver using the internal driver defaults.
2020-01-10 17:12:50 -03:00
ReinUsesLisp
13021b534c shader_ir/texture: Simplify AOFFI code 2020-01-09 03:50:37 -03:00
ReinUsesLisp
e2a2a556b9 shader_ir/memory: Implement u16 and u8 for STG and LDG
Using the same technique we used for u8 on LDG, implement u16.

In the case of STG, load memory and insert the value we want to set
into it with bitfieldInsert. Then set that value.
2020-01-09 02:12:29 -03:00
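The read-modify-write idea can be expressed outside of GLSL as well: load the containing 32-bit word, insert the narrow value at the right bit offset (what bitfieldInsert does), and store the full word back. The helper below is a hypothetical C++ equivalent, not the emitted shader code:

#include <cstdint>
#include <cstdio>

// Insert `bits` bits of `value` into `base` starting at `offset`.
// Assumes 0 < bits < 32 and offset + bits <= 32.
std::uint32_t BitfieldInsert(std::uint32_t base, std::uint32_t value,
                             std::uint32_t offset, std::uint32_t bits) {
    const std::uint32_t mask = ((1u << bits) - 1u) << offset;
    return (base & ~mask) | ((value << offset) & mask);
}

int main() {
    std::uint32_t word = 0xAABBCCDDu;
    // Store a 16-bit value into the upper half of the word, as an unaligned
    // 16-bit store would after loading the whole word.
    word = BitfieldInsert(word, 0x1234u, 16, 16);
    std::printf("0x%08X\n", word); // prints 0x1234CCDD
}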
CJBok
ae7fd01e38 hid: Fix analog sticks directional states 2020-01-09 02:40:55 +01:00
ReinUsesLisp
908e085d02 vk_compute_pass: Add compute passes to emulate missing Vulkan features
This currently only supports quad arrays and u8 indices.

In the future we can remove quad arrays with a table written from the
CPU, but this was used to bootstrap the helpers for the other passes, and it
was left in the code.

The blob code is generated from the "shaders/" directory. Read the
instructions there to know how to generate the SPIR-V.
2020-01-08 19:24:26 -03:00
ReinUsesLisp
82a64da077 vk_shader_util: Add helper to build SPIR-V shaders 2020-01-08 19:22:20 -03:00
Fernando Sahmkow
80436c1330 Merge pull request #3279 from ReinUsesLisp/vk-pipeline-cache
vk_pipeline_cache: Initial implementation
2020-01-08 17:31:20 -04:00
bunnei
319c4d2108 Merge pull request #3272 from bunnei/vi-close-layer
service: vi: Implement CloseLayer.
2020-01-07 12:45:34 -05:00
ReinUsesLisp
6888d776ff vk_pipeline_cache: Initial implementation
Given a pipeline key, this cache returns a pipeline abstraction (for
graphics or compute).
2020-01-06 22:02:26 -03:00
ReinUsesLisp
2effdeb924 vk_graphics_pipeline: Initial implementation
This abstraction represents the state of the 3D engine at a given draw.
Instead of changing individual bits of the pipeline, as is done in
APIs like D3D11, OpenGL and NVN, on Vulkan we are forced to put
everything together into a single, immutable object.

It takes advantage of the few dynamic states Vulkan offers.
2020-01-06 22:02:26 -03:00
ReinUsesLisp
dc96a59fa0 vk_compute_pipeline: Initial implementation
This abstraction represents a Vulkan compute pipeline.
2020-01-06 22:02:26 -03:00
ReinUsesLisp
b392a5986e vk_pipeline_cache: Add file and define descriptor update template filler
This function allows us to share code between compute and graphics
pipelines compilation.
2020-01-06 22:02:26 -03:00
ReinUsesLisp
3142f1b597 fixed_pipeline_state: Add depth clamp 2020-01-06 22:02:26 -03:00
ReinUsesLisp
9c548146ca vk_rasterizer: Add placeholder 2020-01-06 22:02:26 -03:00
bunnei
5be00cba15 Merge pull request #3276 from ReinUsesLisp/pipeline-reqs
vk_update_descriptor/vk_renderpass_cache: Add pipeline cache dependencies
2020-01-06 17:03:34 -05:00
bunnei
ee9b4a7f9a Merge pull request #3278 from ReinUsesLisp/vk-memory-manager
renderer_vulkan: Buffer cache, stream buffer and memory manager changes
2020-01-06 17:03:04 -05:00
ReinUsesLisp
5aeff9aff5 vk_renderpass_cache: Initial implementation
The renderpass cache is used to avoid creating renderpasses on each
draw. The hashed structure is not currently optimized.
2020-01-06 18:28:32 -03:00
ReinUsesLisp
322d6a0311 vk_update_descriptor: Initial implementation
The update descriptor is used to store in flat memory a large chunk of
staging data used to update descriptor sets through templates. It
provides a push interface to easily insert descriptors following the
current pipeline. The order used in the descriptor update template has
to be implicitly followed. We can catch bugs here using validation
layers.
2020-01-06 18:28:32 -03:00
ReinUsesLisp
5b01f80a12 vk_stream_buffer/vk_buffer_cache: Avoid halting and use generic cache
Before this commit, once the stream buffer was full (no more bytes to
write before looping), it waited for all previous operations to finish.
This was a temporary solution and had a noticeable performance penalty
(from what a profiler showed).

To avoid this, usages of the stream buffer are marked with fences, and
once it loops we wait for them to be signaled. On average this never
waits. Each fence knows where its usage finishes, resulting in a
non-paged stream buffer.

On the other side, the buffer cache is reimplemented using the generic
buffer cache. It makes use of the staging buffer pool and the new
stream buffer.
2020-01-06 18:13:41 -03:00
ReinUsesLisp
ceb851b590 vk_memory_manager: Misc changes
* Allocate memory in discrete exponentially increasing chunks until the
128 MiB threshold. Allocations larger than that increase linearly by
256 MiB (depending on the required size). This allows using small
allocations for small resources.

* Move memory maps to a RAII abstraction. To optimize for debugging
tools (like RenderDoc) users will map/unmap on usage. If this ever
becomes a noticeable overhead (from my profiling it doesn't) we can
transparently move to persistent memory maps without harming the API,
getting optimal performance for both gameplay and debugging.

* Improve messages on exceptional situations.

* Fix typos "requeriments" -> "requirements".

* Small style changes.
2020-01-06 18:13:41 -03:00
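A sketch of the allocation-size policy the first bullet describes: exponential chunk growth below the 128 MiB threshold, then linear 256 MiB steps above it. The starting chunk size and rounding are assumptions for illustration; this is not the vk_memory_manager implementation.

#include <cstddef>
#include <cstdio>

constexpr std::size_t MiB = 1024 * 1024;

std::size_t ChooseChunkSize(std::size_t required) {
    constexpr std::size_t threshold = 128 * MiB;
    if (required <= threshold) {
        std::size_t chunk = 1 * MiB; // assumed starting size
        while (chunk < required) {
            chunk *= 2; // exponential growth for small allocations
        }
        return chunk;
    }
    // Above the threshold, round up to the next 256 MiB multiple.
    constexpr std::size_t step = 256 * MiB;
    return ((required + step - 1) / step) * step;
}

int main() {
    std::printf("%zu MiB\n", ChooseChunkSize(3 * MiB) / MiB);   // 4 MiB
    std::printf("%zu MiB\n", ChooseChunkSize(200 * MiB) / MiB); // 256 MiB
}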
ReinUsesLisp
85bb6a6f08 vk_buffer_cache: Temporarily remove buffer cache
This is intended for a follow up commit to avoid circular dependencies.
2020-01-06 17:58:46 -03:00
bunnei
984563b773 Merge pull request #3277 from ReinUsesLisp/make-current
yuzu/bootmanager: Remove {glx,wgl}MakeCurrent on SwapBuffers
2020-01-06 14:09:19 -05:00
ReinUsesLisp
8306703a7d yuzu/bootmanager: Remove {glx,wgl}MakeCurrent on SwapBuffers
MakeCurrent is a costly operation (according to Nsight's profiler it takes a tenth
of a millisecond to complete), and we don't have a reason to call it
because:
- Qt no longer signals a warning if it's not called
- yuzu no longer supports macOS
2020-01-06 14:02:47 -03:00
bunnei
09908207fb Merge pull request #3261 from degasus/page_table
core/memory + arm/dynarmic: Use a global offset within our arm page table.
2020-01-06 11:56:59 -05:00
bunnei
89fc75d769 Merge pull request #3257 from degasus/no_busy_loops
video_core: Block in WaitFence.
2020-01-06 00:09:57 -05:00
Fernando Sahmkow
56e450a3f7 Merge pull request #3264 from ReinUsesLisp/vk-descriptor-pool
vk_descriptor_pool: Initial implementation
2020-01-05 15:54:41 -04:00
bunnei
6fe51f398f Merge pull request #2945 from FernandoS27/fix-bcat
nifm: Only return that there's an internet connection when there's a BCATServer
2020-01-05 02:17:16 -05:00
bunnei
cd0a7dfdbc Merge pull request #3258 from FernandoS27/shader-amend
Shader_IR: add the ability to amend code in the shader ir.
2020-01-04 14:05:17 -05:00
Fernando Sahmkow
3dd6b55851 Shader_IR: Address Feedback 2020-01-04 14:40:57 -04:00
Fernando Sahmkow
a1667a7b46 Shader_IR: Implement TXD Array.
This commit extends the compilation of TXD to support array samplers on
TXD.
2020-01-04 13:28:02 -04:00
bunnei
64c5631579 service: vi: Implement CloseLayer.
- Needed for Undertale.
2020-01-04 00:45:06 -05:00
Rodrigo Locatti
6e347d8d1b Update src/video_core/renderer_vulkan/vk_descriptor_pool.cpp
Co-Authored-By: Mat M. <mathew1800@gmail.com>
2020-01-03 17:34:30 -03:00
CJBok
2fa9a96309 const correction 2020-01-03 10:30:51 +01:00
CJBok
90f9c830ca clang 2020-01-03 09:31:54 +01:00
CJBok
351e3fb72e Update configure_input_player.cpp 2020-01-03 09:11:34 +01:00
CJBok
4a566b9828 Added deadzone controls for sdl engine at input settings 2020-01-03 08:54:57 +01:00
ReinUsesLisp
1fe7df4517 vk_descriptor_pool: Initial implementation
Create a large descriptor pool where we allocate all our descriptors
from. It has to be wide enough to support any pipeline, hence its large
numbers.

If the descriptor pool is filled, we allocate more memory at that moment.
This way we can take advantage of permissive drivers like Nvidia's that
allocate more descriptors than what the spec requires.
2020-01-01 16:44:06 -03:00
Markus Wick
0986caa8d8 core/memory + arm/dynarmic: Use a global offset within our arm page table.
This saves us two x64 instructions per load/store instruction.

TODO: Clean up our memory code. We can use this optimization here as well.
2020-01-01 12:24:54 +01:00
Fernando Sahmkow
b3371ed09e Shader_IR: add the ability to amend code in the shader ir.
This commit introduces a mechanism by which shader IR code can be
amended and extended. This is useful for tracking algorithms where certain
information can be derived from before the track, such as indexes to array
samplers.
2019-12-30 15:31:48 -04:00
Markus Wick
cb9dd01ffd video_core: Block in WaitFence.
This function is called rarely and blocks quite often for a long time.
So don't waste power and let the CPU sleep.

This might also increase performance, as the other cores might be allowed to clock higher.
2019-12-30 13:04:53 +01:00
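The design choice here is to block on a synchronization primitive instead of spinning. A minimal sketch of the difference using a condition variable (hypothetical class, not the actual WaitFence code):

#include <condition_variable>
#include <mutex>

class FenceSketch {
public:
    void Signal() {
        {
            std::lock_guard lock{mutex_};
            signaled_ = true;
        }
        condvar_.notify_all();
    }

    // Blocking wait: the thread sleeps until signaled instead of burning a
    // core in a busy loop, saving power and leaving thermal/boost headroom
    // for the other cores.
    void Wait() {
        std::unique_lock lock{mutex_};
        condvar_.wait(lock, [this] { return signaled_; });
    }

private:
    std::mutex mutex_;
    std::condition_variable condvar_;
    bool signaled_ = false;
};

int main() {
    FenceSketch fence;
    fence.Signal();
    fence.Wait(); // returns immediately since the fence is already signaled
}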
Fernando Sahmkow
3c95e49c42 nifm: Only return that there's an internet connection when there's a BCATServer
This helps games that need internet for other purposes to boot, as the rest
of our internet infrastructure is incomplete.
2019-11-06 23:10:32 -05:00
170 changed files with 8728 additions and 1611 deletions

View File

@@ -350,6 +350,13 @@ function(create_target_directory_groups target_name)
endforeach()
endfunction()
# Prevent boost from linking against libs when building
add_definitions(-DBOOST_ERROR_CODE_HEADER_ONLY
-DBOOST_SYSTEM_NO_LIB
-DBOOST_DATE_TIME_NO_LIB
-DBOOST_REGEX_NO_LIB
)
enable_testing()
add_subdirectory(externals)
add_subdirectory(src)

View File

@@ -5,6 +5,10 @@ function(get_timestamp _var)
endfunction()
list(APPEND CMAKE_MODULE_PATH "${SRC_DIR}/externals/cmake-modules")
# Find the package here with the known path so that the GetGit commands can find it as well
find_package(Git QUIET PATHS "${GIT_EXECUTABLE}")
# generate git/build information
include(GetGitRevisionDescription)
get_git_head_revision(GIT_REF_SPEC GIT_REV)

View File

@@ -2,7 +2,8 @@ QToolTip {
border: 1px solid #76797C;
background-color: #5A7566;
color: white;
padding: 0px; /*remove padding, for fix combobox tooltip.*/
/*remove padding, for fix combobox tooltip.*/
padding: 0;
opacity: 200;
}
@@ -13,7 +14,7 @@ QWidget {
selection-color: #eff0f1;
background-clip: border;
border-image: none;
border: 0px transparent black;
border: 0;
outline: 0;
}
@@ -27,10 +28,10 @@ QWidget:item:selected {
}
QCheckBox {
spacing: 5px;
spacing: 6px;
outline: none;
color: #eff0f1;
margin-bottom: 2px;
margin: 0 2px 1px 0;
}
QCheckBox:disabled {
@@ -163,7 +164,7 @@ QMenuBar::item:selected {
}
QMenuBar::item:pressed {
border: 1px solid #76797C;
border: 1px solid #18465d;
background-color: #3daee9;
color: #eff0f1;
margin-bottom: -1px;
@@ -171,9 +172,9 @@ QMenuBar::item:pressed {
}
QMenu {
border: 1px solid #76797C;
border: 1px solid #434242;
padding: 2px;
color: #eff0f1;
margin: 2px;
}
QMenu::icon {
@@ -181,7 +182,7 @@ QMenu::icon {
}
QMenu::item {
padding: 5px 30px 5px 30px;
padding: 5px 16px 5px 40px;
border: 1px solid transparent;
/* reserve space for selection border */
}
@@ -190,22 +191,30 @@ QMenu::item:selected {
color: #eff0f1;
}
QMenu::separator {
height: 2px;
background: lightblue;
margin-left: 10px;
margin-right: 5px;
QMenu::item:disabled {
color: #54575B;
}
QMenu::item:disabled:hover,
QMenu::item:disabled:selected {
background-color: #393e43;
color: #666;
}
QMenu::separator,
QMenuBar::separator {
height: 1px;
background-color: #54575B;
margin: 2px 4px 2px 40px;
}
QMenu::indicator {
margin: 0 -26px 0 8px;
width: 18px;
height: 18px;
}
/* non-exclusive indicator = check box style indicator
(see QActionGroup::setExclusive) */
/* non-exclusive indicator = check box style indicator (see QActionGroup::setExclusive) */
QMenu::indicator:non-exclusive:unchecked {
image: url(:/qss_icons/rc/checkbox_unchecked.png);
}
@@ -222,9 +231,7 @@ QMenu::indicator:non-exclusive:checked:selected {
image: url(:/qss_icons/rc/checkbox_checked_disabled.png);
}
/* exclusive indicator = radio button style indicator (see QActionGroup::setExclusive) */
QMenu::indicator:exclusive:unchecked {
image: url(:/qss_icons/rc/radio_unchecked.png);
}
@@ -242,39 +249,46 @@ QMenu::indicator:exclusive:checked:selected {
}
QMenu::right-arrow {
margin: 5px;
margin-right: 10px;
image: url(:/qss_icons/rc/right_arrow.png)
}
QWidget:disabled {
color: #454545;
color: #4f515b;
background-color: #31363b;
}
QAbstractItemView {
alternate-background-color: #31363b;
alternate-background-color: #2c2f32;
color: #eff0f1;
border: 1px solid #3A3939;
border-radius: 2px;
}
QWidget:focus,
QMenuBar:focus {
QAbstractItemView:disabled,
QAbstractItemView:read-only {
alternate-background-color: #232629;
}
QWidget:focus {
border: 1px solid #3daee9;
}
QTabWidget:focus,
QCheckBox:focus,
QRadioButton:focus,
QSlider:focus {
QSlider:focus,
QTreeView:focus,
QMenu:focus,
QMenuBar:focus,
QTabBar:focus {
border: none;
}
QLineEdit {
background-color: #232629;
padding: 5px;
border-style: solid;
border: 1px solid #76797C;
border: 1px solid #54575B;
border-radius: 2px;
color: #eff0f1;
}
@@ -284,9 +298,10 @@ QAbstractItemView QLineEdit {
}
QGroupBox {
border: 1px solid #76797C;
border: 1px solid #54575B;
border-radius: 2px;
margin-top: 20px;
margin-top: 12px;
padding-top: 2px;
}
QGroupBox::title {
@@ -294,12 +309,12 @@ QGroupBox::title {
subcontrol-position: top center;
padding-left: 10px;
padding-right: 10px;
padding-top: 10px;
padding-top: 2px;
}
QAbstractScrollArea {
border-radius: 2px;
border: 1px solid #76797C;
border: 1px solid #54575B;
background-color: transparent;
}
@@ -318,7 +333,7 @@ QScrollBar::handle:horizontal {
}
QScrollBar::add-line:horizontal {
margin: 0px 3px 0px 3px;
margin: 0 3px;
border-image: url(:/qss_icons/rc/right_arrow_disabled.png);
width: 10px;
height: 10px;
@@ -327,7 +342,7 @@ QScrollBar::add-line:horizontal {
}
QScrollBar::sub-line:horizontal {
margin: 0px 3px 0px 3px;
margin: 0 3px;
border-image: url(:/qss_icons/rc/left_arrow_disabled.png);
height: 10px;
width: 10px;
@@ -378,7 +393,7 @@ QScrollBar::handle:vertical {
}
QScrollBar::sub-line:vertical {
margin: 3px 0px 3px 0px;
margin: 3px 0;
border-image: url(:/qss_icons/rc/up_arrow_disabled.png);
height: 10px;
width: 10px;
@@ -387,7 +402,7 @@ QScrollBar::sub-line:vertical {
}
QScrollBar::add-line:vertical {
margin: 3px 0px 3px 0px;
margin: 3px 0;
border-image: url(:/qss_icons/rc/down_arrow_disabled.png);
height: 10px;
width: 10px;
@@ -426,15 +441,14 @@ QScrollBar::sub-page:vertical {
QTextEdit {
background-color: #232629;
color: #eff0f1;
border: 1px solid #76797C;
border: 1px solid #54575B;
}
QPlainTextEdit {
background-color: #232629;
;
color: #eff0f1;
border-radius: 2px;
border: 1px solid #76797C;
border: 1px solid #54575B;
}
QHeaderView::section {
@@ -466,15 +480,6 @@ QMainWindow::separator:hover {
spacing: 2px;
}
QMenu::separator {
height: 1px;
background-color: #76797C;
color: white;
padding-left: 4px;
margin-left: 10px;
margin-right: 5px;
}
QFrame {
border-radius: 2px;
border: 1px solid #76797C;
@@ -517,25 +522,19 @@ QToolButton#qt_toolbar_ext_button {
QPushButton {
color: #eff0f1;
background-color: #31363b;
border-width: 1px;
border-color: #76797C;
border-color: #54575B;
border-style: solid;
padding: 5px;
padding: 6px 4px;
border-radius: 2px;
outline: none;
min-width: 100px;
background-color: #232629;
}
QPushButton:disabled {
background-color: #31363b;
border-width: 1px;
border-color: #454545;
border-style: solid;
padding-top: 5px;
padding-bottom: 5px;
padding-left: 10px;
padding-right: 10px;
border-radius: 2px;
color: #454545;
}
@@ -552,11 +551,11 @@ QPushButton:pressed {
QComboBox {
selection-background-color: #3daee9;
border-style: solid;
border: 1px solid #76797C;
border: 1px solid #54575B;
border-radius: 2px;
padding: 5px;
padding: 4px 6px;
min-width: 75px;
background-color: #232629;
}
QPushButton:checked {
@@ -570,15 +569,12 @@ QAbstractSpinBox:hover,
QLineEdit:hover,
QTextEdit:hover,
QPlainTextEdit:hover,
QAbstractView:hover,
QTreeView:hover {
QAbstractView:hover {
border: 1px solid #3daee9;
color: #eff0f1;
}
QComboBox:on {
padding-top: 3px;
padding-left: 4px;
selection-background-color: #4a4a4a;
}
@@ -592,6 +588,7 @@ QComboBox QAbstractItemView {
QComboBox::drop-down {
subcontrol-origin: padding;
subcontrol-position: top right;
left: -6px;
width: 15px;
border-left-width: 0px;
border-left-color: darkgray;
@@ -611,8 +608,8 @@ QComboBox::down-arrow:focus {
}
QAbstractSpinBox {
padding: 5px;
border: 1px solid #76797C;
padding: 4px 6px;
border: 1px solid #54575B;
background-color: #232629;
color: #eff0f1;
border-radius: 2px;
@@ -623,12 +620,14 @@ QAbstractSpinBox:up-button {
background-color: transparent;
subcontrol-origin: border;
subcontrol-position: center right;
left: -6px;
}
QAbstractSpinBox:down-button {
background-color: transparent;
subcontrol-origin: border;
subcontrol-position: center left;
right: -6px;
}
QAbstractSpinBox::up-arrow,
@@ -655,22 +654,27 @@ QAbstractSpinBox::down-arrow:hover {
image: url(:/qss_icons/rc/down_arrow.png);
}
QLabel {
border: 0px solid black;
QLabel,
QTabWidget {
border: 0;
}
QTabWidget {
border: 0px transparent black;
padding-top: 1px;
}
QTabWidget::pane {
border: 1px solid #76797C;
padding: 5px;
margin: 0px;
position: absolute;
top: -1px;
border-top-right-radius: 2px;
border-bottom-right-radius: 2px;
border-bottom-left-radius: 2px;
}
QTabWidget::tab-bar {
/* left: 5px; move to the right by 5px */
overflow: visible;
}
QTabBar {
@@ -678,10 +682,6 @@ QTabBar {
border-radius: 3px;
}
QTabBar:focus {
border: 0px transparent black;
}
QTabBar::close-button {
image: url(:/qss_icons/rc/close.png);
background: transparent;
@@ -697,36 +697,33 @@ QTabBar::close-button:pressed {
background: transparent;
}
/* TOP TABS */
QTabBar::tab:top {
color: #eff0f1;
border: 1px solid #76797C;
border-bottom: 1px transparent black;
background-color: #31363b;
padding: 5px;
min-width: 50px;
border: 1px solid #54575B;
background-color: #2a2f33;
padding: 4px 16px 5px;
min-width: 36px;
border-top-left-radius: 2px;
border-top-right-radius: 2px;
}
QTabBar::tab:top:selected {
color: #eff0f1;
background-color: #54575B;
border: 1px solid #76797C;
border-bottom: 2px solid #3daee9;
border-top-left-radius: 2px;
border-top-right-radius: 2px;
border-color: #76797C;
background-color: #31363b;
border-bottom-color: #31363b;
}
QTabBar::tab:top:!selected {
margin-top: 1px;
border-bottom-color: #76797C;
}
QTabBar::tab:top:!selected:hover {
background-color: #3daee9;
}
/* BOTTOM TABS */
QTabBar::tab:bottom {
color: #eff0f1;
border: 1px solid #76797C;
@@ -751,9 +748,7 @@ QTabBar::tab:bottom:!selected:hover {
background-color: #3daee9;
}
/* LEFT TABS */
QTabBar::tab:left {
color: #eff0f1;
border: 1px solid #76797C;
@@ -778,9 +773,7 @@ QTabBar::tab:left:!selected:hover {
background-color: #3daee9;
}
/* RIGHT TABS */
QTabBar::tab:right {
color: #eff0f1;
border: 1px solid #76797C;
@@ -848,7 +841,7 @@ QDockWidget::float-button:pressed {
QTreeView,
QListView {
border: 1px solid #76797C;
border: 1px solid #54575B;
background-color: #232629;
}
@@ -979,8 +972,8 @@ QSlider::handle:vertical {
}
QToolButton {
background-color: transparent;
border: 1px transparent #76797C;
background-color: #232629;
border: 1px solid #54575B;
border-radius: 2px;
margin: 3px;
padding: 5px;
@@ -989,7 +982,6 @@ QToolButton {
QToolButton[popupMode="1"] {
/* only for MenuButtonPopup */
padding-right: 20px;
/* make way for the popup button */
border: 1px #76797C;
border-radius: 5px;
}
@@ -997,7 +989,6 @@ QToolButton[popupMode="1"] {
QToolButton[popupMode="2"] {
/* only for InstantPopup */
padding-right: 10px;
/* make way for the popup button */
border: 1px #76797C;
}
@@ -1016,19 +1007,14 @@ QToolButton::menu-button:pressed {
padding: 5px;
}
/* the subcontrol below is used only in the InstantPopup or DelayedPopup mode */
QToolButton::menu-indicator {
image: url(:/qss_icons/rc/down_arrow.png);
top: -7px;
left: -2px;
/* shift it a bit */
}
/* the subcontrols below are used only in the MenuButtonPopup mode */
QToolButton::menu-button {
border: 1px transparent #76797C;
border-top-right-radius: 6px;
@@ -1053,14 +1039,22 @@ QPushButton::menu-indicator {
}
QTableView {
border: 1px solid #76797C;
border: 1px solid #54575B;
gridline-color: #31363b;
background-color: #232629;
}
QTreeView:disabled {
background-color: #1f2225;
}
QTableView,
QHeaderView {
border-radius: 0px;
border-radius: 0;
}
QListView:focus {
border-color: #54575B;
}
QTableView::item:pressed,
@@ -1078,7 +1072,7 @@ QListView::item:selected:active {
}
QHeaderView {
background-color: #31363b;
background-color: #403F3F;
border: 1px transparent;
border-radius: 0px;
margin: 0px;
@@ -1086,30 +1080,32 @@ QHeaderView {
}
QHeaderView::section {
background-color: #31363b;
background-color: #232629;
color: #eff0f1;
padding: 5px;
border: 1px solid #76797C;
padding: 0 5px;
border: 1px solid #434242;
border-bottom: 0;
border-radius: 0px;
text-align: center;
}
QHeaderView::section::vertical::first,
QHeaderView::section::vertical::only-one {
border-top: 1px solid #76797C;
border-top: 1px solid #31363b;
}
QHeaderView::section::vertical {
border-top: transparent;
}
QHeaderView::section::horizontal,
QHeaderView::section::horizontal::first,
QHeaderView::section::horizontal::only-one {
border-left: 1px solid #76797C;
border-left: transparent;
}
QHeaderView::section::horizontal {
border-left: transparent;
QHeaderView::section::horizontal::last {
border-right: transparent;
}
QHeaderView::section:checked {
@@ -1117,9 +1113,7 @@ QHeaderView::section:checked {
background-color: #334e5e;
}
/* style the sort indicator */
/* sort indicator */
QHeaderView::down-arrow {
image: url(:/qss_icons/rc/down_arrow.png);
}
@@ -1149,14 +1143,13 @@ QToolBox::tab {
}
QToolBox::tab:selected {
/* italicize selected tabs */
font: italic;
background-color: #31363b;
border-color: #3daee9;
}
QStatusBar::item {
border: 0px transparent dark;
border: 0;
}
QFrame[height="3"],
@@ -1193,7 +1186,6 @@ QProgressBar::chunk {
QDateEdit {
selection-background-color: #3daee9;
border-style: solid;
border: 1px solid #3375A3;
border-radius: 2px;
padding: 1px;
@@ -1217,7 +1209,7 @@ QDateEdit::drop-down {
subcontrol-origin: padding;
subcontrol-position: top right;
width: 15px;
border-left-width: 0px;
border-left-width: 0;
border-left-color: darkgray;
border-left-style: solid;
border-top-right-radius: 3px;
@@ -1233,3 +1225,14 @@ QDateEdit::down-arrow:hover,
QDateEdit::down-arrow:focus {
image: url(:/qss_icons/rc/down_arrow.png);
}
QComboBox:disabled,
QPushButton:disabled,
QAbstractSpinBox:disabled,
QDateEdit:disabled,
QLineEdit:disabled,
QTextEdit:disabled,
QToolButton:disabled,
QPlainTextEdit:disabled {
background-color: #2b2e31;
}

View File

@@ -77,6 +77,15 @@ else()
add_compile_options("-static")
endif()
endif()
if(CMAKE_SYSTEM_NAME STREQUAL "Linux" OR MINGW)
# GNU ar: Create thin archive files.
# Requires binutils-2.19 or later.
set(CMAKE_C_ARCHIVE_CREATE "<CMAKE_AR> qcTP <TARGET> <LINK_FLAGS> <OBJECTS>")
set(CMAKE_C_ARCHIVE_APPEND "<CMAKE_AR> qTP <TARGET> <LINK_FLAGS> <OBJECTS>")
set(CMAKE_CXX_ARCHIVE_CREATE "<CMAKE_AR> qcTP <TARGET> <LINK_FLAGS> <OBJECTS>")
set(CMAKE_CXX_ARCHIVE_APPEND "<CMAKE_AR> qTP <TARGET> <LINK_FLAGS> <OBJECTS>")
endif()
endif()
add_subdirectory(common)

View File

@@ -15,6 +15,10 @@ endif ()
if (DEFINED ENV{DISPLAYVERSION})
set(DISPLAY_VERSION $ENV{DISPLAYVERSION})
endif ()
# Pass the path to git to the GenerateSCMRev.cmake as well
find_package(Git QUIET)
add_custom_command(OUTPUT scm_rev.cpp
COMMAND ${CMAKE_COMMAND}
-DSRC_DIR="${CMAKE_SOURCE_DIR}"
@@ -23,6 +27,7 @@ add_custom_command(OUTPUT scm_rev.cpp
-DTITLE_BAR_FORMAT_RUNNING="${TITLE_BAR_FORMAT_RUNNING}"
-DBUILD_TAG="${BUILD_TAG}"
-DBUILD_ID="${DISPLAY_VERSION}"
-DGIT_EXECUTABLE="${GIT_EXECUTABLE}"
-P "${CMAKE_SOURCE_DIR}/CMakeModules/GenerateSCMRev.cmake"
DEPENDS
# WARNING! It was too much work to try and make a common location for this list,

View File

@@ -120,7 +120,7 @@ private:
duration_cast<std::chrono::microseconds>(steady_clock::now() - time_origin);
entry.log_class = log_class;
entry.log_level = log_level;
entry.filename = Common::TrimSourcePath(filename);
entry.filename = filename;
entry.line_num = line_nr;
entry.function = function;
entry.message = std::move(message);

View File

@@ -23,7 +23,7 @@ struct Entry {
std::chrono::microseconds timestamp;
Class log_class;
Level log_level;
std::string filename;
const char* filename;
unsigned int line_num;
std::string function;
std::string message;

View File

@@ -9,6 +9,15 @@
namespace Log {
// trims up to and including the last of ../, ..\, src/, src\ in a string
constexpr const char* TrimSourcePath(std::string_view source) {
const auto rfind = [source](const std::string_view match) {
return source.rfind(match) == source.npos ? 0 : (source.rfind(match) + match.size());
};
auto idx = std::max({rfind("src/"), rfind("src\\"), rfind("../"), rfind("..\\")});
return source.data() + idx;
}
/// Specifies the severity or level of detail of the log message.
enum class Level : u8 {
Trace, ///< Extremely detailed and repetitive debugging information that is likely to
@@ -141,24 +150,24 @@ void FmtLogMessage(Class log_class, Level log_level, const char* filename, unsig
#ifdef _DEBUG
#define LOG_TRACE(log_class, ...) \
::Log::FmtLogMessage(::Log::Class::log_class, ::Log::Level::Trace, __FILE__, __LINE__, \
__func__, __VA_ARGS__)
::Log::FmtLogMessage(::Log::Class::log_class, ::Log::Level::Trace, \
::Log::TrimSourcePath(__FILE__), __LINE__, __func__, __VA_ARGS__)
#else
#define LOG_TRACE(log_class, fmt, ...) (void(0))
#endif
#define LOG_DEBUG(log_class, ...) \
::Log::FmtLogMessage(::Log::Class::log_class, ::Log::Level::Debug, __FILE__, __LINE__, \
__func__, __VA_ARGS__)
::Log::FmtLogMessage(::Log::Class::log_class, ::Log::Level::Debug, \
::Log::TrimSourcePath(__FILE__), __LINE__, __func__, __VA_ARGS__)
#define LOG_INFO(log_class, ...) \
::Log::FmtLogMessage(::Log::Class::log_class, ::Log::Level::Info, __FILE__, __LINE__, \
__func__, __VA_ARGS__)
::Log::FmtLogMessage(::Log::Class::log_class, ::Log::Level::Info, \
::Log::TrimSourcePath(__FILE__), __LINE__, __func__, __VA_ARGS__)
#define LOG_WARNING(log_class, ...) \
::Log::FmtLogMessage(::Log::Class::log_class, ::Log::Level::Warning, __FILE__, __LINE__, \
__func__, __VA_ARGS__)
::Log::FmtLogMessage(::Log::Class::log_class, ::Log::Level::Warning, \
::Log::TrimSourcePath(__FILE__), __LINE__, __func__, __VA_ARGS__)
#define LOG_ERROR(log_class, ...) \
::Log::FmtLogMessage(::Log::Class::log_class, ::Log::Level::Error, __FILE__, __LINE__, \
__func__, __VA_ARGS__)
::Log::FmtLogMessage(::Log::Class::log_class, ::Log::Level::Error, \
::Log::TrimSourcePath(__FILE__), __LINE__, __func__, __VA_ARGS__)
#define LOG_CRITICAL(log_class, ...) \
::Log::FmtLogMessage(::Log::Class::log_class, ::Log::Level::Critical, __FILE__, __LINE__, \
__func__, __VA_ARGS__)
::Log::FmtLogMessage(::Log::Class::log_class, ::Log::Level::Critical, \
::Log::TrimSourcePath(__FILE__), __LINE__, __func__, __VA_ARGS__)

View File

@@ -223,26 +223,4 @@ std::u16string UTF16StringFromFixedZeroTerminatedBuffer(std::u16string_view buff
return std::u16string(buffer.begin(), buffer.begin() + len);
}
const char* TrimSourcePath(const char* path, const char* root) {
const char* p = path;
while (*p != '\0') {
const char* next_slash = p;
while (*next_slash != '\0' && *next_slash != '/' && *next_slash != '\\') {
++next_slash;
}
bool is_src = Common::ComparePartialString(p, next_slash, root);
p = next_slash;
if (*p != '\0') {
++p;
}
if (is_src) {
path = p;
}
}
return path;
}
} // namespace Common

View File

@@ -44,20 +44,6 @@ template class Field<std::string>;
template class Field<const char*>;
template class Field<std::chrono::microseconds>;
#ifdef ARCHITECTURE_x86_64
static const char* CpuVendorToStr(Common::CPUVendor vendor) {
switch (vendor) {
case Common::CPUVendor::INTEL:
return "Intel";
case Common::CPUVendor::AMD:
return "Amd";
case Common::CPUVendor::OTHER:
return "Other";
}
UNREACHABLE();
}
#endif
void AppendBuildInfo(FieldCollection& fc) {
const bool is_git_dirty{std::strstr(Common::g_scm_desc, "dirty") != nullptr};
fc.AddField(FieldType::App, "Git_IsDirty", is_git_dirty);
@@ -71,7 +57,6 @@ void AppendCPUInfo(FieldCollection& fc) {
#ifdef ARCHITECTURE_x86_64
fc.AddField(FieldType::UserSystem, "CPU_Model", Common::GetCPUCaps().cpu_string);
fc.AddField(FieldType::UserSystem, "CPU_BrandString", Common::GetCPUCaps().brand_string);
fc.AddField(FieldType::UserSystem, "CPU_Vendor", CpuVendorToStr(Common::GetCPUCaps().vendor));
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_AES", Common::GetCPUCaps().aes);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_AVX", Common::GetCPUCaps().avx);
fc.AddField(FieldType::UserSystem, "CPU_Extension_x64_AVX2", Common::GetCPUCaps().avx2);

View File

@@ -28,6 +28,15 @@ public:
is_set = false;
}
template <class Duration>
bool WaitFor(const std::chrono::duration<Duration>& time) {
std::unique_lock lk{mutex};
if (!condvar.wait_for(lk, time, [this] { return is_set; }))
return false;
is_set = false;
return true;
}
template <class Clock, class Duration>
bool WaitUntil(const std::chrono::time_point<Clock, Duration>& time) {
std::unique_lock lk{mutex};

View File

@@ -3,8 +3,6 @@
// Refer to the license.txt file included.
#include <cstring>
#include <string>
#include <thread>
#include "common/common_types.h"
#include "common/x64/cpu_detect.h"
@@ -51,8 +49,6 @@ namespace Common {
static CPUCaps Detect() {
CPUCaps caps = {};
caps.num_cores = std::thread::hardware_concurrency();
// Assumes the CPU supports the CPUID instruction. Those that don't would likely not support
// yuzu at all anyway
@@ -70,12 +66,6 @@ static CPUCaps Detect() {
__cpuid(cpu_id, 0x80000000);
u32 max_ex_fn = cpu_id[0];
if (!strcmp(caps.brand_string, "GenuineIntel"))
caps.vendor = CPUVendor::INTEL;
else if (!strcmp(caps.brand_string, "AuthenticAMD"))
caps.vendor = CPUVendor::AMD;
else
caps.vendor = CPUVendor::OTHER;
// Set reasonable default brand string even if brand string not available
strcpy(caps.cpu_string, caps.brand_string);
@@ -96,15 +86,9 @@ static CPUCaps Detect() {
caps.sse4_1 = true;
if ((cpu_id[2] >> 20) & 1)
caps.sse4_2 = true;
if ((cpu_id[2] >> 22) & 1)
caps.movbe = true;
if ((cpu_id[2] >> 25) & 1)
caps.aes = true;
if ((cpu_id[3] >> 24) & 1) {
caps.fxsave_fxrstor = true;
}
// AVX support requires 3 separate checks:
// - Is the AVX bit set in CPUID?
// - Is the XSAVE bit set in CPUID?
@@ -129,8 +113,6 @@ static CPUCaps Detect() {
}
}
caps.flush_to_zero = caps.sse;
if (max_ex_fn >= 0x80000004) {
// Extract CPU model string
__cpuid(cpu_id, 0x80000002);
@@ -144,14 +126,8 @@ static CPUCaps Detect() {
if (max_ex_fn >= 0x80000001) {
// Check for more features
__cpuid(cpu_id, 0x80000001);
if (cpu_id[2] & 1)
caps.lahf_sahf_64 = true;
if ((cpu_id[2] >> 5) & 1)
caps.lzcnt = true;
if ((cpu_id[2] >> 16) & 1)
caps.fma4 = true;
if ((cpu_id[3] >> 29) & 1)
caps.long_mode = true;
}
return caps;
@@ -162,48 +138,4 @@ const CPUCaps& GetCPUCaps() {
return caps;
}
std::string GetCPUCapsString() {
auto caps = GetCPUCaps();
std::string sum(caps.cpu_string);
sum += " (";
sum += caps.brand_string;
sum += ")";
if (caps.sse)
sum += ", SSE";
if (caps.sse2) {
sum += ", SSE2";
if (!caps.flush_to_zero)
sum += " (without DAZ)";
}
if (caps.sse3)
sum += ", SSE3";
if (caps.ssse3)
sum += ", SSSE3";
if (caps.sse4_1)
sum += ", SSE4.1";
if (caps.sse4_2)
sum += ", SSE4.2";
if (caps.avx)
sum += ", AVX";
if (caps.avx2)
sum += ", AVX2";
if (caps.bmi1)
sum += ", BMI1";
if (caps.bmi2)
sum += ", BMI2";
if (caps.fma)
sum += ", FMA";
if (caps.aes)
sum += ", AES";
if (caps.movbe)
sum += ", MOVBE";
if (caps.long_mode)
sum += ", 64-bit support";
return sum;
}
} // namespace Common

View File

@@ -4,23 +4,12 @@
#pragma once
#include <string>
namespace Common {
/// x86/x64 CPU vendors that may be detected by this module
enum class CPUVendor {
INTEL,
AMD,
OTHER,
};
/// x86/x64 CPU capabilities that may be detected by this module
struct CPUCaps {
CPUVendor vendor;
char cpu_string[0x21];
char brand_string[0x41];
int num_cores;
bool sse;
bool sse2;
bool sse3;
@@ -35,20 +24,6 @@ struct CPUCaps {
bool fma;
bool fma4;
bool aes;
// Support for the FXSAVE and FXRSTOR instructions
bool fxsave_fxrstor;
bool movbe;
// This flag indicates that the hardware supports some mode in which denormal inputs and outputs
// are automatically set to (signed) zero.
bool flush_to_zero;
// Support for LAHF and SAHF instructions in 64-bit mode
bool lahf_sahf_64;
bool long_mode;
};
/**
@@ -57,10 +32,4 @@ struct CPUCaps {
*/
const CPUCaps& GetCPUCaps();
/**
* Gets a string summary of the name and supported capabilities of the host CPU
* @return String summary
*/
std::string GetCPUCapsString();
} // namespace Common


@@ -15,14 +15,14 @@ add_library(core STATIC
constants.h
core.cpp
core.h
core_cpu.cpp
core_cpu.h
core_manager.cpp
core_manager.h
core_timing.cpp
core_timing.h
core_timing_util.cpp
core_timing_util.h
cpu_core_manager.cpp
cpu_core_manager.h
cpu_manager.cpp
cpu_manager.h
crypto/aes_util.cpp
crypto/aes_util.h
crypto/encryption_layer.cpp
@@ -158,6 +158,8 @@ add_library(core STATIC
hle/kernel/mutex.h
hle/kernel/object.cpp
hle/kernel/object.h
hle/kernel/physical_core.cpp
hle/kernel/physical_core.h
hle/kernel/process.cpp
hle/kernel/process.h
hle/kernel/process_capability.cpp


@@ -10,11 +10,12 @@
#include "common/microprofile.h"
#include "core/arm/dynarmic/arm_dynarmic.h"
#include "core/core.h"
#include "core/core_cpu.h"
#include "core/core_manager.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/gdbstub/gdbstub.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/svc.h"
#include "core/hle/kernel/vm_manager.h"
#include "core/memory.h"
@@ -87,7 +88,7 @@ public:
if (GDBStub::IsServerEnabled()) {
parent.jit->HaltExecution();
parent.SetPC(pc);
Kernel::Thread* thread = Kernel::GetCurrentThread();
Kernel::Thread* const thread = parent.system.CurrentScheduler().GetCurrentThread();
parent.SaveContext(thread->GetContext());
GDBStub::Break();
GDBStub::SendTrap(thread, 5);
@@ -141,6 +142,7 @@ std::unique_ptr<Dynarmic::A64::Jit> ARM_Dynarmic::MakeJit(Common::PageTable& pag
config.page_table = reinterpret_cast<void**>(page_table.pointers.data());
config.page_table_address_space_bits = address_space_bits;
config.silently_mirror_page_table = false;
config.absolute_offset_page_table = true;
// Multi-process state
config.processor_id = core_index;


@@ -2,10 +2,24 @@
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#ifdef ARCHITECTURE_x86_64
#include "core/arm/dynarmic/arm_dynarmic.h"
#endif
#include "core/arm/exclusive_monitor.h"
#include "core/memory.h"
namespace Core {
ExclusiveMonitor::~ExclusiveMonitor() = default;
std::unique_ptr<Core::ExclusiveMonitor> MakeExclusiveMonitor(Memory::Memory& memory,
std::size_t num_cores) {
#ifdef ARCHITECTURE_x86_64
return std::make_unique<Core::DynarmicExclusiveMonitor>(memory, num_cores);
#else
// TODO(merry): Passthrough exclusive monitor
return nullptr;
#endif
}
} // namespace Core


@@ -4,8 +4,14 @@
#pragma once
#include <memory>
#include "common/common_types.h"
namespace Memory {
class Memory;
}
namespace Core {
class ExclusiveMonitor {
@@ -22,4 +28,7 @@ public:
virtual bool ExclusiveWrite128(std::size_t core_index, VAddr vaddr, u128 value) = 0;
};
std::unique_ptr<Core::ExclusiveMonitor> MakeExclusiveMonitor(Memory::Memory& memory,
std::size_t num_cores);
} // namespace Core


@@ -9,6 +9,7 @@
#include "core/arm/unicorn/arm_unicorn.h"
#include "core/core.h"
#include "core/core_timing.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/svc.h"
namespace Core {
@@ -177,7 +178,7 @@ void ARM_Unicorn::ExecuteInstructions(std::size_t num_instructions) {
uc_reg_write(uc, UC_ARM64_REG_PC, &last_bkpt.address);
}
Kernel::Thread* thread = Kernel::GetCurrentThread();
Kernel::Thread* const thread = system.CurrentScheduler().GetCurrentThread();
SaveContext(thread->GetContext());
if (last_bkpt_hit || GDBStub::IsMemoryBreak() || GDBStub::GetCpuStepFlag()) {
last_bkpt_hit = false;


@@ -11,9 +11,9 @@
#include "common/string_util.h"
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/core_cpu.h"
#include "core/core_manager.h"
#include "core/core_timing.h"
#include "core/cpu_core_manager.h"
#include "core/cpu_manager.h"
#include "core/file_sys/bis_factory.h"
#include "core/file_sys/card_image.h"
#include "core/file_sys/mode.h"
@@ -28,6 +28,7 @@
#include "core/hardware_interrupt_manager.h"
#include "core/hle/kernel/client_port.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/physical_core.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/thread.h"
@@ -113,16 +114,25 @@ FileSys::VirtualFile GetGameFileFromPath(const FileSys::VirtualFilesystem& vfs,
struct System::Impl {
explicit Impl(System& system)
: kernel{system}, fs_controller{system}, memory{system},
cpu_core_manager{system}, reporter{system}, applet_manager{system} {}
cpu_manager{system}, reporter{system}, applet_manager{system} {}
Cpu& CurrentCpuCore() {
return cpu_core_manager.GetCurrentCore();
CoreManager& CurrentCoreManager() {
return cpu_manager.GetCurrentCoreManager();
}
Kernel::PhysicalCore& CurrentPhysicalCore() {
const auto index = cpu_manager.GetActiveCoreIndex();
return kernel.PhysicalCore(index);
}
Kernel::PhysicalCore& GetPhysicalCore(std::size_t index) {
return kernel.PhysicalCore(index);
}
ResultStatus RunLoop(bool tight_loop) {
status = ResultStatus::Success;
cpu_core_manager.RunLoop(tight_loop);
cpu_manager.RunLoop(tight_loop);
return status;
}
@@ -131,8 +141,8 @@ struct System::Impl {
LOG_DEBUG(HW_Memory, "initialized OK");
core_timing.Initialize();
cpu_core_manager.Initialize();
kernel.Initialize();
cpu_manager.Initialize();
const auto current_time = std::chrono::duration_cast<std::chrono::seconds>(
std::chrono::system_clock::now().time_since_epoch());
@@ -205,7 +215,6 @@ struct System::Impl {
// Main process has been loaded and been made current.
// Begin GPU and CPU execution.
gpu_core->Start();
cpu_core_manager.StartThreads();
// Initialize cheat engine
if (cheat_engine) {
@@ -272,7 +281,7 @@ struct System::Impl {
gpu_core.reset();
// Close all CPU/threading state
cpu_core_manager.Shutdown();
cpu_manager.Shutdown();
// Shutdown kernel and core timing
kernel.Shutdown();
@@ -342,7 +351,7 @@ struct System::Impl {
std::unique_ptr<Tegra::GPU> gpu_core;
std::unique_ptr<Hardware::InterruptManager> interrupt_manager;
Memory::Memory memory;
CpuCoreManager cpu_core_manager;
CpuManager cpu_manager;
bool is_powered_on = false;
bool exit_lock = false;
@@ -377,12 +386,12 @@ struct System::Impl {
System::System() : impl{std::make_unique<Impl>(*this)} {}
System::~System() = default;
Cpu& System::CurrentCpuCore() {
return impl->CurrentCpuCore();
CoreManager& System::CurrentCoreManager() {
return impl->CurrentCoreManager();
}
const Cpu& System::CurrentCpuCore() const {
return impl->CurrentCpuCore();
const CoreManager& System::CurrentCoreManager() const {
return impl->CurrentCoreManager();
}
System::ResultStatus System::RunLoop(bool tight_loop) {
@@ -394,7 +403,7 @@ System::ResultStatus System::SingleStep() {
}
void System::InvalidateCpuInstructionCaches() {
impl->cpu_core_manager.InvalidateAllInstructionCaches();
impl->kernel.InvalidateAllInstructionCaches();
}
System::ResultStatus System::Load(Frontend::EmuWindow& emu_window, const std::string& filepath) {
@@ -406,13 +415,11 @@ bool System::IsPoweredOn() const {
}
void System::PrepareReschedule() {
CurrentCpuCore().PrepareReschedule();
impl->CurrentPhysicalCore().Stop();
}
void System::PrepareReschedule(const u32 core_index) {
if (core_index < GlobalScheduler().CpuCoresCount()) {
CpuCore(core_index).PrepareReschedule();
}
impl->kernel.PrepareReschedule(core_index);
}
PerfStatsResults System::GetAndResetPerfStats() {
@@ -428,31 +435,31 @@ const TelemetrySession& System::TelemetrySession() const {
}
ARM_Interface& System::CurrentArmInterface() {
return CurrentCpuCore().ArmInterface();
return impl->CurrentPhysicalCore().ArmInterface();
}
const ARM_Interface& System::CurrentArmInterface() const {
return CurrentCpuCore().ArmInterface();
return impl->CurrentPhysicalCore().ArmInterface();
}
std::size_t System::CurrentCoreIndex() const {
return CurrentCpuCore().CoreIndex();
return impl->cpu_manager.GetActiveCoreIndex();
}
Kernel::Scheduler& System::CurrentScheduler() {
return CurrentCpuCore().Scheduler();
return impl->CurrentPhysicalCore().Scheduler();
}
const Kernel::Scheduler& System::CurrentScheduler() const {
return CurrentCpuCore().Scheduler();
return impl->CurrentPhysicalCore().Scheduler();
}
Kernel::Scheduler& System::Scheduler(std::size_t core_index) {
return CpuCore(core_index).Scheduler();
return impl->GetPhysicalCore(core_index).Scheduler();
}
const Kernel::Scheduler& System::Scheduler(std::size_t core_index) const {
return CpuCore(core_index).Scheduler();
return impl->GetPhysicalCore(core_index).Scheduler();
}
/// Gets the global scheduler
@@ -474,28 +481,28 @@ const Kernel::Process* System::CurrentProcess() const {
}
ARM_Interface& System::ArmInterface(std::size_t core_index) {
return CpuCore(core_index).ArmInterface();
return impl->GetPhysicalCore(core_index).ArmInterface();
}
const ARM_Interface& System::ArmInterface(std::size_t core_index) const {
return CpuCore(core_index).ArmInterface();
return impl->GetPhysicalCore(core_index).ArmInterface();
}
Cpu& System::CpuCore(std::size_t core_index) {
return impl->cpu_core_manager.GetCore(core_index);
CoreManager& System::GetCoreManager(std::size_t core_index) {
return impl->cpu_manager.GetCoreManager(core_index);
}
const Cpu& System::CpuCore(std::size_t core_index) const {
const CoreManager& System::GetCoreManager(std::size_t core_index) const {
ASSERT(core_index < NUM_CPU_CORES);
return impl->cpu_core_manager.GetCore(core_index);
return impl->cpu_manager.GetCoreManager(core_index);
}
ExclusiveMonitor& System::Monitor() {
return impl->cpu_core_manager.GetExclusiveMonitor();
return impl->kernel.GetExclusiveMonitor();
}
const ExclusiveMonitor& System::Monitor() const {
return impl->cpu_core_manager.GetExclusiveMonitor();
return impl->kernel.GetExclusiveMonitor();
}
Memory::Memory& System::Memory() {


@@ -93,7 +93,7 @@ class Memory;
namespace Core {
class ARM_Interface;
class Cpu;
class CoreManager;
class ExclusiveMonitor;
class FrameLimiter;
class PerfStats;
@@ -218,10 +218,10 @@ public:
const ARM_Interface& ArmInterface(std::size_t core_index) const;
/// Gets a CPU interface to the CPU core with the specified index
Cpu& CpuCore(std::size_t core_index);
CoreManager& GetCoreManager(std::size_t core_index);
/// Gets a CPU interface to the CPU core with the specified index
const Cpu& CpuCore(std::size_t core_index) const;
const CoreManager& GetCoreManager(std::size_t core_index) const;
/// Gets a reference to the exclusive monitor
ExclusiveMonitor& Monitor();
@@ -364,10 +364,10 @@ private:
System();
/// Returns the currently running CPU core
Cpu& CurrentCpuCore();
CoreManager& CurrentCoreManager();
/// Returns the currently running CPU core
const Cpu& CurrentCpuCore() const;
const CoreManager& CurrentCoreManager() const;
/**
* Initialize the emulated system.


@@ -1,127 +0,0 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <condition_variable>
#include <mutex>
#include "common/logging/log.h"
#ifdef ARCHITECTURE_x86_64
#include "core/arm/dynarmic/arm_dynarmic.h"
#endif
#include "core/arm/exclusive_monitor.h"
#include "core/arm/unicorn/arm_unicorn.h"
#include "core/core.h"
#include "core/core_cpu.h"
#include "core/core_timing.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/lock.h"
#include "core/settings.h"
namespace Core {
void CpuBarrier::NotifyEnd() {
std::unique_lock lock{mutex};
end = true;
condition.notify_all();
}
bool CpuBarrier::Rendezvous() {
if (!Settings::values.use_multi_core) {
// Meaningless when running in single-core mode
return true;
}
if (!end) {
std::unique_lock lock{mutex};
--cores_waiting;
if (!cores_waiting) {
cores_waiting = NUM_CPU_CORES;
condition.notify_all();
return true;
}
condition.wait(lock);
return true;
}
return false;
}
Cpu::Cpu(System& system, ExclusiveMonitor& exclusive_monitor, CpuBarrier& cpu_barrier,
std::size_t core_index)
: cpu_barrier{cpu_barrier}, global_scheduler{system.GlobalScheduler()},
core_timing{system.CoreTiming()}, core_index{core_index} {
#ifdef ARCHITECTURE_x86_64
arm_interface = std::make_unique<ARM_Dynarmic>(system, exclusive_monitor, core_index);
#else
arm_interface = std::make_unique<ARM_Unicorn>(system);
LOG_WARNING(Core, "CPU JIT requested, but Dynarmic not available");
#endif
scheduler = std::make_unique<Kernel::Scheduler>(system, *arm_interface, core_index);
}
Cpu::~Cpu() = default;
std::unique_ptr<ExclusiveMonitor> Cpu::MakeExclusiveMonitor(
[[maybe_unused]] Memory::Memory& memory, [[maybe_unused]] std::size_t num_cores) {
#ifdef ARCHITECTURE_x86_64
return std::make_unique<DynarmicExclusiveMonitor>(memory, num_cores);
#else
// TODO(merry): Passthrough exclusive monitor
return nullptr;
#endif
}
void Cpu::RunLoop(bool tight_loop) {
// Wait for all other CPU cores to complete the previous slice, such that they run in lock-step
if (!cpu_barrier.Rendezvous()) {
// If rendezvous failed, session has been killed
return;
}
Reschedule();
// If we don't have a currently active thread then don't execute instructions,
// instead advance to the next event and try to yield to the next thread
if (Kernel::GetCurrentThread() == nullptr) {
LOG_TRACE(Core, "Core-{} idling", core_index);
core_timing.Idle();
} else {
if (tight_loop) {
arm_interface->Run();
} else {
arm_interface->Step();
}
// We are stopping a run, exclusive state must be cleared
arm_interface->ClearExclusiveState();
}
core_timing.Advance();
Reschedule();
}
void Cpu::SingleStep() {
return RunLoop(false);
}
void Cpu::PrepareReschedule() {
arm_interface->PrepareReschedule();
}
void Cpu::Reschedule() {
// Lock the global kernel mutex when we manipulate the HLE state
std::lock_guard lock(HLE::g_hle_lock);
global_scheduler.SelectThread(core_index);
scheduler->TryDoContextSwitch();
}
void Cpu::Shutdown() {
scheduler->Shutdown();
}
} // namespace Core


@@ -1,120 +0,0 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <atomic>
#include <condition_variable>
#include <cstddef>
#include <memory>
#include <mutex>
#include "common/common_types.h"
namespace Kernel {
class GlobalScheduler;
class Scheduler;
} // namespace Kernel
namespace Core {
class System;
}
namespace Core::Timing {
class CoreTiming;
}
namespace Memory {
class Memory;
}
namespace Core {
class ARM_Interface;
class ExclusiveMonitor;
constexpr unsigned NUM_CPU_CORES{4};
class CpuBarrier {
public:
bool IsAlive() const {
return !end;
}
void NotifyEnd();
bool Rendezvous();
private:
unsigned cores_waiting{NUM_CPU_CORES};
std::mutex mutex;
std::condition_variable condition;
std::atomic<bool> end{};
};
class Cpu {
public:
Cpu(System& system, ExclusiveMonitor& exclusive_monitor, CpuBarrier& cpu_barrier,
std::size_t core_index);
~Cpu();
void RunLoop(bool tight_loop = true);
void SingleStep();
void PrepareReschedule();
ARM_Interface& ArmInterface() {
return *arm_interface;
}
const ARM_Interface& ArmInterface() const {
return *arm_interface;
}
Kernel::Scheduler& Scheduler() {
return *scheduler;
}
const Kernel::Scheduler& Scheduler() const {
return *scheduler;
}
bool IsMainCore() const {
return core_index == 0;
}
std::size_t CoreIndex() const {
return core_index;
}
void Shutdown();
/**
* Creates an exclusive monitor to handle exclusive reads/writes.
*
* @param memory The current memory subsystem that the monitor may wish
* to keep track of.
*
* @param num_cores The number of cores to assume about the CPU.
*
* @returns The constructed exclusive monitor instance, or nullptr if the current
* CPU backend is unable to use an exclusive monitor.
*/
static std::unique_ptr<ExclusiveMonitor> MakeExclusiveMonitor(Memory::Memory& memory,
std::size_t num_cores);
private:
void Reschedule();
std::unique_ptr<ARM_Interface> arm_interface;
CpuBarrier& cpu_barrier;
Kernel::GlobalScheduler& global_scheduler;
std::unique_ptr<Kernel::Scheduler> scheduler;
Timing::CoreTiming& core_timing;
std::atomic<bool> reschedule_pending = false;
std::size_t core_index;
};
} // namespace Core

src/core/core_manager.cpp

@@ -0,0 +1,70 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <condition_variable>
#include <mutex>
#include "common/logging/log.h"
#ifdef ARCHITECTURE_x86_64
#include "core/arm/dynarmic/arm_dynarmic.h"
#endif
#include "core/arm/exclusive_monitor.h"
#include "core/arm/unicorn/arm_unicorn.h"
#include "core/core.h"
#include "core/core_manager.h"
#include "core/core_timing.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/physical_core.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/thread.h"
#include "core/hle/lock.h"
#include "core/settings.h"
namespace Core {
CoreManager::CoreManager(System& system, std::size_t core_index)
: global_scheduler{system.GlobalScheduler()}, physical_core{system.Kernel().PhysicalCore(
core_index)},
core_timing{system.CoreTiming()}, core_index{core_index} {}
CoreManager::~CoreManager() = default;
void CoreManager::RunLoop(bool tight_loop) {
Reschedule();
// If we don't have a currently active thread then don't execute instructions,
// instead advance to the next event and try to yield to the next thread
if (Kernel::GetCurrentThread() == nullptr) {
LOG_TRACE(Core, "Core-{} idling", core_index);
core_timing.Idle();
} else {
if (tight_loop) {
physical_core.Run();
} else {
physical_core.Step();
}
}
core_timing.Advance();
Reschedule();
}
void CoreManager::SingleStep() {
return RunLoop(false);
}
void CoreManager::PrepareReschedule() {
physical_core.Stop();
}
void CoreManager::Reschedule() {
// Lock the global kernel mutex when we manipulate the HLE state
std::lock_guard lock(HLE::g_hle_lock);
global_scheduler.SelectThread(core_index);
physical_core.Scheduler().TryDoContextSwitch();
}
} // namespace Core

src/core/core_manager.h

@@ -0,0 +1,63 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <atomic>
#include <cstddef>
#include <memory>
#include "common/common_types.h"
namespace Kernel {
class GlobalScheduler;
class PhysicalCore;
} // namespace Kernel
namespace Core {
class System;
}
namespace Core::Timing {
class CoreTiming;
}
namespace Memory {
class Memory;
}
namespace Core {
constexpr unsigned NUM_CPU_CORES{4};
class CoreManager {
public:
CoreManager(System& system, std::size_t core_index);
~CoreManager();
void RunLoop(bool tight_loop = true);
void SingleStep();
void PrepareReschedule();
bool IsMainCore() const {
return core_index == 0;
}
std::size_t CoreIndex() const {
return core_index;
}
private:
void Reschedule();
Kernel::GlobalScheduler& global_scheduler;
Kernel::PhysicalCore& physical_core;
Timing::CoreTiming& core_timing;
std::atomic<bool> reschedule_pending = false;
std::size_t core_index;
};
} // namespace Core


@@ -1,152 +0,0 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/assert.h"
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/core_cpu.h"
#include "core/core_timing.h"
#include "core/cpu_core_manager.h"
#include "core/gdbstub/gdbstub.h"
#include "core/settings.h"
namespace Core {
namespace {
void RunCpuCore(const System& system, Cpu& cpu_state) {
while (system.IsPoweredOn()) {
cpu_state.RunLoop(true);
}
}
} // Anonymous namespace
CpuCoreManager::CpuCoreManager(System& system) : system{system} {}
CpuCoreManager::~CpuCoreManager() = default;
void CpuCoreManager::Initialize() {
barrier = std::make_unique<CpuBarrier>();
exclusive_monitor = Cpu::MakeExclusiveMonitor(system.Memory(), cores.size());
for (std::size_t index = 0; index < cores.size(); ++index) {
cores[index] = std::make_unique<Cpu>(system, *exclusive_monitor, *barrier, index);
}
}
void CpuCoreManager::StartThreads() {
// Create threads for CPU cores 1-3, and build thread_to_cpu map
// CPU core 0 is run on the main thread
thread_to_cpu[std::this_thread::get_id()] = cores[0].get();
if (!Settings::values.use_multi_core) {
return;
}
for (std::size_t index = 0; index < core_threads.size(); ++index) {
core_threads[index] = std::make_unique<std::thread>(RunCpuCore, std::cref(system),
std::ref(*cores[index + 1]));
thread_to_cpu[core_threads[index]->get_id()] = cores[index + 1].get();
}
}
void CpuCoreManager::Shutdown() {
barrier->NotifyEnd();
if (Settings::values.use_multi_core) {
for (auto& thread : core_threads) {
thread->join();
thread.reset();
}
}
thread_to_cpu.clear();
for (auto& cpu_core : cores) {
cpu_core->Shutdown();
cpu_core.reset();
}
exclusive_monitor.reset();
barrier.reset();
}
Cpu& CpuCoreManager::GetCore(std::size_t index) {
return *cores.at(index);
}
const Cpu& CpuCoreManager::GetCore(std::size_t index) const {
return *cores.at(index);
}
ExclusiveMonitor& CpuCoreManager::GetExclusiveMonitor() {
return *exclusive_monitor;
}
const ExclusiveMonitor& CpuCoreManager::GetExclusiveMonitor() const {
return *exclusive_monitor;
}
Cpu& CpuCoreManager::GetCurrentCore() {
if (Settings::values.use_multi_core) {
const auto& search = thread_to_cpu.find(std::this_thread::get_id());
ASSERT(search != thread_to_cpu.end());
ASSERT(search->second);
return *search->second;
}
// Otherwise, use single-threaded mode active_core variable
return *cores[active_core];
}
const Cpu& CpuCoreManager::GetCurrentCore() const {
if (Settings::values.use_multi_core) {
const auto& search = thread_to_cpu.find(std::this_thread::get_id());
ASSERT(search != thread_to_cpu.end());
ASSERT(search->second);
return *search->second;
}
// Otherwise, use single-threaded mode active_core variable
return *cores[active_core];
}
void CpuCoreManager::RunLoop(bool tight_loop) {
// Update thread_to_cpu in case Core 0 is run from a different host thread
thread_to_cpu[std::this_thread::get_id()] = cores[0].get();
if (GDBStub::IsServerEnabled()) {
GDBStub::HandlePacket();
// If the loop is halted and we want to step, use a tiny (1) number of instructions to
// execute. Otherwise, get out of the loop function.
if (GDBStub::GetCpuHaltFlag()) {
if (GDBStub::GetCpuStepFlag()) {
tight_loop = false;
} else {
return;
}
}
}
auto& core_timing = system.CoreTiming();
core_timing.ResetRun();
bool keep_running{};
do {
keep_running = false;
for (active_core = 0; active_core < NUM_CPU_CORES; ++active_core) {
core_timing.SwitchContext(active_core);
if (core_timing.CanCurrentContextRun()) {
cores[active_core]->RunLoop(tight_loop);
}
keep_running |= core_timing.CanCurrentContextRun();
}
} while (keep_running);
if (GDBStub::IsServerEnabled()) {
GDBStub::SetCpuStepFlag(false);
}
}
void CpuCoreManager::InvalidateAllInstructionCaches() {
for (auto& cpu : cores) {
cpu->ArmInterface().ClearInstructionCache();
}
}
} // namespace Core


@@ -1,62 +0,0 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <array>
#include <map>
#include <memory>
#include <thread>
namespace Core {
class Cpu;
class CpuBarrier;
class ExclusiveMonitor;
class System;
class CpuCoreManager {
public:
explicit CpuCoreManager(System& system);
CpuCoreManager(const CpuCoreManager&) = delete;
CpuCoreManager(CpuCoreManager&&) = delete;
~CpuCoreManager();
CpuCoreManager& operator=(const CpuCoreManager&) = delete;
CpuCoreManager& operator=(CpuCoreManager&&) = delete;
void Initialize();
void StartThreads();
void Shutdown();
Cpu& GetCore(std::size_t index);
const Cpu& GetCore(std::size_t index) const;
Cpu& GetCurrentCore();
const Cpu& GetCurrentCore() const;
ExclusiveMonitor& GetExclusiveMonitor();
const ExclusiveMonitor& GetExclusiveMonitor() const;
void RunLoop(bool tight_loop);
void InvalidateAllInstructionCaches();
private:
static constexpr std::size_t NUM_CPU_CORES = 4;
std::unique_ptr<ExclusiveMonitor> exclusive_monitor;
std::unique_ptr<CpuBarrier> barrier;
std::array<std::unique_ptr<Cpu>, NUM_CPU_CORES> cores;
std::array<std::unique_ptr<std::thread>, NUM_CPU_CORES - 1> core_threads;
std::size_t active_core{}; ///< Active core, only used in single thread mode
/// Map of guest threads to CPU cores
std::map<std::thread::id, Cpu*> thread_to_cpu;
System& system;
};
} // namespace Core

src/core/cpu_manager.cpp

@@ -0,0 +1,81 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/core_manager.h"
#include "core/core_timing.h"
#include "core/cpu_manager.h"
#include "core/gdbstub/gdbstub.h"
namespace Core {
CpuManager::CpuManager(System& system) : system{system} {}
CpuManager::~CpuManager() = default;
void CpuManager::Initialize() {
for (std::size_t index = 0; index < core_managers.size(); ++index) {
core_managers[index] = std::make_unique<CoreManager>(system, index);
}
}
void CpuManager::Shutdown() {
for (auto& cpu_core : core_managers) {
cpu_core.reset();
}
}
CoreManager& CpuManager::GetCoreManager(std::size_t index) {
return *core_managers.at(index);
}
const CoreManager& CpuManager::GetCoreManager(std::size_t index) const {
return *core_managers.at(index);
}
CoreManager& CpuManager::GetCurrentCoreManager() {
// Otherwise, use single-threaded mode active_core variable
return *core_managers[active_core];
}
const CoreManager& CpuManager::GetCurrentCoreManager() const {
// Otherwise, use single-threaded mode active_core variable
return *core_managers[active_core];
}
void CpuManager::RunLoop(bool tight_loop) {
if (GDBStub::IsServerEnabled()) {
GDBStub::HandlePacket();
// If the loop is halted and we want to step, use a tiny (1) number of instructions to
// execute. Otherwise, get out of the loop function.
if (GDBStub::GetCpuHaltFlag()) {
if (GDBStub::GetCpuStepFlag()) {
tight_loop = false;
} else {
return;
}
}
}
auto& core_timing = system.CoreTiming();
core_timing.ResetRun();
bool keep_running{};
do {
keep_running = false;
for (active_core = 0; active_core < NUM_CPU_CORES; ++active_core) {
core_timing.SwitchContext(active_core);
if (core_timing.CanCurrentContextRun()) {
core_managers[active_core]->RunLoop(tight_loop);
}
keep_running |= core_timing.CanCurrentContextRun();
}
} while (keep_running);
if (GDBStub::IsServerEnabled()) {
GDBStub::SetCpuStepFlag(false);
}
}
} // namespace Core

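CpuManager::RunLoop above drives all guest cores from one host thread: it sweeps the cores in order, switches the core-timing context, runs a core only while its context still has work, and repeats the sweep until no core can make progress. A reduced sketch of that round-robin shape (the types below are invented stand-ins, not this changeset's classes):

#include <array>
#include <cstddef>
#include <cstdio>

struct SketchCore {
    int remaining_slices = 0;
    bool CanRun() const { return remaining_slices > 0; }
    void RunSlice() { --remaining_slices; }
};

int main() {
    std::array<SketchCore, 4> cores{{{3}, {1}, {0}, {2}}};
    bool keep_running = false;
    do {
        keep_running = false;
        for (std::size_t active_core = 0; active_core < cores.size(); ++active_core) {
            if (cores[active_core].CanRun()) {
                // Stands in for core_managers[active_core]->RunLoop(tight_loop) above.
                cores[active_core].RunSlice();
            }
            keep_running |= cores[active_core].CanRun();
        }
    } while (keep_running);
    std::puts("no core has runnable work left");
    return 0;
}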
src/core/cpu_manager.h

@@ -0,0 +1,50 @@
// Copyright 2018 yuzu emulator team
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <array>
#include <memory>
namespace Core {
class CoreManager;
class System;
class CpuManager {
public:
explicit CpuManager(System& system);
CpuManager(const CpuManager&) = delete;
CpuManager(CpuManager&&) = delete;
~CpuManager();
CpuManager& operator=(const CpuManager&) = delete;
CpuManager& operator=(CpuManager&&) = delete;
void Initialize();
void Shutdown();
CoreManager& GetCoreManager(std::size_t index);
const CoreManager& GetCoreManager(std::size_t index) const;
CoreManager& GetCurrentCoreManager();
const CoreManager& GetCurrentCoreManager() const;
std::size_t GetActiveCoreIndex() const {
return active_core;
}
void RunLoop(bool tight_loop);
private:
static constexpr std::size_t NUM_CPU_CORES = 4;
std::array<std::unique_ptr<CoreManager>, NUM_CPU_CORES> core_managers;
std::size_t active_core{}; ///< Active core, only used in single thread mode
System& system;
};
} // namespace Core


@@ -15,6 +15,13 @@
namespace Input {
enum class AnalogDirection : u8 {
RIGHT,
LEFT,
UP,
DOWN,
};
/// An abstract class template for an input device (a button, an analog input, etc.).
template <typename StatusType>
class InputDevice {
@@ -23,6 +30,9 @@ public:
virtual StatusType GetStatus() const {
return {};
}
virtual bool GetAnalogDirectionStatus(AnalogDirection direction) const {
return {};
}
};
/// An abstract class template for a factory that can create input devices.

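The new virtual above lets a frontend ask an analog device whether it is currently pushed past a threshold in one of the four directions; the npad changes further down consume exactly this query. A self-contained sketch of one possible backend implementation (the struct, the 0.5f threshold, and the sign convention are assumptions for illustration, not code from this changeset):

#include <cstdio>
#include <utility>

enum class AnalogDirection { RIGHT, LEFT, UP, DOWN };

struct SketchStick {
    float x = 0.0f;
    float y = 0.0f;

    std::pair<float, float> GetStatus() const {
        return {x, y};
    }

    bool GetAnalogDirectionStatus(AnalogDirection direction) const {
        constexpr float threshold = 0.5f; // assumed dead-zone boundary
        switch (direction) {
        case AnalogDirection::RIGHT:
            return x > threshold;
        case AnalogDirection::LEFT:
            return x < -threshold;
        case AnalogDirection::UP:
            return y > threshold; // sign conventions differ between input backends
        case AnalogDirection::DOWN:
            return y < -threshold;
        }
        return false;
    }
};

int main() {
    const SketchStick stick{0.8f, -0.1f};
    std::printf("right held: %d, up held: %d\n",
                stick.GetAnalogDirectionStatus(AnalogDirection::RIGHT),
                stick.GetAnalogDirectionStatus(AnalogDirection::UP));
    return 0;
}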

@@ -35,7 +35,7 @@
#include "common/swap.h"
#include "core/arm/arm_interface.h"
#include "core/core.h"
#include "core/core_cpu.h"
#include "core/core_manager.h"
#include "core/gdbstub/gdbstub.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/scheduler.h"


@@ -8,7 +8,6 @@
#include "common/assert.h"
#include "common/common_types.h"
#include "core/core.h"
#include "core/core_cpu.h"
#include "core/hle/kernel/address_arbiter.h"
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/scheduler.h"


@@ -3,13 +3,15 @@
// Refer to the license.txt file included.
#include <atomic>
#include <functional>
#include <memory>
#include <mutex>
#include <utility>
#include "common/assert.h"
#include "common/logging/log.h"
#include "core/arm/arm_interface.h"
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
@@ -17,6 +19,7 @@
#include "core/hle/kernel/errors.h"
#include "core/hle/kernel/handle_table.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/physical_core.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/resource_limit.h"
#include "core/hle/kernel/scheduler.h"
@@ -98,6 +101,7 @@ struct KernelCore::Impl {
void Initialize(KernelCore& kernel) {
Shutdown();
InitializePhysicalCores();
InitializeSystemResourceLimit(kernel);
InitializeThreads();
InitializePreemption();
@@ -121,6 +125,21 @@ struct KernelCore::Impl {
global_scheduler.Shutdown();
named_ports.clear();
for (auto& core : cores) {
core.Shutdown();
}
cores.clear();
exclusive_monitor.reset();
}
void InitializePhysicalCores() {
exclusive_monitor =
Core::MakeExclusiveMonitor(system.Memory(), global_scheduler.CpuCoresCount());
for (std::size_t i = 0; i < global_scheduler.CpuCoresCount(); i++) {
cores.emplace_back(system, i, *exclusive_monitor);
}
}
// Creates the default system resource limit
@@ -186,6 +205,9 @@ struct KernelCore::Impl {
/// the ConnectToPort SVC.
NamedPortTable named_ports;
std::unique_ptr<Core::ExclusiveMonitor> exclusive_monitor;
std::vector<Kernel::PhysicalCore> cores;
// System context
Core::System& system;
};
@@ -240,6 +262,34 @@ const Kernel::GlobalScheduler& KernelCore::GlobalScheduler() const {
return impl->global_scheduler;
}
Kernel::PhysicalCore& KernelCore::PhysicalCore(std::size_t id) {
return impl->cores[id];
}
const Kernel::PhysicalCore& KernelCore::PhysicalCore(std::size_t id) const {
return impl->cores[id];
}
Core::ExclusiveMonitor& KernelCore::GetExclusiveMonitor() {
return *impl->exclusive_monitor;
}
const Core::ExclusiveMonitor& KernelCore::GetExclusiveMonitor() const {
return *impl->exclusive_monitor;
}
void KernelCore::InvalidateAllInstructionCaches() {
for (std::size_t i = 0; i < impl->global_scheduler.CpuCoresCount(); i++) {
PhysicalCore(i).ArmInterface().ClearInstructionCache();
}
}
void KernelCore::PrepareReschedule(std::size_t id) {
if (id < impl->global_scheduler.CpuCoresCount()) {
impl->cores[id].Stop();
}
}
void KernelCore::AddNamedPort(std::string name, std::shared_ptr<ClientPort> port) {
impl->named_ports.emplace(std::move(name), std::move(port));
}


@@ -11,8 +11,9 @@
#include "core/hle/kernel/object.h"
namespace Core {
class ExclusiveMonitor;
class System;
}
} // namespace Core
namespace Core::Timing {
class CoreTiming;
@@ -25,6 +26,7 @@ class AddressArbiter;
class ClientPort;
class GlobalScheduler;
class HandleTable;
class PhysicalCore;
class Process;
class ResourceLimit;
class Thread;
@@ -84,6 +86,21 @@ public:
/// Gets the sole instance of the global scheduler
const Kernel::GlobalScheduler& GlobalScheduler() const;
/// Gets an instance of the respective physical CPU core.
Kernel::PhysicalCore& PhysicalCore(std::size_t id);
/// Gets an instance of the respective physical CPU core.
const Kernel::PhysicalCore& PhysicalCore(std::size_t id) const;
/// Stops execution of 'id' core, in order to reschedule a new thread.
void PrepareReschedule(std::size_t id);
Core::ExclusiveMonitor& GetExclusiveMonitor();
const Core::ExclusiveMonitor& GetExclusiveMonitor() const;
void InvalidateAllInstructionCaches();
/// Adds a port to the named port table
void AddNamedPort(std::string name, std::shared_ptr<ClientPort> port);


@@ -0,0 +1,51 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/logging/log.h"
#include "core/arm/arm_interface.h"
#ifdef ARCHITECTURE_x86_64
#include "core/arm/dynarmic/arm_dynarmic.h"
#endif
#include "core/arm/exclusive_monitor.h"
#include "core/arm/unicorn/arm_unicorn.h"
#include "core/core.h"
#include "core/hle/kernel/physical_core.h"
#include "core/hle/kernel/scheduler.h"
#include "core/hle/kernel/thread.h"
namespace Kernel {
PhysicalCore::PhysicalCore(Core::System& system, std::size_t id,
Core::ExclusiveMonitor& exclusive_monitor)
: core_index{id} {
#ifdef ARCHITECTURE_x86_64
arm_interface = std::make_unique<Core::ARM_Dynarmic>(system, exclusive_monitor, core_index);
#else
arm_interface = std::make_unique<Core::ARM_Unicorn>(system);
LOG_WARNING(Core, "CPU JIT requested, but Dynarmic not available");
#endif
scheduler = std::make_unique<Kernel::Scheduler>(system, *arm_interface, core_index);
}
PhysicalCore::~PhysicalCore() = default;
void PhysicalCore::Run() {
arm_interface->Run();
arm_interface->ClearExclusiveState();
}
void PhysicalCore::Step() {
arm_interface->Step();
}
void PhysicalCore::Stop() {
arm_interface->PrepareReschedule();
}
void PhysicalCore::Shutdown() {
scheduler->Shutdown();
}
} // namespace Kernel


@@ -0,0 +1,77 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <cstddef>
#include <memory>
namespace Kernel {
class Scheduler;
} // namespace Kernel
namespace Core {
class ARM_Interface;
class ExclusiveMonitor;
class System;
} // namespace Core
namespace Kernel {
class PhysicalCore {
public:
PhysicalCore(Core::System& system, std::size_t id, Core::ExclusiveMonitor& exclusive_monitor);
~PhysicalCore();
PhysicalCore(const PhysicalCore&) = delete;
PhysicalCore& operator=(const PhysicalCore&) = delete;
PhysicalCore(PhysicalCore&&) = default;
PhysicalCore& operator=(PhysicalCore&&) = default;
/// Execute current jit state
void Run();
/// Execute a single instruction in current jit.
void Step();
/// Stop JIT execution/exit
void Stop();
// Shutdown this physical core.
void Shutdown();
Core::ARM_Interface& ArmInterface() {
return *arm_interface;
}
const Core::ARM_Interface& ArmInterface() const {
return *arm_interface;
}
bool IsMainCore() const {
return core_index == 0;
}
bool IsSystemCore() const {
return core_index == 3;
}
std::size_t CoreIndex() const {
return core_index;
}
Kernel::Scheduler& Scheduler() {
return *scheduler;
}
const Kernel::Scheduler& Scheduler() const {
return *scheduler;
}
private:
std::size_t core_index;
std::unique_ptr<Core::ARM_Interface> arm_interface;
std::unique_ptr<Kernel::Scheduler> scheduler;
};
} // namespace Kernel


@@ -14,6 +14,9 @@ namespace Kernel {
// - Second to ensure all host backing memory used is aligned to 256 bytes due
// to strict alignment restrictions on GPU memory.
using PhysicalMemory = std::vector<u8, Common::AlignmentAllocator<u8, 256>>;
using PhysicalMemoryVector = std::vector<u8, Common::AlignmentAllocator<u8, 256>>;
class PhysicalMemory final : public PhysicalMemoryVector {
using PhysicalMemoryVector::PhysicalMemoryVector;
};
} // namespace Kernel

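The header above turns the PhysicalMemory alias into a distinct class deriving from the aligned vector. One practical difference between an alias and a derived class (the motivation here is an inference, the diff itself does not state it) is that the derived class is a new type: it can be forward declared and it selects its own overloads rather than matching every identically parameterised std::vector. A small sketch of that distinction with invented names:

#include <cstdio>
#include <vector>

using ByteVector = std::vector<unsigned char>; // an alias: the very same type as the vector

class TaggedMemory : public std::vector<unsigned char> { // a derived class: a new, distinct type
    using std::vector<unsigned char>::vector;
};

void Map(const std::vector<unsigned char>&) {
    std::puts("generic vector overload");
}

void Map(const TaggedMemory&) {
    std::puts("TaggedMemory overload");
}

int main() {
    const ByteVector plain(16);
    const TaggedMemory tagged(16);
    Map(plain);  // picks the generic vector overload
    Map(tagged); // exact match beats the derived-to-base conversion
    return 0;
}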

@@ -317,6 +317,8 @@ void Process::FreeTLSRegion(VAddr tls_address) {
}
void Process::LoadModule(CodeSet module_, VAddr base_addr) {
code_memory_size += module_.memory.size();
const auto memory = std::make_shared<PhysicalMemory>(std::move(module_.memory));
const auto MapSegment = [&](const CodeSet::Segment& segment, VMAPermission permissions,
@@ -332,8 +334,6 @@ void Process::LoadModule(CodeSet module_, VAddr base_addr) {
MapSegment(module_.CodeSegment(), VMAPermission::ReadExecute, MemoryState::Code);
MapSegment(module_.RODataSegment(), VMAPermission::Read, MemoryState::CodeData);
MapSegment(module_.DataSegment(), VMAPermission::ReadWrite, MemoryState::CodeData);
code_memory_size += module_.memory.size();
}
Process::Process(Core::System& system)


@@ -14,7 +14,6 @@
#include "common/logging/log.h"
#include "core/arm/arm_interface.h"
#include "core/core.h"
#include "core/core_cpu.h"
#include "core/core_timing.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/process.h"


@@ -15,7 +15,7 @@
#include "common/string_util.h"
#include "core/arm/exclusive_monitor.h"
#include "core/core.h"
#include "core/core_cpu.h"
#include "core/core_manager.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/hle/kernel/address_arbiter.h"


@@ -13,7 +13,6 @@
#include "common/thread_queue_list.h"
#include "core/arm/arm_interface.h"
#include "core/core.h"
#include "core/core_cpu.h"
#include "core/core_timing.h"
#include "core/core_timing_util.h"
#include "core/hle/kernel/errors.h"
@@ -356,7 +355,7 @@ void Thread::SetActivity(ThreadActivity value) {
// Set status if not waiting
if (status == ThreadStatus::Ready || status == ThreadStatus::Running) {
SetStatus(ThreadStatus::Paused);
Core::System::GetInstance().CpuCore(processor_id).PrepareReschedule();
kernel.PrepareReschedule(processor_id);
}
} else if (status == ThreadStatus::Paused) {
// Ready to reschedule


@@ -3,6 +3,7 @@
// Refer to the license.txt file included.
#include <algorithm>
#include <cstring>
#include <iterator>
#include <utility>
#include "common/alignment.h"
@@ -269,18 +270,9 @@ ResultVal<VAddr> VMManager::SetHeapSize(u64 size) {
// If necessary, expand backing vector to cover new heap extents in
// the case of allocating. Otherwise, shrink the backing memory,
// if a smaller heap has been requested.
const u64 old_heap_size = GetCurrentHeapSize();
if (size > old_heap_size) {
const u64 alloc_size = size - old_heap_size;
heap_memory->insert(heap_memory->end(), alloc_size, 0);
RefreshMemoryBlockMappings(heap_memory.get());
} else if (size < old_heap_size) {
heap_memory->resize(size);
heap_memory->shrink_to_fit();
RefreshMemoryBlockMappings(heap_memory.get());
}
heap_memory->resize(size);
heap_memory->shrink_to_fit();
RefreshMemoryBlockMappings(heap_memory.get());
heap_end = heap_region_base + size;
ASSERT(GetCurrentHeapSize() == heap_memory->size());
@@ -752,24 +744,20 @@ void VMManager::MergeAdjacentVMA(VirtualMemoryArea& left, const VirtualMemoryAre
// Always merge allocated memory blocks, even when they don't share the same backing block.
if (left.type == VMAType::AllocatedMemoryBlock &&
(left.backing_block != right.backing_block || left.offset + left.size != right.offset)) {
const auto right_begin = right.backing_block->begin() + right.offset;
const auto right_end = right_begin + right.size;
// Check if we can save work.
if (left.offset == 0 && left.size == left.backing_block->size()) {
// Fast case: left is an entire backing block.
left.backing_block->insert(left.backing_block->end(), right_begin, right_end);
left.backing_block->resize(left.size + right.size);
std::memcpy(left.backing_block->data() + left.size,
right.backing_block->data() + right.offset, right.size);
} else {
// Slow case: make a new memory block for left and right.
const auto left_begin = left.backing_block->begin() + left.offset;
const auto left_end = left_begin + left.size;
const auto left_size = static_cast<std::size_t>(std::distance(left_begin, left_end));
const auto right_size = static_cast<std::size_t>(std::distance(right_begin, right_end));
auto new_memory = std::make_shared<PhysicalMemory>();
new_memory->reserve(left_size + right_size);
new_memory->insert(new_memory->end(), left_begin, left_end);
new_memory->insert(new_memory->end(), right_begin, right_end);
new_memory->resize(left.size + right.size);
std::memcpy(new_memory->data(), left.backing_block->data() + left.offset, left.size);
std::memcpy(new_memory->data() + left.size, right.backing_block->data() + right.offset,
right.size);
left.backing_block = std::move(new_memory);
left.offset = 0;
@@ -792,8 +780,7 @@ void VMManager::UpdatePageTableForVMA(const VirtualMemoryArea& vma) {
memory.UnmapRegion(page_table, vma.base, vma.size);
break;
case VMAType::AllocatedMemoryBlock:
memory.MapMemoryRegion(page_table, vma.base, vma.size,
vma.backing_block->data() + vma.offset);
memory.MapMemoryRegion(page_table, vma.base, vma.size, *vma.backing_block, vma.offset);
break;
case VMAType::BackingMemory:
memory.MapMemoryRegion(page_table, vma.base, vma.size, vma.backing_memory);


@@ -7,7 +7,6 @@
#include "common/common_types.h"
#include "common/logging/log.h"
#include "core/core.h"
#include "core/core_cpu.h"
#include "core/hle/kernel/kernel.h"
#include "core/hle/kernel/object.h"
#include "core/hle/kernel/process.h"
@@ -96,7 +95,7 @@ void WaitObject::WakeupWaitingThread(std::shared_ptr<Thread> thread) {
}
if (resume) {
thread->ResumeFromWait();
Core::System::GetInstance().PrepareReschedule(thread->GetProcessorID());
kernel.PrepareReschedule(thread->GetProcessorID());
}
}


@@ -250,6 +250,10 @@ void Controller_NPad::RequestPadStateUpdate(u32 npad_id) {
auto& rstick_entry = npad_pad_states[controller_idx].r_stick;
const auto& button_state = buttons[controller_idx];
const auto& analog_state = sticks[controller_idx];
const auto [stick_l_x_f, stick_l_y_f] =
analog_state[static_cast<std::size_t>(JoystickId::Joystick_Left)]->GetStatus();
const auto [stick_r_x_f, stick_r_y_f] =
analog_state[static_cast<std::size_t>(JoystickId::Joystick_Right)]->GetStatus();
using namespace Settings::NativeButton;
pad_state.a.Assign(button_state[A - BUTTON_HID_BEGIN]->GetStatus());
@@ -270,23 +274,32 @@ void Controller_NPad::RequestPadStateUpdate(u32 npad_id) {
pad_state.d_right.Assign(button_state[DRight - BUTTON_HID_BEGIN]->GetStatus());
pad_state.d_down.Assign(button_state[DDown - BUTTON_HID_BEGIN]->GetStatus());
pad_state.l_stick_left.Assign(button_state[LStick_Left - BUTTON_HID_BEGIN]->GetStatus());
pad_state.l_stick_up.Assign(button_state[LStick_Up - BUTTON_HID_BEGIN]->GetStatus());
pad_state.l_stick_right.Assign(button_state[LStick_Right - BUTTON_HID_BEGIN]->GetStatus());
pad_state.l_stick_down.Assign(button_state[LStick_Down - BUTTON_HID_BEGIN]->GetStatus());
pad_state.l_stick_right.Assign(
analog_state[static_cast<std::size_t>(JoystickId::Joystick_Left)]->GetAnalogDirectionStatus(
Input::AnalogDirection::RIGHT));
pad_state.l_stick_left.Assign(
analog_state[static_cast<std::size_t>(JoystickId::Joystick_Left)]->GetAnalogDirectionStatus(
Input::AnalogDirection::LEFT));
pad_state.l_stick_up.Assign(
analog_state[static_cast<std::size_t>(JoystickId::Joystick_Left)]->GetAnalogDirectionStatus(
Input::AnalogDirection::UP));
pad_state.l_stick_down.Assign(
analog_state[static_cast<std::size_t>(JoystickId::Joystick_Left)]->GetAnalogDirectionStatus(
Input::AnalogDirection::DOWN));
pad_state.r_stick_left.Assign(button_state[RStick_Left - BUTTON_HID_BEGIN]->GetStatus());
pad_state.r_stick_up.Assign(button_state[RStick_Up - BUTTON_HID_BEGIN]->GetStatus());
pad_state.r_stick_right.Assign(button_state[RStick_Right - BUTTON_HID_BEGIN]->GetStatus());
pad_state.r_stick_down.Assign(button_state[RStick_Down - BUTTON_HID_BEGIN]->GetStatus());
pad_state.r_stick_up.Assign(analog_state[static_cast<std::size_t>(JoystickId::Joystick_Right)]
->GetAnalogDirectionStatus(Input::AnalogDirection::UP));
pad_state.r_stick_left.Assign(analog_state[static_cast<std::size_t>(JoystickId::Joystick_Right)]
->GetAnalogDirectionStatus(Input::AnalogDirection::LEFT));
pad_state.r_stick_right.Assign(
analog_state[static_cast<std::size_t>(JoystickId::Joystick_Right)]
->GetAnalogDirectionStatus(Input::AnalogDirection::RIGHT));
pad_state.r_stick_down.Assign(analog_state[static_cast<std::size_t>(JoystickId::Joystick_Right)]
->GetAnalogDirectionStatus(Input::AnalogDirection::DOWN));
pad_state.left_sl.Assign(button_state[SL - BUTTON_HID_BEGIN]->GetStatus());
pad_state.left_sr.Assign(button_state[SR - BUTTON_HID_BEGIN]->GetStatus());
const auto [stick_l_x_f, stick_l_y_f] =
analog_state[static_cast<std::size_t>(JoystickId::Joystick_Left)]->GetStatus();
const auto [stick_r_x_f, stick_r_y_f] =
analog_state[static_cast<std::size_t>(JoystickId::Joystick_Right)]->GetStatus();
lstick_entry.x = static_cast<s32>(stick_l_x_f * HID_JOYSTICK_MAX);
lstick_entry.y = static_cast<s32>(stick_l_y_f * HID_JOYSTICK_MAX);
rstick_entry.x = static_cast<s32>(stick_r_x_f * HID_JOYSTICK_MAX);


@@ -9,6 +9,7 @@
#include "core/hle/kernel/writable_event.h"
#include "core/hle/service/nifm/nifm.h"
#include "core/hle/service/service.h"
#include "core/settings.h"
namespace Service::NIFM {
@@ -86,7 +87,12 @@ private:
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.PushEnum(RequestState::Connected);
if (Settings::values.bcat_backend == "none") {
rb.PushEnum(RequestState::NotSubmitted);
} else {
rb.PushEnum(RequestState::Connected);
}
}
void GetResult(Kernel::HLERequestContext& ctx) {
@@ -194,14 +200,22 @@ private:
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.Push<u8>(1);
if (Settings::values.bcat_backend == "none") {
rb.Push<u8>(0);
} else {
rb.Push<u8>(1);
}
}
void IsAnyInternetRequestAccepted(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service_NIFM, "(STUBBED) called");
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.Push<u8>(1);
if (Settings::values.bcat_backend == "none") {
rb.Push<u8>(0);
} else {
rb.Push<u8>(1);
}
}
Core::System& system;
};


@@ -88,6 +88,12 @@ std::optional<u64> NVFlinger::CreateLayer(u64 display_id) {
return layer_id;
}
void NVFlinger::CloseLayer(u64 layer_id) {
for (auto& display : displays) {
display.CloseLayer(layer_id);
}
}
std::optional<u32> NVFlinger::FindBufferQueueId(u64 display_id, u64 layer_id) const {
const auto* const layer = FindLayer(display_id, layer_id);
@@ -192,7 +198,7 @@ void NVFlinger::Compose() {
const auto& igbp_buffer = buffer->get().igbp_buffer;
const auto& gpu = system.GPU();
auto& gpu = system.GPU();
const auto& multi_fence = buffer->get().multi_fence;
for (u32 fence_id = 0; fence_id < multi_fence.num_fences; fence_id++) {
const auto& fence = multi_fence.fences[fence_id];


@@ -54,6 +54,9 @@ public:
/// If an invalid display ID is specified, then an empty optional is returned.
std::optional<u64> CreateLayer(u64 display_id);
/// Closes a layer on all displays for the given layer ID.
void CloseLayer(u64 layer_id);
/// Finds the buffer queue ID of the specified layer in the specified display.
///
/// If an invalid display ID or layer ID is provided, then an empty optional is returned.


@@ -42,6 +42,26 @@ void BSD::Socket(Kernel::HLERequestContext& ctx) {
rb.Push<u32>(0); // bsd errno
}
void BSD::Select(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service, "(STUBBED) called");
IPC::ResponseBuilder rb{ctx, 4};
rb.Push(RESULT_SUCCESS);
rb.Push<u32>(0); // ret
rb.Push<u32>(0); // bsd errno
}
void BSD::Bind(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service, "(STUBBED) called");
IPC::ResponseBuilder rb{ctx, 4};
rb.Push(RESULT_SUCCESS);
rb.Push<u32>(0); // ret
rb.Push<u32>(0); // bsd errno
}
void BSD::Connect(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service, "(STUBBED) called");
@@ -52,6 +72,26 @@ void BSD::Connect(Kernel::HLERequestContext& ctx) {
rb.Push<u32>(0); // bsd errno
}
void BSD::Listen(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service, "(STUBBED) called");
IPC::ResponseBuilder rb{ctx, 4};
rb.Push(RESULT_SUCCESS);
rb.Push<u32>(0); // ret
rb.Push<u32>(0); // bsd errno
}
void BSD::SetSockOpt(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service, "(STUBBED) called");
IPC::ResponseBuilder rb{ctx, 4};
rb.Push(RESULT_SUCCESS);
rb.Push<u32>(0); // ret
rb.Push<u32>(0); // bsd errno
}
void BSD::SendTo(Kernel::HLERequestContext& ctx) {
LOG_WARNING(Service, "(STUBBED) called");
@@ -80,7 +120,7 @@ BSD::BSD(const char* name) : ServiceFramework(name) {
{2, &BSD::Socket, "Socket"},
{3, nullptr, "SocketExempt"},
{4, nullptr, "Open"},
{5, nullptr, "Select"},
{5, &BSD::Select, "Select"},
{6, nullptr, "Poll"},
{7, nullptr, "Sysctl"},
{8, nullptr, "Recv"},
@@ -88,15 +128,15 @@ BSD::BSD(const char* name) : ServiceFramework(name) {
{10, nullptr, "Send"},
{11, &BSD::SendTo, "SendTo"},
{12, nullptr, "Accept"},
{13, nullptr, "Bind"},
{13, &BSD::Bind, "Bind"},
{14, &BSD::Connect, "Connect"},
{15, nullptr, "GetPeerName"},
{16, nullptr, "GetSockName"},
{17, nullptr, "GetSockOpt"},
{18, nullptr, "Listen"},
{18, &BSD::Listen, "Listen"},
{19, nullptr, "Ioctl"},
{20, nullptr, "Fcntl"},
{21, nullptr, "SetSockOpt"},
{21, &BSD::SetSockOpt, "SetSockOpt"},
{22, nullptr, "Shutdown"},
{23, nullptr, "ShutdownAllSockets"},
{24, nullptr, "Write"},


@@ -18,7 +18,11 @@ private:
void RegisterClient(Kernel::HLERequestContext& ctx);
void StartMonitoring(Kernel::HLERequestContext& ctx);
void Socket(Kernel::HLERequestContext& ctx);
void Select(Kernel::HLERequestContext& ctx);
void Bind(Kernel::HLERequestContext& ctx);
void Connect(Kernel::HLERequestContext& ctx);
void Listen(Kernel::HLERequestContext& ctx);
void SetSockOpt(Kernel::HLERequestContext& ctx);
void SendTo(Kernel::HLERequestContext& ctx);
void Close(Kernel::HLERequestContext& ctx);


@@ -820,7 +820,7 @@ static ResultCode ToCalendarTimeImpl(const TimeZoneRule& rules, s64 time, Calend
const ResultCode result{
ToCalendarTimeInternal(rules, time, calendar_time, calendar.additional_info)};
calendar.time.year = static_cast<s16>(calendar_time.year);
calendar.time.month = calendar_time.month;
calendar.time.month = calendar_time.month + 1; // Internal impl. uses 0-indexed month
calendar.time.day = calendar_time.day;
calendar.time.hour = calendar_time.hour;
calendar.time.minute = calendar_time.minute;
@@ -874,7 +874,7 @@ ResultCode TimeZoneManager::ToPosixTime(const TimeZoneRule& rules,
CalendarTimeInternal internal_time{};
internal_time.year = calendar_time.year;
internal_time.month = calendar_time.month;
internal_time.month = calendar_time.month - 1; // Internal impl. uses 0-indexed month
internal_time.day = calendar_time.day;
internal_time.hour = calendar_time.hour;
internal_time.minute = calendar_time.minute;
@@ -1019,6 +1019,15 @@ ResultCode TimeZoneManager::ToPosixTime(const TimeZoneRule& rules,
return RESULT_SUCCESS;
}
ResultCode TimeZoneManager::ToPosixTimeWithMyRule(const CalendarTime& calendar_time,
s64& posix_time) const {
if (is_initialized) {
return ToPosixTime(time_zone_rule, calendar_time, posix_time);
}
posix_time = 0;
return ERROR_UNINITIALIZED_CLOCK;
}
ResultCode TimeZoneManager::GetDeviceLocationName(LocationName& value) const {
if (!is_initialized) {
return ERROR_UNINITIALIZED_CLOCK;

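Both month adjustments above encode the same convention: the internal calendar math works with months 0 to 11 while the HLE-facing CalendarTime uses 1 to 12, so each conversion direction applies the opposite offset. A tiny sketch of the round trip (helper names invented):

#include <cassert>

constexpr int ToInternalMonth(int hle_month) {
    return hle_month - 1; // the ToPosixTime direction
}

constexpr int ToHleMonth(int internal_month) {
    return internal_month + 1; // the ToCalendarTimeImpl direction
}

int main() {
    for (int month = 1; month <= 12; ++month) {
        assert(ToHleMonth(ToInternalMonth(month)) == month);
    }
    return 0;
}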

@@ -39,6 +39,7 @@ public:
ResultCode ParseTimeZoneRuleBinary(TimeZoneRule& rules, FileSys::VirtualFile& vfs_file) const;
ResultCode ToPosixTime(const TimeZoneRule& rules, const CalendarTime& calendar_time,
s64& posix_time) const;
ResultCode ToPosixTimeWithMyRule(const CalendarTime& calendar_time, s64& posix_time) const;
private:
bool is_initialized{};


@@ -22,7 +22,7 @@ ITimeZoneService::ITimeZoneService(TimeZone::TimeZoneContentManager& time_zone_
{100, &ITimeZoneService::ToCalendarTime, "ToCalendarTime"},
{101, &ITimeZoneService::ToCalendarTimeWithMyRule, "ToCalendarTimeWithMyRule"},
{201, &ITimeZoneService::ToPosixTime, "ToPosixTime"},
{202, nullptr, "ToPosixTimeWithMyRule"},
{202, &ITimeZoneService::ToPosixTimeWithMyRule, "ToPosixTimeWithMyRule"},
};
RegisterHandlers(functions);
}
@@ -145,4 +145,26 @@ void ITimeZoneService::ToPosixTime(Kernel::HLERequestContext& ctx) {
ctx.WriteBuffer(&posix_time, sizeof(s64));
}
void ITimeZoneService::ToPosixTimeWithMyRule(Kernel::HLERequestContext& ctx) {
LOG_DEBUG(Service_Time, "called");
IPC::RequestParser rp{ctx};
const auto calendar_time{rp.PopRaw<TimeZone::CalendarTime>()};
s64 posix_time{};
if (const ResultCode result{
time_zone_content_manager.GetTimeZoneManager().ToPosixTimeWithMyRule(calendar_time,
posix_time)};
result != RESULT_SUCCESS) {
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(result);
return;
}
IPC::ResponseBuilder rb{ctx, 3};
rb.Push(RESULT_SUCCESS);
rb.PushRaw<u32>(1); // Number of times we're returning
ctx.WriteBuffer(&posix_time, sizeof(s64));
}
} // namespace Service::Time


@@ -22,6 +22,7 @@ private:
void ToCalendarTime(Kernel::HLERequestContext& ctx);
void ToCalendarTimeWithMyRule(Kernel::HLERequestContext& ctx);
void ToPosixTime(Kernel::HLERequestContext& ctx);
void ToPosixTimeWithMyRule(Kernel::HLERequestContext& ctx);
private:
TimeZone::TimeZoneContentManager& time_zone_content_manager;


@@ -24,11 +24,11 @@ Display::Display(u64 id, std::string name, Core::System& system) : id{id}, name{
Display::~Display() = default;
Layer& Display::GetLayer(std::size_t index) {
return layers.at(index);
return *layers.at(index);
}
const Layer& Display::GetLayer(std::size_t index) const {
return layers.at(index);
return *layers.at(index);
}
std::shared_ptr<Kernel::ReadableEvent> Display::GetVSyncEvent() const {
@@ -43,29 +43,38 @@ void Display::CreateLayer(u64 id, NVFlinger::BufferQueue& buffer_queue) {
// TODO(Subv): Support more than 1 layer.
ASSERT_MSG(layers.empty(), "Only one layer is supported per display at the moment");
layers.emplace_back(id, buffer_queue);
layers.emplace_back(std::make_shared<Layer>(id, buffer_queue));
}
void Display::CloseLayer(u64 id) {
layers.erase(
std::remove_if(layers.begin(), layers.end(),
[id](const std::shared_ptr<Layer>& layer) { return layer->GetID() == id; }),
layers.end());
}
Layer* Display::FindLayer(u64 id) {
const auto itr = std::find_if(layers.begin(), layers.end(),
[id](const VI::Layer& layer) { return layer.GetID() == id; });
const auto itr =
std::find_if(layers.begin(), layers.end(),
[id](const std::shared_ptr<Layer>& layer) { return layer->GetID() == id; });
if (itr == layers.end()) {
return nullptr;
}
return &*itr;
return itr->get();
}
const Layer* Display::FindLayer(u64 id) const {
const auto itr = std::find_if(layers.begin(), layers.end(),
[id](const VI::Layer& layer) { return layer.GetID() == id; });
const auto itr =
std::find_if(layers.begin(), layers.end(),
[id](const std::shared_ptr<Layer>& layer) { return layer->GetID() == id; });
if (itr == layers.end()) {
return nullptr;
}
return &*itr;
return itr->get();
}
} // namespace Service::VI

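Display::CloseLayer above uses the erase/remove_if idiom on the new shared_ptr-based layer list: remove_if shifts the surviving elements forward and returns the start of the leftover tail, and erase then drops that tail. A minimal sketch with invented element types:

#include <algorithm>
#include <cstdio>
#include <memory>
#include <vector>

int main() {
    std::vector<std::shared_ptr<int>> layers;
    for (int id = 0; id < 4; ++id) {
        layers.push_back(std::make_shared<int>(id));
    }

    const int id_to_close = 2;
    layers.erase(std::remove_if(layers.begin(), layers.end(),
                                [id_to_close](const std::shared_ptr<int>& layer) {
                                    return *layer == id_to_close;
                                }),
                 layers.end());

    std::printf("layers left: %zu\n", layers.size());
    return 0;
}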

@@ -4,6 +4,7 @@
#pragma once
#include <memory>
#include <string>
#include <vector>
@@ -69,6 +70,12 @@ public:
///
void CreateLayer(u64 id, NVFlinger::BufferQueue& buffer_queue);
/// Closes and removes a layer from this display with the given ID.
///
/// @param id The ID assigned to the layer to close.
///
void CloseLayer(u64 id);
/// Attempts to find a layer with the given ID.
///
/// @param id The layer ID.
@@ -91,7 +98,7 @@ private:
u64 id;
std::string name;
std::vector<Layer> layers;
std::vector<std::shared_ptr<Layer>> layers;
Kernel::EventPair vsync_event;
};


@@ -1066,6 +1066,18 @@ private:
rb.Push<u64>(ctx.WriteBuffer(native_window.Serialize()));
}
void CloseLayer(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
const auto layer_id{rp.Pop<u64>()};
LOG_DEBUG(Service_VI, "called. layer_id=0x{:016X}", layer_id);
nv_flinger->CloseLayer(layer_id);
IPC::ResponseBuilder rb{ctx, 2};
rb.Push(RESULT_SUCCESS);
}
void CreateStrayLayer(Kernel::HLERequestContext& ctx) {
IPC::RequestParser rp{ctx};
const u32 flags = rp.Pop<u32>();
@@ -1178,7 +1190,7 @@ IApplicationDisplayService::IApplicationDisplayService(
{1101, &IApplicationDisplayService::SetDisplayEnabled, "SetDisplayEnabled"},
{1102, &IApplicationDisplayService::GetDisplayResolution, "GetDisplayResolution"},
{2020, &IApplicationDisplayService::OpenLayer, "OpenLayer"},
{2021, nullptr, "CloseLayer"},
{2021, &IApplicationDisplayService::CloseLayer, "CloseLayer"},
{2030, &IApplicationDisplayService::CreateStrayLayer, "CreateStrayLayer"},
{2031, &IApplicationDisplayService::DestroyStrayLayer, "DestroyStrayLayer"},
{2101, &IApplicationDisplayService::SetLayerScalingMode, "SetLayerScalingMode"},


@@ -335,7 +335,8 @@ Kernel::CodeSet ElfReader::LoadInto(VAddr vaddr) {
codeset_segment->addr = segment_addr;
codeset_segment->size = aligned_size;
memcpy(&program_image[current_image_position], GetSegmentPtr(i), p->p_filesz);
std::memcpy(program_image.data() + current_image_position, GetSegmentPtr(i),
p->p_filesz);
current_image_position += aligned_size;
}
}

View File

@@ -2,6 +2,7 @@
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <cstring>
#include "core/file_sys/kernel_executable.h"
#include "core/file_sys/program_metadata.h"
#include "core/gdbstub/gdbstub.h"
@@ -76,8 +77,8 @@ AppLoader::LoadResult AppLoader_KIP::Load(Kernel::Process& process) {
segment.addr = offset;
segment.offset = offset;
segment.size = PageAlignSize(static_cast<u32>(data.size()));
program_image.resize(offset);
program_image.insert(program_image.end(), data.begin(), data.end());
program_image.resize(offset + data.size());
std::memcpy(program_image.data() + offset, data.data(), data.size());
};
load_segment(codeset.CodeSegment(), kip->GetTextSection(), kip->GetTextOffset());
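
Both the KIP and NSO loaders now size the image buffer to exactly offset + data.size() and memcpy the segment into place, instead of resize(offset) followed by an append, which only behaves correctly when segments arrive strictly in order. A small sketch of the pattern, with a hypothetical PlaceSegment helper and a slightly defensive resize (the loader above resizes unconditionally because it already knows the segment layout):

    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Hypothetical helper: write `data` at byte `offset` inside `image`,
    // growing the buffer just enough to hold it.
    void PlaceSegment(std::vector<std::uint8_t>& image, std::size_t offset,
                      const std::vector<std::uint8_t>& data) {
        if (image.size() < offset + data.size()) {
            image.resize(offset + data.size());
        }
        std::memcpy(image.data() + offset, data.data(), data.size());
    }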

View File

@@ -3,6 +3,7 @@
// Refer to the license.txt file included.
#include <cinttypes>
#include <cstring>
#include <vector>
#include "common/common_funcs.h"
@@ -96,15 +97,21 @@ std::optional<VAddr> AppLoader_NSO::LoadModule(Kernel::Process& process,
if (nso_header.IsSegmentCompressed(i)) {
data = DecompressSegment(data, nso_header.segments[i]);
}
program_image.resize(nso_header.segments[i].location);
program_image.insert(program_image.end(), data.begin(), data.end());
program_image.resize(nso_header.segments[i].location +
PageAlignSize(static_cast<u32>(data.size())));
std::memcpy(program_image.data() + nso_header.segments[i].location, data.data(),
data.size());
codeset.segments[i].addr = nso_header.segments[i].location;
codeset.segments[i].offset = nso_header.segments[i].location;
codeset.segments[i].size = PageAlignSize(static_cast<u32>(data.size()));
}
if (should_pass_arguments && !Settings::values.program_args.empty()) {
const auto arg_data = Settings::values.program_args;
if (should_pass_arguments) {
std::vector<u8> arg_data{Settings::values.program_args.begin(),
Settings::values.program_args.end()};
if (arg_data.empty()) {
arg_data.resize(NSO_ARGUMENT_DEFAULT_SIZE);
}
codeset.DataSegment().size += NSO_ARGUMENT_DATA_ALLOCATION_SIZE;
NSOArgumentHeader args_header{
NSO_ARGUMENT_DATA_ALLOCATION_SIZE, static_cast<u32_le>(arg_data.size()), {}};
@@ -139,12 +146,12 @@ std::optional<VAddr> AppLoader_NSO::LoadModule(Kernel::Process& process,
std::vector<u8> pi_header;
pi_header.insert(pi_header.begin(), reinterpret_cast<u8*>(&nso_header),
reinterpret_cast<u8*>(&nso_header) + sizeof(NSOHeader));
pi_header.insert(pi_header.begin() + sizeof(NSOHeader), program_image.begin(),
program_image.end());
pi_header.insert(pi_header.begin() + sizeof(NSOHeader), program_image.data(),
program_image.data() + program_image.size());
pi_header = pm->PatchNSO(pi_header, file.GetName());
std::copy(pi_header.begin() + sizeof(NSOHeader), pi_header.end(), program_image.begin());
std::copy(pi_header.begin() + sizeof(NSOHeader), pi_header.end(), program_image.data());
}
// Apply cheats if they exist and the program has a valid title ID
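
The patching path above serializes the NSO header followed by the program image into one contiguous buffer, hands it to the patcher, and copies only the image portion back. A compact sketch of that round trip; Patch is a hypothetical stand-in for pm->PatchNSO and is assumed to keep the buffer size unchanged:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Hypothetical stand-in for pm->PatchNSO.
    std::vector<std::uint8_t> Patch(std::vector<std::uint8_t> buffer) { return buffer; }

    void PatchImage(const std::uint8_t* header, std::size_t header_size,
                    std::vector<std::uint8_t>& image) {
        std::vector<std::uint8_t> buffer(header, header + header_size);
        buffer.insert(buffer.end(), image.data(), image.data() + image.size());
        buffer = Patch(std::move(buffer));
        // Discard the header and copy the (patched) image bytes back in place.
        // (Assumes the patcher preserves the buffer length.)
        std::copy(buffer.begin() + header_size, buffer.end(), image.data());
    }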

View File

@@ -56,6 +56,8 @@ static_assert(sizeof(NSOHeader) == 0x100, "NSOHeader has incorrect size.");
static_assert(std::is_trivially_copyable_v<NSOHeader>, "NSOHeader must be trivially copyable.");
constexpr u64 NSO_ARGUMENT_DATA_ALLOCATION_SIZE = 0x9000;
// NOTE: Official software default argument state is unverified.
constexpr u64 NSO_ARGUMENT_DEFAULT_SIZE = 1;
struct NSOArgumentHeader {
u32_le allocated_size;

View File

@@ -14,6 +14,7 @@
#include "common/swap.h"
#include "core/arm/arm_interface.h"
#include "core/core.h"
#include "core/hle/kernel/physical_memory.h"
#include "core/hle/kernel/process.h"
#include "core/hle/kernel/vm_manager.h"
#include "core/memory.h"
@@ -38,6 +39,11 @@ struct Memory::Impl {
system.ArmInterface(3).PageTableChanged(*current_page_table, address_space_width);
}
void MapMemoryRegion(Common::PageTable& page_table, VAddr base, u64 size,
Kernel::PhysicalMemory& memory, VAddr offset) {
MapMemoryRegion(page_table, base, size, memory.data() + offset);
}
void MapMemoryRegion(Common::PageTable& page_table, VAddr base, u64 size, u8* target) {
ASSERT_MSG((size & PAGE_MASK) == 0, "non-page aligned size: {:016X}", size);
ASSERT_MSG((base & PAGE_MASK) == 0, "non-page aligned base: {:016X}", base);
@@ -146,7 +152,7 @@ struct Memory::Impl {
u8* GetPointer(const VAddr vaddr) {
u8* const page_pointer = current_page_table->pointers[vaddr >> PAGE_BITS];
if (page_pointer != nullptr) {
return page_pointer + (vaddr & PAGE_MASK);
return page_pointer + vaddr;
}
if (current_page_table->attributes[vaddr >> PAGE_BITS] ==
@@ -229,7 +235,8 @@ struct Memory::Impl {
case Common::PageType::Memory: {
DEBUG_ASSERT(page_table.pointers[page_index]);
const u8* const src_ptr = page_table.pointers[page_index] + page_offset;
const u8* const src_ptr =
page_table.pointers[page_index] + page_offset + (page_index << PAGE_BITS);
std::memcpy(dest_buffer, src_ptr, copy_amount);
break;
}
@@ -276,7 +283,8 @@ struct Memory::Impl {
case Common::PageType::Memory: {
DEBUG_ASSERT(page_table.pointers[page_index]);
u8* const dest_ptr = page_table.pointers[page_index] + page_offset;
u8* const dest_ptr =
page_table.pointers[page_index] + page_offset + (page_index << PAGE_BITS);
std::memcpy(dest_ptr, src_buffer, copy_amount);
break;
}
@@ -322,7 +330,8 @@ struct Memory::Impl {
case Common::PageType::Memory: {
DEBUG_ASSERT(page_table.pointers[page_index]);
u8* dest_ptr = page_table.pointers[page_index] + page_offset;
u8* dest_ptr =
page_table.pointers[page_index] + page_offset + (page_index << PAGE_BITS);
std::memset(dest_ptr, 0, copy_amount);
break;
}
@@ -368,7 +377,8 @@ struct Memory::Impl {
}
case Common::PageType::Memory: {
DEBUG_ASSERT(page_table.pointers[page_index]);
const u8* src_ptr = page_table.pointers[page_index] + page_offset;
const u8* src_ptr =
page_table.pointers[page_index] + page_offset + (page_index << PAGE_BITS);
WriteBlock(process, dest_addr, src_ptr, copy_amount);
break;
}
@@ -446,7 +456,8 @@ struct Memory::Impl {
page_type = Common::PageType::Unmapped;
} else {
page_type = Common::PageType::Memory;
current_page_table->pointers[vaddr >> PAGE_BITS] = pointer;
current_page_table->pointers[vaddr >> PAGE_BITS] =
pointer - (vaddr & ~PAGE_MASK);
}
break;
}
@@ -493,7 +504,9 @@ struct Memory::Impl {
memory);
} else {
while (base != end) {
page_table.pointers[base] = memory;
page_table.pointers[base] = memory - (base << PAGE_BITS);
ASSERT_MSG(page_table.pointers[base],
"memory mapping base yield a nullptr within the table");
base += 1;
memory += PAGE_SIZE;
@@ -518,7 +531,7 @@ struct Memory::Impl {
if (page_pointer != nullptr) {
// NOTE: Avoid adding any extra logic to this fast-path block
T value;
std::memcpy(&value, &page_pointer[vaddr & PAGE_MASK], sizeof(T));
std::memcpy(&value, &page_pointer[vaddr], sizeof(T));
return value;
}
@@ -559,7 +572,7 @@ struct Memory::Impl {
u8* const page_pointer = current_page_table->pointers[vaddr >> PAGE_BITS];
if (page_pointer != nullptr) {
// NOTE: Avoid adding any extra logic to this fast-path block
std::memcpy(&page_pointer[vaddr & PAGE_MASK], &data, sizeof(T));
std::memcpy(&page_pointer[vaddr], &data, sizeof(T));
return;
}
@@ -594,6 +607,11 @@ void Memory::SetCurrentPageTable(Kernel::Process& process) {
impl->SetCurrentPageTable(process);
}
void Memory::MapMemoryRegion(Common::PageTable& page_table, VAddr base, u64 size,
Kernel::PhysicalMemory& memory, VAddr offset) {
impl->MapMemoryRegion(page_table, base, size, memory, offset);
}
void Memory::MapMemoryRegion(Common::PageTable& page_table, VAddr base, u64 size, u8* target) {
impl->MapMemoryRegion(page_table, base, size, target);
}
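
The page-table hunks above all implement one idea: instead of storing a pointer to the start of each host page and masking the virtual address on every access, the table stores the pointer pre-biased by the virtual page base, so a lookup becomes pointers[vaddr >> PAGE_BITS] + vaddr. A toy sketch of the technique (not the emulator's types; it relies on the same flat pointer arithmetic the emulator does):

    #include <array>
    #include <cstdint>

    constexpr std::uint64_t PAGE_BITS = 12;
    constexpr std::uint64_t PAGE_SIZE = 1ULL << PAGE_BITS;
    constexpr std::uint64_t PAGE_MASK = PAGE_SIZE - 1;

    struct ToyPageTable {
        std::array<std::uint8_t*, 16> pointers{};

        void Map(std::uint64_t vaddr, std::uint8_t* host) {
            // Store the host pointer minus the virtual page base...
            pointers[vaddr >> PAGE_BITS] = host - (vaddr & ~PAGE_MASK);
        }

        std::uint8_t* Translate(std::uint64_t vaddr) const {
            // ...so the hot path adds the full virtual address, no masking needed.
            return pointers[vaddr >> PAGE_BITS] + vaddr;
        }
    };

    // Usage: after Map(0x2000, ram.data()), Translate(0x2345) == ram.data() + 0x345.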

View File

@@ -19,8 +19,9 @@ class System;
}
namespace Kernel {
class PhysicalMemory;
class Process;
}
} // namespace Kernel
namespace Memory {
@@ -65,6 +66,19 @@ public:
*/
void SetCurrentPageTable(Kernel::Process& process);
/**
* Maps a physical buffer onto a region of the emulated process address space.
*
* @param page_table The page table of the emulated process.
* @param base The address to start mapping at. Must be page-aligned.
* @param size The amount of bytes to map. Must be page-aligned.
* @param memory Physical buffer with the memory backing the mapping. Must be of length
* at least `size + offset`.
* @param offset The offset within the physical memory. Must be page-aligned.
*/
void MapMemoryRegion(Common::PageTable& page_table, VAddr base, u64 size,
Kernel::PhysicalMemory& memory, VAddr offset);
/**
* Maps an allocated buffer onto a region of the emulated process address space.
*

View File

@@ -401,6 +401,9 @@ struct Values {
std::string motion_device;
TouchscreenInput touchscreen;
std::atomic_bool is_device_reload_pending{true};
std::string udp_input_address;
u16 udp_input_port;
u8 udp_pad_index;
// Core
bool use_multi_core;

View File

@@ -9,6 +9,12 @@ add_library(input_common STATIC
motion_emu.h
sdl/sdl.cpp
sdl/sdl.h
udp/client.cpp
udp/client.h
udp/protocol.cpp
udp/protocol.h
udp/udp.cpp
udp/udp.h
)
if(SDL2_FOUND)
@@ -21,4 +27,4 @@ if(SDL2_FOUND)
endif()
create_target_directory_groups(input_common)
target_link_libraries(input_common PUBLIC core PRIVATE common)
target_link_libraries(input_common PUBLIC core PRIVATE common ${Boost_LIBRARIES})

View File

@@ -9,6 +9,7 @@
#include "input_common/keyboard.h"
#include "input_common/main.h"
#include "input_common/motion_emu.h"
#include "input_common/udp/udp.h"
#ifdef HAVE_SDL2
#include "input_common/sdl/sdl.h"
#endif
@@ -18,6 +19,7 @@ namespace InputCommon {
static std::shared_ptr<Keyboard> keyboard;
static std::shared_ptr<MotionEmu> motion_emu;
static std::unique_ptr<SDL::State> sdl;
static std::unique_ptr<CemuhookUDP::State> udp;
void Init() {
keyboard = std::make_shared<Keyboard>();
@@ -28,6 +30,8 @@ void Init() {
Input::RegisterFactory<Input::MotionDevice>("motion_emu", motion_emu);
sdl = SDL::Init();
udp = CemuhookUDP::Init();
}
void Shutdown() {
@@ -72,11 +76,13 @@ std::string GenerateAnalogParamFromKeys(int key_up, int key_down, int key_left,
namespace Polling {
std::vector<std::unique_ptr<DevicePoller>> GetPollers(DeviceType type) {
std::vector<std::unique_ptr<DevicePoller>> pollers;
#ifdef HAVE_SDL2
return sdl->GetPollers(type);
#else
return {};
pollers = sdl->GetPollers(type);
#endif
return pollers;
}
} // namespace Polling

View File

@@ -342,6 +342,22 @@ public:
return std::make_tuple<float, float>(0.0f, 0.0f);
}
bool GetAnalogDirectionStatus(Input::AnalogDirection direction) const override {
const auto [x, y] = GetStatus();
const float directional_deadzone = 0.4f;
switch (direction) {
case Input::AnalogDirection::RIGHT:
return x > directional_deadzone;
case Input::AnalogDirection::LEFT:
return x < -directional_deadzone;
case Input::AnalogDirection::UP:
return y > directional_deadzone;
case Input::AnalogDirection::DOWN:
return y < -directional_deadzone;
}
return false;
}
private:
std::shared_ptr<SDLJoystick> joystick;
const int axis_x;
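
GetAnalogDirectionStatus turns a free-moving analog pair into four digital directions by requiring the axis to move past a fixed 0.4 deadzone. A standalone sketch of the same mapping (the enum and function name here are illustrative, not yuzu's):

    #include <utility>

    enum class Direction { Right, Left, Up, Down };

    // An axis only counts as "pressed" once it moves past the directional deadzone.
    bool IsDirectionActive(std::pair<float, float> stick, Direction direction,
                           float deadzone = 0.4f) {
        const auto [x, y] = stick;
        switch (direction) {
        case Direction::Right: return x > deadzone;
        case Direction::Left:  return x < -deadzone;
        case Direction::Up:    return y > deadzone;
        case Direction::Down:  return y < -deadzone;
        }
        return false;
    }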

View File

@@ -0,0 +1,287 @@
// Copyright 2018 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <algorithm>
#include <array>
#include <chrono>
#include <cstring>
#include <functional>
#include <thread>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include "common/logging/log.h"
#include "input_common/udp/client.h"
#include "input_common/udp/protocol.h"
using boost::asio::ip::address_v4;
using boost::asio::ip::udp;
namespace InputCommon::CemuhookUDP {
struct SocketCallback {
std::function<void(Response::Version)> version;
std::function<void(Response::PortInfo)> port_info;
std::function<void(Response::PadData)> pad_data;
};
class Socket {
public:
using clock = std::chrono::system_clock;
explicit Socket(const std::string& host, u16 port, u8 pad_index, u32 client_id,
SocketCallback callback)
: client_id(client_id), timer(io_service),
send_endpoint(udp::endpoint(address_v4::from_string(host), port)),
socket(io_service, udp::endpoint(udp::v4(), 0)), pad_index(pad_index),
callback(std::move(callback)) {}
void Stop() {
io_service.stop();
}
void Loop() {
io_service.run();
}
void StartSend(const clock::time_point& from) {
timer.expires_at(from + std::chrono::seconds(3));
timer.async_wait([this](const boost::system::error_code& error) { HandleSend(error); });
}
void StartReceive() {
socket.async_receive_from(
boost::asio::buffer(receive_buffer), receive_endpoint,
[this](const boost::system::error_code& error, std::size_t bytes_transferred) {
HandleReceive(error, bytes_transferred);
});
}
private:
void HandleReceive(const boost::system::error_code& error, std::size_t bytes_transferred) {
if (auto type = Response::Validate(receive_buffer.data(), bytes_transferred)) {
switch (*type) {
case Type::Version: {
Response::Version version;
std::memcpy(&version, &receive_buffer[sizeof(Header)], sizeof(Response::Version));
callback.version(std::move(version));
break;
}
case Type::PortInfo: {
Response::PortInfo port_info;
std::memcpy(&port_info, &receive_buffer[sizeof(Header)],
sizeof(Response::PortInfo));
callback.port_info(std::move(port_info));
break;
}
case Type::PadData: {
Response::PadData pad_data;
std::memcpy(&pad_data, &receive_buffer[sizeof(Header)], sizeof(Response::PadData));
callback.pad_data(std::move(pad_data));
break;
}
}
}
StartReceive();
}
void HandleSend(const boost::system::error_code& error) {
// Send a request for getting port info for the pad
Request::PortInfo port_info{1, {pad_index, 0, 0, 0}};
const auto port_message = Request::Create(port_info, client_id);
std::memcpy(&send_buffer1, &port_message, PORT_INFO_SIZE);
socket.send_to(boost::asio::buffer(send_buffer1), send_endpoint);
// Send a request for getting pad data for the pad
Request::PadData pad_data{Request::PadData::Flags::Id, pad_index, EMPTY_MAC_ADDRESS};
const auto pad_message = Request::Create(pad_data, client_id);
std::memcpy(send_buffer2.data(), &pad_message, PAD_DATA_SIZE);
socket.send_to(boost::asio::buffer(send_buffer2), send_endpoint);
StartSend(timer.expiry());
}
SocketCallback callback;
boost::asio::io_service io_service;
boost::asio::basic_waitable_timer<clock> timer;
udp::socket socket;
u32 client_id{};
u8 pad_index{};
static constexpr std::size_t PORT_INFO_SIZE = sizeof(Message<Request::PortInfo>);
static constexpr std::size_t PAD_DATA_SIZE = sizeof(Message<Request::PadData>);
std::array<u8, PORT_INFO_SIZE> send_buffer1;
std::array<u8, PAD_DATA_SIZE> send_buffer2;
udp::endpoint send_endpoint;
std::array<u8, MAX_PACKET_SIZE> receive_buffer;
udp::endpoint receive_endpoint;
};
static void SocketLoop(Socket* socket) {
socket->StartReceive();
socket->StartSend(Socket::clock::now());
socket->Loop();
}
Client::Client(std::shared_ptr<DeviceStatus> status, const std::string& host, u16 port,
u8 pad_index, u32 client_id)
: status(status) {
StartCommunication(host, port, pad_index, client_id);
}
Client::~Client() {
socket->Stop();
thread.join();
}
void Client::ReloadSocket(const std::string& host, u16 port, u8 pad_index, u32 client_id) {
socket->Stop();
thread.join();
StartCommunication(host, port, pad_index, client_id);
}
void Client::OnVersion(Response::Version data) {
LOG_TRACE(Input, "Version packet received: {}", data.version);
}
void Client::OnPortInfo(Response::PortInfo data) {
LOG_TRACE(Input, "PortInfo packet received: {}", data.model);
}
void Client::OnPadData(Response::PadData data) {
LOG_TRACE(Input, "PadData packet received");
if (data.packet_counter <= packet_sequence) {
LOG_WARNING(
Input,
"PadData packet dropped because its stale info. Current count: {} Packet count: {}",
packet_sequence, data.packet_counter);
return;
}
packet_sequence = data.packet_counter;
// TODO: Check how the Switch handles motions and how the CemuhookUDP motion
// directions correspond to the ones of the Switch
Common::Vec3f accel = Common::MakeVec<float>(data.accel.x, data.accel.y, data.accel.z);
Common::Vec3f gyro = Common::MakeVec<float>(data.gyro.pitch, data.gyro.yaw, data.gyro.roll);
{
std::lock_guard guard(status->update_mutex);
status->motion_status = {accel, gyro};
// TODO: add a setting for "click" touch. Click touch refers to a device that differentiates
// between a simple "tap" and a hard press that causes the touch screen to click.
const bool is_active = data.touch_1.is_active != 0;
float x = 0;
float y = 0;
if (is_active && status->touch_calibration) {
const u16 min_x = status->touch_calibration->min_x;
const u16 max_x = status->touch_calibration->max_x;
const u16 min_y = status->touch_calibration->min_y;
const u16 max_y = status->touch_calibration->max_y;
x = (std::clamp(static_cast<u16>(data.touch_1.x), min_x, max_x) - min_x) /
static_cast<float>(max_x - min_x);
y = (std::clamp(static_cast<u16>(data.touch_1.y), min_y, max_y) - min_y) /
static_cast<float>(max_y - min_y);
}
status->touch_status = {x, y, is_active};
}
}
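
OnPadData converts raw touch coordinates into the [0, 1] range by clamping them to the calibration rectangle and scaling by its extent. The same arithmetic as a small helper (the function name is illustrative):

    #include <algorithm>
    #include <cstdint>

    // Clamp-then-scale, exactly as in OnPadData above.
    float NormalizeTouch(std::uint16_t raw, std::uint16_t min, std::uint16_t max) {
        const std::uint16_t clamped = std::clamp(raw, min, max);
        return static_cast<float>(clamped - min) / static_cast<float>(max - min);
    }

    // With the DS4-style defaults used by the UDP touch factory (min_x = 100, max_x = 1800),
    // a raw X of 950 maps to (950 - 100) / 1700 = 0.5.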
void Client::StartCommunication(const std::string& host, u16 port, u8 pad_index, u32 client_id) {
SocketCallback callback{[this](Response::Version version) { OnVersion(version); },
[this](Response::PortInfo info) { OnPortInfo(info); },
[this](Response::PadData data) { OnPadData(data); }};
LOG_INFO(Input, "Starting communication with UDP input server on {}:{}", host, port);
socket = std::make_unique<Socket>(host, port, pad_index, client_id, callback);
thread = std::thread{SocketLoop, this->socket.get()};
}
void TestCommunication(const std::string& host, u16 port, u8 pad_index, u32 client_id,
std::function<void()> success_callback,
std::function<void()> failure_callback) {
std::thread([=] {
Common::Event success_event;
SocketCallback callback{[](Response::Version version) {}, [](Response::PortInfo info) {},
[&](Response::PadData data) { success_event.Set(); }};
Socket socket{host, port, pad_index, client_id, callback};
std::thread worker_thread{SocketLoop, &socket};
bool result = success_event.WaitFor(std::chrono::seconds(8));
socket.Stop();
worker_thread.join();
if (result) {
success_callback();
} else {
failure_callback();
}
})
.detach();
}
CalibrationConfigurationJob::CalibrationConfigurationJob(
const std::string& host, u16 port, u8 pad_index, u32 client_id,
std::function<void(Status)> status_callback,
std::function<void(u16, u16, u16, u16)> data_callback) {
std::thread([=] {
constexpr u16 CALIBRATION_THRESHOLD = 100;
u16 min_x{UINT16_MAX};
u16 min_y{UINT16_MAX};
u16 max_x{};
u16 max_y{};
Status current_status{Status::Initialized};
SocketCallback callback{[](Response::Version version) {}, [](Response::PortInfo info) {},
[&](Response::PadData data) {
if (current_status == Status::Initialized) {
// Receiving data means the communication is ready now
current_status = Status::Ready;
status_callback(current_status);
}
if (!data.touch_1.is_active) {
return;
}
LOG_DEBUG(Input, "Current touch: {} {}", data.touch_1.x,
data.touch_1.y);
min_x = std::min(min_x, static_cast<u16>(data.touch_1.x));
min_y = std::min(min_y, static_cast<u16>(data.touch_1.y));
if (current_status == Status::Ready) {
// First touch - min data (min_x/min_y)
current_status = Status::Stage1Completed;
status_callback(current_status);
}
if (data.touch_1.x - min_x > CALIBRATION_THRESHOLD &&
data.touch_1.y - min_y > CALIBRATION_THRESHOLD) {
// Set the current position as the max value and finish
// the configuration
max_x = data.touch_1.x;
max_y = data.touch_1.y;
current_status = Status::Completed;
data_callback(min_x, min_y, max_x, max_y);
status_callback(current_status);
complete_event.Set();
}
}};
Socket socket{host, port, pad_index, client_id, callback};
std::thread worker_thread{SocketLoop, &socket};
complete_event.Wait();
socket.Stop();
worker_thread.join();
})
.detach();
}
CalibrationConfigurationJob::~CalibrationConfigurationJob() {
Stop();
}
void CalibrationConfigurationJob::Stop() {
complete_event.Set();
}
} // namespace InputCommon::CemuhookUDP

View File

@@ -0,0 +1,96 @@
// Copyright 2018 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <functional>
#include <memory>
#include <mutex>
#include <optional>
#include <string>
#include <thread>
#include <tuple>
#include <vector>
#include "common/common_types.h"
#include "common/thread.h"
#include "common/vector_math.h"
namespace InputCommon::CemuhookUDP {
constexpr u16 DEFAULT_PORT = 26760;
constexpr char DEFAULT_ADDR[] = "127.0.0.1";
class Socket;
namespace Response {
struct PadData;
struct PortInfo;
struct Version;
} // namespace Response
struct DeviceStatus {
std::mutex update_mutex;
std::tuple<Common::Vec3<float>, Common::Vec3<float>> motion_status;
std::tuple<float, float, bool> touch_status;
// Calibration data for scaling the device's touch area to the emulated touch screen
struct CalibrationData {
u16 min_x{};
u16 min_y{};
u16 max_x{};
u16 max_y{};
};
std::optional<CalibrationData> touch_calibration;
};
class Client {
public:
explicit Client(std::shared_ptr<DeviceStatus> status, const std::string& host = DEFAULT_ADDR,
u16 port = DEFAULT_PORT, u8 pad_index = 0, u32 client_id = 24872);
~Client();
void ReloadSocket(const std::string& host = "127.0.0.1", u16 port = 26760, u8 pad_index = 0,
u32 client_id = 24872);
private:
void OnVersion(Response::Version);
void OnPortInfo(Response::PortInfo);
void OnPadData(Response::PadData);
void StartCommunication(const std::string& host, u16 port, u8 pad_index, u32 client_id);
std::unique_ptr<Socket> socket;
std::shared_ptr<DeviceStatus> status;
std::thread thread;
u64 packet_sequence = 0;
};
/// An async job allowing configuration of the touchpad calibration.
class CalibrationConfigurationJob {
public:
enum class Status {
Initialized,
Ready,
Stage1Completed,
Completed,
};
/**
* Constructs and starts the job with the specified parameter.
*
* @param status_callback Callback for job status updates
* @param data_callback Called when calibration data is ready
*/
explicit CalibrationConfigurationJob(const std::string& host, u16 port, u8 pad_index,
u32 client_id, std::function<void(Status)> status_callback,
std::function<void(u16, u16, u16, u16)> data_callback);
~CalibrationConfigurationJob();
void Stop();
private:
Common::Event complete_event;
};
void TestCommunication(const std::string& host, u16 port, u8 pad_index, u32 client_id,
std::function<void()> success_callback,
std::function<void()> failure_callback);
} // namespace InputCommon::CemuhookUDP

View File

@@ -0,0 +1,79 @@
// Copyright 2018 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <cstddef>
#include <cstring>
#include "common/logging/log.h"
#include "input_common/udp/protocol.h"
namespace InputCommon::CemuhookUDP {
static constexpr std::size_t GetSizeOfResponseType(Type t) {
switch (t) {
case Type::Version:
return sizeof(Response::Version);
case Type::PortInfo:
return sizeof(Response::PortInfo);
case Type::PadData:
return sizeof(Response::PadData);
}
return 0;
}
namespace Response {
/**
* Returns the packet Type if the packet is valid, std::nullopt otherwise
*
* Note: Modifies the buffer to zero out the crc (since that's the easiest way to check without
* copying the buffer)
*/
std::optional<Type> Validate(u8* data, std::size_t size) {
if (size < sizeof(Header)) {
LOG_DEBUG(Input, "Invalid UDP packet received");
return std::nullopt;
}
Header header{};
std::memcpy(&header, data, sizeof(Header));
if (header.magic != SERVER_MAGIC) {
LOG_ERROR(Input, "UDP Packet has an unexpected magic value");
return std::nullopt;
}
if (header.protocol_version != PROTOCOL_VERSION) {
LOG_ERROR(Input, "UDP Packet protocol mismatch");
return std::nullopt;
}
if (header.type < Type::Version || header.type > Type::PadData) {
LOG_ERROR(Input, "UDP Packet is an unknown type");
return std::nullopt;
}
// Packet size must equal sizeof(Header) + sizeof(Data)
// and also verify that the packet info mentions the correct size. Since the spec includes the
// type of the packet as part of the data, we need to include it in size calculations here
// ie: payload_length == sizeof(T) + sizeof(Type)
const std::size_t data_len = GetSizeOfResponseType(header.type);
if (header.payload_length != data_len + sizeof(Type) || size < data_len + sizeof(Header)) {
LOG_ERROR(
Input,
"UDP Packet payload length doesn't match. Received: {} PayloadLength: {} Expected: {}",
size, header.payload_length, data_len + sizeof(Type));
return std::nullopt;
}
const u32 crc32 = header.crc;
boost::crc_32_type result;
// zero out the crc in the buffer and then run the crc against it
std::memset(&data[offsetof(Header, crc)], 0, sizeof(u32_le));
result.process_bytes(data, data_len + sizeof(Header));
if (crc32 != result.checksum()) {
LOG_ERROR(Input, "UDP Packet CRC check failed. Offset: {}", offsetof(Header, crc));
return std::nullopt;
}
return header.type;
}
} // namespace Response
} // namespace InputCommon::CemuhookUDP
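
Validate's CRC check relies on the sender having computed the checksum with the crc field zeroed, so the receiver zeroes it again before recomputing. A minimal sketch of just that step, assuming data holds at least size bytes and the CRC lives at crc_offset:

    #include <cstdint>
    #include <cstring>
    #include <boost/crc.hpp>

    bool CrcMatches(std::uint8_t* data, std::size_t size, std::size_t crc_offset) {
        std::uint32_t expected;
        std::memcpy(&expected, data + crc_offset, sizeof(expected));

        // Zero the stored CRC, then hash the whole packet the same way the sender did.
        std::memset(data + crc_offset, 0, sizeof(expected));
        boost::crc_32_type crc;
        crc.process_bytes(data, size);
        return crc.checksum() == expected;
    }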

View File

@@ -0,0 +1,256 @@
// Copyright 2018 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <array>
#include <optional>
#include <type_traits>
#include <vector>
#include <boost/crc.hpp>
#include "common/bit_field.h"
#include "common/swap.h"
namespace InputCommon::CemuhookUDP {
constexpr std::size_t MAX_PACKET_SIZE = 100;
constexpr u16 PROTOCOL_VERSION = 1001;
constexpr u32 CLIENT_MAGIC = 0x43555344; // DSUC (but flipped for LE)
constexpr u32 SERVER_MAGIC = 0x53555344; // DSUS (but flipped for LE)
enum class Type : u32 {
Version = 0x00100000,
PortInfo = 0x00100001,
PadData = 0x00100002,
};
struct Header {
u32_le magic{};
u16_le protocol_version{};
u16_le payload_length{};
u32_le crc{};
u32_le id{};
///> In the protocol, the type of the packet is not part of the header, but it's convenient to
///> include it in the header so the callee doesn't have to duplicate the type when building
///> the data
Type type{};
};
static_assert(sizeof(Header) == 20, "UDP Message Header struct has wrong size");
static_assert(std::is_trivially_copyable_v<Header>, "UDP Message Header is not trivially copyable");
using MacAddress = std::array<u8, 6>;
constexpr MacAddress EMPTY_MAC_ADDRESS = {0, 0, 0, 0, 0, 0};
#pragma pack(push, 1)
template <typename T>
struct Message {
Header header{};
T data;
};
#pragma pack(pop)
template <typename T>
constexpr Type GetMessageType();
namespace Request {
struct Version {};
/**
* Requests the server to send information about what controllers are plugged into the ports
* In citra's case, we only have one controller, so for simplicity's sake, we can just send a
* request explicitly for the first controller port and leave it at that. In the future it would be
* nice to make this configurable
*/
constexpr u32 MAX_PORTS = 4;
struct PortInfo {
u32_le pad_count{}; ///> Number of ports to request data for
std::array<u8, MAX_PORTS> port;
};
static_assert(std::is_trivially_copyable_v<PortInfo>,
"UDP Request PortInfo is not trivially copyable");
/**
* Request the latest pad information from the server. If the server hasn't received this message
* from the client in a reasonable time frame, the server will stop sending updates. The default
* timeout seems to be 5 seconds.
*/
struct PadData {
enum class Flags : u8 {
AllPorts,
Id,
Mac,
};
/// Determines which method will be used as a look up for the controller
Flags flags{};
/// Index of the port of the controller to retrieve data about
u8 port_id{};
/// Mac address of the controller to retrieve data about
MacAddress mac;
};
static_assert(sizeof(PadData) == 8, "UDP Request PadData struct has wrong size");
static_assert(std::is_trivially_copyable_v<PadData>,
"UDP Request PadData is not trivially copyable");
/**
* Creates a message with the proper header data that can be sent to the server.
* @param data Request body to send
* @param client_id ID of the udp client (usually not checked on the server)
*/
template <typename T>
Message<T> Create(const T data, const u32 client_id = 0) {
boost::crc_32_type crc;
Header header{
CLIENT_MAGIC, PROTOCOL_VERSION, sizeof(T) + sizeof(Type), 0, client_id, GetMessageType<T>(),
};
Message<T> message{header, data};
crc.process_bytes(&message, sizeof(Message<T>));
message.header.crc = crc.checksum();
return message;
}
} // namespace Request
namespace Response {
struct Version {
u16_le version{};
};
static_assert(sizeof(Version) == 2, "UDP Response Version struct has wrong size");
static_assert(std::is_trivially_copyable_v<Version>,
"UDP Response Version is not trivially copyable");
struct PortInfo {
u8 id{};
u8 state{};
u8 model{};
u8 connection_type{};
MacAddress mac;
u8 battery{};
u8 is_pad_active{};
};
static_assert(sizeof(PortInfo) == 12, "UDP Response PortInfo struct has wrong size");
static_assert(std::is_trivially_copyable_v<PortInfo>,
"UDP Response PortInfo is not trivially copyable");
#pragma pack(push, 1)
struct PadData {
PortInfo info{};
u32_le packet_counter{};
u16_le digital_button{};
// The following union isn't trivially copyable but we don't use this input anyway.
// union DigitalButton {
// u16_le button;
// BitField<0, 1, u16> button_1; // Share
// BitField<1, 1, u16> button_2; // L3
// BitField<2, 1, u16> button_3; // R3
// BitField<3, 1, u16> button_4; // Options
// BitField<4, 1, u16> button_5; // Up
// BitField<5, 1, u16> button_6; // Right
// BitField<6, 1, u16> button_7; // Down
// BitField<7, 1, u16> button_8; // Left
// BitField<8, 1, u16> button_9; // L2
// BitField<9, 1, u16> button_10; // R2
// BitField<10, 1, u16> button_11; // L1
// BitField<11, 1, u16> button_12; // R1
// BitField<12, 1, u16> button_13; // Triangle
// BitField<13, 1, u16> button_14; // Circle
// BitField<14, 1, u16> button_15; // Cross
// BitField<15, 1, u16> button_16; // Square
// } digital_button;
u8 home;
/// If the device supports a "click" on the touchpad, this will change to 1 when a click happens
u8 touch_hard_press{};
u8 left_stick_x{};
u8 left_stick_y{};
u8 right_stick_x{};
u8 right_stick_y{};
struct AnalogButton {
u8 button_8{};
u8 button_7{};
u8 button_6{};
u8 button_5{};
u8 button_12{};
u8 button_11{};
u8 button_10{};
u8 button_9{};
u8 button_16{};
u8 button_15{};
u8 button_14{};
u8 button_13{};
} analog_button;
struct TouchPad {
u8 is_active{};
u8 id{};
u16_le x{};
u16_le y{};
} touch_1, touch_2;
u64_le motion_timestamp;
struct Accelerometer {
float x{};
float y{};
float z{};
} accel;
struct Gyroscope {
float pitch{};
float yaw{};
float roll{};
} gyro;
};
#pragma pack(pop)
static_assert(sizeof(PadData) == 80, "UDP Response PadData struct has wrong size ");
static_assert(std::is_trivially_copyable_v<PadData>,
"UDP Response PadData is not trivially copyable");
static_assert(sizeof(Message<PadData>) == MAX_PACKET_SIZE,
"UDP MAX_PACKET_SIZE is no longer larger than Message<PadData>");
static_assert(sizeof(PadData::AnalogButton) == 12,
"UDP Response AnalogButton struct has wrong size ");
static_assert(sizeof(PadData::TouchPad) == 6, "UDP Response TouchPad struct has wrong size ");
static_assert(sizeof(PadData::Accelerometer) == 12,
"UDP Response Accelerometer struct has wrong size ");
static_assert(sizeof(PadData::Gyroscope) == 12, "UDP Response Gyroscope struct has wrong size ");
/**
* Validates a response packet received from the server
* @param data Array of bytes sent from the server
* @param size Number of bytes received
* @return std::nullopt if it failed to parse, or the packet Type if it succeeded. The client can
* then safely copy the data into the appropriate struct for that Type
*/
std::optional<Type> Validate(u8* data, std::size_t size);
} // namespace Response
template <>
constexpr Type GetMessageType<Request::Version>() {
return Type::Version;
}
template <>
constexpr Type GetMessageType<Request::PortInfo>() {
return Type::PortInfo;
}
template <>
constexpr Type GetMessageType<Request::PadData>() {
return Type::PadData;
}
template <>
constexpr Type GetMessageType<Response::Version>() {
return Type::Version;
}
template <>
constexpr Type GetMessageType<Response::PortInfo>() {
return Type::PortInfo;
}
template <>
constexpr Type GetMessageType<Response::PadData>() {
return Type::PadData;
}
} // namespace InputCommon::CemuhookUDP
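
A short usage sketch of the request helpers above, mirroring what the client sends periodically; the concrete values (client id, pad index) are illustrative, and it assumes this header is available as input_common/udp/protocol.h:

    #include "input_common/udp/protocol.h"

    using namespace InputCommon::CemuhookUDP;

    void BuildRequests() {
        const u32 client_id = 24872;  // arbitrary identifier for this client

        // Ask the server which controllers are plugged in (pad 0 only).
        Request::PortInfo port_info{1, {0, 0, 0, 0}};
        const auto port_message = Request::Create(port_info, client_id);

        // Ask for pad data for pad 0, selected by id rather than MAC address.
        Request::PadData pad_data{Request::PadData::Flags::Id, 0, EMPTY_MAC_ADDRESS};
        const auto pad_message = Request::Create(pad_data, client_id);

        // Both messages now carry a filled-in Header: CLIENT_MAGIC, PROTOCOL_VERSION,
        // payload length, a CRC32 over the whole message, the client id and the type.
        static_cast<void>(port_message);
        static_cast<void>(pad_message);
    }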

View File

@@ -0,0 +1,96 @@
// Copyright 2018 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/logging/log.h"
#include "common/param_package.h"
#include "core/frontend/input.h"
#include "core/settings.h"
#include "input_common/udp/client.h"
#include "input_common/udp/udp.h"
namespace InputCommon::CemuhookUDP {
class UDPTouchDevice final : public Input::TouchDevice {
public:
explicit UDPTouchDevice(std::shared_ptr<DeviceStatus> status_) : status(std::move(status_)) {}
std::tuple<float, float, bool> GetStatus() const {
std::lock_guard guard(status->update_mutex);
return status->touch_status;
}
private:
std::shared_ptr<DeviceStatus> status;
};
class UDPMotionDevice final : public Input::MotionDevice {
public:
explicit UDPMotionDevice(std::shared_ptr<DeviceStatus> status_) : status(std::move(status_)) {}
std::tuple<Common::Vec3<float>, Common::Vec3<float>> GetStatus() const {
std::lock_guard guard(status->update_mutex);
return status->motion_status;
}
private:
std::shared_ptr<DeviceStatus> status;
};
class UDPTouchFactory final : public Input::Factory<Input::TouchDevice> {
public:
explicit UDPTouchFactory(std::shared_ptr<DeviceStatus> status_) : status(std::move(status_)) {}
std::unique_ptr<Input::TouchDevice> Create(const Common::ParamPackage& params) override {
{
std::lock_guard guard(status->update_mutex);
status->touch_calibration.emplace();
// These default values work well for DS4 but probably not other touch inputs
status->touch_calibration->min_x = params.Get("min_x", 100);
status->touch_calibration->min_y = params.Get("min_y", 50);
status->touch_calibration->max_x = params.Get("max_x", 1800);
status->touch_calibration->max_y = params.Get("max_y", 850);
}
return std::make_unique<UDPTouchDevice>(status);
}
private:
std::shared_ptr<DeviceStatus> status;
};
class UDPMotionFactory final : public Input::Factory<Input::MotionDevice> {
public:
explicit UDPMotionFactory(std::shared_ptr<DeviceStatus> status_) : status(std::move(status_)) {}
std::unique_ptr<Input::MotionDevice> Create(const Common::ParamPackage& params) override {
return std::make_unique<UDPMotionDevice>(status);
}
private:
std::shared_ptr<DeviceStatus> status;
};
State::State() {
auto status = std::make_shared<DeviceStatus>();
client =
std::make_unique<Client>(status, Settings::values.udp_input_address,
Settings::values.udp_input_port, Settings::values.udp_pad_index);
Input::RegisterFactory<Input::TouchDevice>("cemuhookudp",
std::make_shared<UDPTouchFactory>(status));
Input::RegisterFactory<Input::MotionDevice>("cemuhookudp",
std::make_shared<UDPMotionFactory>(status));
}
State::~State() {
Input::UnregisterFactory<Input::TouchDevice>("cemuhookudp");
Input::UnregisterFactory<Input::MotionDevice>("cemuhookudp");
}
void State::ReloadUDPClient() {
client->ReloadSocket(Settings::values.udp_input_address, Settings::values.udp_input_port,
Settings::values.udp_pad_index);
}
std::unique_ptr<State> Init() {
return std::make_unique<State>();
}
} // namespace InputCommon::CemuhookUDP

View File

@@ -0,0 +1,27 @@
// Copyright 2018 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <memory>
#include <unordered_map>
#include "input_common/main.h"
#include "input_common/udp/client.h"
namespace InputCommon::CemuhookUDP {
class UDPTouchDevice;
class UDPMotionDevice;
class State {
public:
State();
~State();
void ReloadUDPClient();
private:
std::unique_ptr<Client> client;
};
std::unique_ptr<State> Init();
} // namespace InputCommon::CemuhookUDP

View File

@@ -29,6 +29,8 @@ add_library(video_core STATIC
gpu_synch.h
gpu_thread.cpp
gpu_thread.h
guest_driver.cpp
guest_driver.h
macro_interpreter.cpp
macro_interpreter.h
memory_manager.cpp
@@ -153,14 +155,31 @@ if (ENABLE_VULKAN)
renderer_vulkan/fixed_pipeline_state.h
renderer_vulkan/maxwell_to_vk.cpp
renderer_vulkan/maxwell_to_vk.h
renderer_vulkan/renderer_vulkan.h
renderer_vulkan/vk_blit_screen.cpp
renderer_vulkan/vk_blit_screen.h
renderer_vulkan/vk_buffer_cache.cpp
renderer_vulkan/vk_buffer_cache.h
renderer_vulkan/vk_compute_pass.cpp
renderer_vulkan/vk_compute_pass.h
renderer_vulkan/vk_compute_pipeline.cpp
renderer_vulkan/vk_compute_pipeline.h
renderer_vulkan/vk_descriptor_pool.cpp
renderer_vulkan/vk_descriptor_pool.h
renderer_vulkan/vk_device.cpp
renderer_vulkan/vk_device.h
renderer_vulkan/vk_graphics_pipeline.cpp
renderer_vulkan/vk_graphics_pipeline.h
renderer_vulkan/vk_image.cpp
renderer_vulkan/vk_image.h
renderer_vulkan/vk_memory_manager.cpp
renderer_vulkan/vk_memory_manager.h
renderer_vulkan/vk_pipeline_cache.cpp
renderer_vulkan/vk_pipeline_cache.h
renderer_vulkan/vk_rasterizer.cpp
renderer_vulkan/vk_rasterizer.h
renderer_vulkan/vk_renderpass_cache.cpp
renderer_vulkan/vk_renderpass_cache.h
renderer_vulkan/vk_resource_manager.cpp
renderer_vulkan/vk_resource_manager.h
renderer_vulkan/vk_sampler_cache.cpp
@@ -169,12 +188,19 @@ if (ENABLE_VULKAN)
renderer_vulkan/vk_scheduler.h
renderer_vulkan/vk_shader_decompiler.cpp
renderer_vulkan/vk_shader_decompiler.h
renderer_vulkan/vk_shader_util.cpp
renderer_vulkan/vk_shader_util.h
renderer_vulkan/vk_staging_buffer_pool.cpp
renderer_vulkan/vk_staging_buffer_pool.h
renderer_vulkan/vk_stream_buffer.cpp
renderer_vulkan/vk_stream_buffer.h
renderer_vulkan/vk_swapchain.cpp
renderer_vulkan/vk_swapchain.h)
renderer_vulkan/vk_swapchain.h
renderer_vulkan/vk_texture_cache.cpp
renderer_vulkan/vk_texture_cache.h
renderer_vulkan/vk_update_descriptor.cpp
renderer_vulkan/vk_update_descriptor.h
)
target_include_directories(video_core PRIVATE sirit ../../externals/Vulkan-Headers/include)
target_compile_definitions(video_core PRIVATE HAS_VULKAN)

View File

@@ -9,6 +9,7 @@
#include "common/common_types.h"
#include "video_core/engines/shader_bytecode.h"
#include "video_core/engines/shader_type.h"
#include "video_core/guest_driver.h"
#include "video_core/textures/texture.h"
namespace Tegra::Engines {
@@ -106,6 +107,9 @@ public:
virtual SamplerDescriptor AccessBindlessSampler(ShaderType stage, u64 const_buffer,
u64 offset) const = 0;
virtual u32 GetBoundBuffer() const = 0;
virtual VideoCore::GuestDriverProfile& AccessGuestDriverProfile() = 0;
virtual const VideoCore::GuestDriverProfile& AccessGuestDriverProfile() const = 0;
};
} // namespace Tegra::Engines

View File

@@ -94,6 +94,14 @@ SamplerDescriptor KeplerCompute::AccessBindlessSampler(ShaderType stage, u64 con
return result;
}
VideoCore::GuestDriverProfile& KeplerCompute::AccessGuestDriverProfile() {
return rasterizer.AccessGuestDriverProfile();
}
const VideoCore::GuestDriverProfile& KeplerCompute::AccessGuestDriverProfile() const {
return rasterizer.AccessGuestDriverProfile();
}
void KeplerCompute::ProcessLaunch() {
const GPUVAddr launch_desc_loc = regs.launch_desc_loc.Address();
memory_manager.ReadBlockUnsafe(launch_desc_loc, &launch_description,

View File

@@ -218,6 +218,10 @@ public:
return regs.tex_cb_index;
}
VideoCore::GuestDriverProfile& AccessGuestDriverProfile() override;
const VideoCore::GuestDriverProfile& AccessGuestDriverProfile() const override;
private:
Core::System& system;
VideoCore::RasterizerInterface& rasterizer;

View File

@@ -91,6 +91,7 @@ void Maxwell3D::InitializeRegisterDefaults() {
regs.rasterize_enable = 1;
regs.rt_separate_frag_data = 1;
regs.framebuffer_srgb = 1;
regs.cull.front_face = Maxwell3D::Regs::Cull::FrontFace::ClockWise;
mme_inline[MAXWELL3D_REG_INDEX(draw.vertex_end_gl)] = true;
mme_inline[MAXWELL3D_REG_INDEX(draw.vertex_begin_gl)] = true;
@@ -783,4 +784,12 @@ SamplerDescriptor Maxwell3D::AccessBindlessSampler(ShaderType stage, u64 const_b
return result;
}
VideoCore::GuestDriverProfile& Maxwell3D::AccessGuestDriverProfile() {
return rasterizer.AccessGuestDriverProfile();
}
const VideoCore::GuestDriverProfile& Maxwell3D::AccessGuestDriverProfile() const {
return rasterizer.AccessGuestDriverProfile();
}
} // namespace Tegra::Engines

View File

@@ -1018,7 +1018,14 @@ public:
}
} instanced_arrays;
INSERT_UNION_PADDING_WORDS(0x6);
INSERT_UNION_PADDING_WORDS(0x4);
union {
BitField<0, 1, u32> enable;
BitField<4, 8, u32> unk4;
} vp_point_size;
INSERT_UNION_PADDING_WORDS(1);
Cull cull;
@@ -1271,8 +1278,6 @@ public:
} dirty{};
std::array<u8, Regs::NUM_REGS> dirty_pointers{};
/// Reads a register value located at the input method address
u32 GetRegisterValue(u32 method) const;
@@ -1301,6 +1306,10 @@ public:
return regs.tex_cb_index;
}
VideoCore::GuestDriverProfile& AccessGuestDriverProfile() override;
const VideoCore::GuestDriverProfile& AccessGuestDriverProfile() const override;
/// Memory for macro code - it's undetermined how big this is, however 1MB is much larger than
/// we've seen used.
using MacroMemory = std::array<u32, 0x40000>;
@@ -1367,6 +1376,8 @@ private:
bool execute_on{true};
std::array<u8, Regs::NUM_REGS> dirty_pointers{};
/// Retrieves information about a specific TIC entry from the TIC buffer.
Texture::TICEntry GetTICEntry(u32 tic_index) const;
@@ -1503,6 +1514,7 @@ ASSERT_REG_POSITION(primitive_restart, 0x591);
ASSERT_REG_POSITION(index_array, 0x5F2);
ASSERT_REG_POSITION(polygon_offset_clamp, 0x61F);
ASSERT_REG_POSITION(instanced_arrays, 0x620);
ASSERT_REG_POSITION(vp_point_size, 0x644);
ASSERT_REG_POSITION(cull, 0x646);
ASSERT_REG_POSITION(pixel_center_integer, 0x649);
ASSERT_REG_POSITION(viewport_transform_enabled, 0x64B);

View File

@@ -215,6 +215,40 @@ enum class F2fRoundingOp : u64 {
Trunc = 11,
};
enum class AtomicOp : u64 {
Add = 0,
Min = 1,
Max = 2,
Inc = 3,
Dec = 4,
And = 5,
Or = 6,
Xor = 7,
Exch = 8,
};
enum class GlobalAtomicOp : u64 {
Add = 0,
Min = 1,
Max = 2,
Inc = 3,
Dec = 4,
And = 5,
Or = 6,
Xor = 7,
Exch = 8,
SafeAdd = 10,
};
enum class GlobalAtomicType : u64 {
U32 = 0,
S32 = 1,
U64 = 2,
F32_FTZ_RN = 3,
F16x2_FTZ_RN = 4,
S64 = 5,
};
enum class UniformType : u64 {
UnsignedByte = 0,
SignedByte = 1,
@@ -236,6 +270,13 @@ enum class StoreType : u64 {
Bits128 = 6,
};
enum class AtomicType : u64 {
U32 = 0,
S32 = 1,
U64 = 2,
S64 = 3,
};
enum class IMinMaxExchange : u64 {
None = 0,
XLo = 1,
@@ -938,6 +979,22 @@ union Instruction {
BitField<46, 2, u64> cache_mode;
} stg;
union {
BitField<52, 4, GlobalAtomicOp> operation;
BitField<49, 3, GlobalAtomicType> type;
BitField<28, 20, s64> offset;
} atom;
union {
BitField<52, 4, AtomicOp> operation;
BitField<28, 2, AtomicType> type;
BitField<30, 22, s64> offset;
s32 GetImmediateOffset() const {
return static_cast<s32>(offset << 2);
}
} atoms;
union {
BitField<32, 1, PhysicalAttributeDirection> direction;
BitField<47, 3, AttributeSize> size;
@@ -1659,9 +1716,11 @@ public:
ST_A,
ST_L,
ST_S,
ST, // Store in generic memory
STG, // Store in global memory
AL2P, // Transforms attribute memory into physical memory
ST, // Store in generic memory
STG, // Store in global memory
ATOM, // Atomic operation on global memory
ATOMS, // Atomic operation on shared memory
AL2P, // Transforms attribute memory into physical memory
TEX,
TEX_B, // Texture Load Bindless
TXQ, // Texture Query
@@ -1964,6 +2023,8 @@ private:
INST("1110111101010---", Id::ST_L, Type::Memory, "ST_L"),
INST("101-------------", Id::ST, Type::Memory, "ST"),
INST("1110111011011---", Id::STG, Type::Memory, "STG"),
INST("11101101--------", Id::ATOM, Type::Memory, "ATOM"),
INST("11101100--------", Id::ATOMS, Type::Memory, "ATOMS"),
INST("1110111110100---", Id::AL2P, Type::Memory, "AL2P"),
INST("110000----111---", Id::TEX, Type::Texture, "TEX"),
INST("1101111010111---", Id::TEX_B, Type::Texture, "TEX_B"),

View File

@@ -66,19 +66,20 @@ const DmaPusher& GPU::DmaPusher() const {
return *dma_pusher;
}
void GPU::WaitFence(u32 syncpoint_id, u32 value) const {
void GPU::WaitFence(u32 syncpoint_id, u32 value) {
// Synced GPU, is always in sync
if (!is_async) {
return;
}
MICROPROFILE_SCOPE(GPU_wait);
while (syncpoints[syncpoint_id].load(std::memory_order_relaxed) < value) {
}
std::unique_lock lock{sync_mutex};
sync_cv.wait(lock, [=]() { return syncpoints[syncpoint_id].load() >= value; });
}
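
WaitFence now sleeps on a condition variable guarded by sync_mutex instead of spinning on the atomic syncpoint. A minimal sketch of the pattern with a toy Fence type (here the increment happens under the lock for simplicity; the GPU code bumps the atomic first and then notifies):

    #include <atomic>
    #include <condition_variable>
    #include <cstdint>
    #include <mutex>

    struct Fence {
        std::atomic<std::uint32_t> value{0};
        std::mutex mutex;
        std::condition_variable cv;

        void Wait(std::uint32_t target) {
            std::unique_lock lock{mutex};
            // Blocks until the predicate holds; no busy-waiting.
            cv.wait(lock, [&] { return value.load() >= target; });
        }

        void Signal() {
            {
                std::lock_guard lock{mutex};
                ++value;
            }
            cv.notify_all();
        }
    };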
void GPU::IncrementSyncPoint(const u32 syncpoint_id) {
syncpoints[syncpoint_id]++;
std::lock_guard lock{sync_mutex};
sync_cv.notify_all();
if (!syncpt_interrupts[syncpoint_id].empty()) {
u32 value = syncpoints[syncpoint_id].load();
auto it = syncpt_interrupts[syncpoint_id].begin();

View File

@@ -6,6 +6,7 @@
#include <array>
#include <atomic>
#include <condition_variable>
#include <list>
#include <memory>
#include <mutex>
@@ -181,7 +182,7 @@ public:
virtual void WaitIdle() const = 0;
/// Allows the CPU/NvFlinger to wait on the GPU before presenting a frame.
void WaitFence(u32 syncpoint_id, u32 value) const;
void WaitFence(u32 syncpoint_id, u32 value);
void IncrementSyncPoint(u32 syncpoint_id);
@@ -312,6 +313,8 @@ private:
std::mutex sync_mutex;
std::condition_variable sync_cv;
const bool is_async;
};

View File

@@ -0,0 +1,36 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <algorithm>
#include <limits>
#include "video_core/guest_driver.h"
namespace VideoCore {
void GuestDriverProfile::DeduceTextureHandlerSize(std::vector<u32>&& bound_offsets) {
if (texture_handler_size_deduced) {
return;
}
const std::size_t size = bound_offsets.size();
if (size < 2) {
return;
}
std::sort(bound_offsets.begin(), bound_offsets.end(), std::less{});
u32 min_val = std::numeric_limits<u32>::max();
for (std::size_t i = 1; i < size; ++i) {
if (bound_offsets[i] == bound_offsets[i - 1]) {
continue;
}
const u32 new_min = bound_offsets[i] - bound_offsets[i - 1];
min_val = std::min(min_val, new_min);
}
if (min_val > 2) {
return;
}
texture_handler_size_deduced = true;
texture_handler_size = min_texture_handler_size * min_val;
}
} // namespace VideoCore
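
DeduceTextureHandlerSize estimates the stride between texture handles as the smallest positive gap between the constant-buffer offsets a shader actually reads, then scales that gap by the 4-byte minimum handler size and rejects gaps larger than 2. The core of the deduction as a standalone helper (illustrative name; the real function also bails out when there are fewer than two distinct offsets):

    #include <algorithm>
    #include <cstdint>
    #include <limits>
    #include <vector>

    // Smallest positive difference between the sorted offsets, e.g. {4, 6, 8, 8} -> 2.
    std::uint32_t MinOffsetGap(std::vector<std::uint32_t> offsets) {
        std::sort(offsets.begin(), offsets.end());
        std::uint32_t min_gap = std::numeric_limits<std::uint32_t>::max();
        for (std::size_t i = 1; i < offsets.size(); ++i) {
            if (offsets[i] != offsets[i - 1]) {
                min_gap = std::min(min_gap, offsets[i] - offsets[i - 1]);
            }
        }
        return min_gap;  // DeduceTextureHandlerSize multiplies this by the 4-byte minimum
    }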

View File

@@ -0,0 +1,41 @@
// Copyright 2020 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <vector>
#include "common/common_types.h"
namespace VideoCore {
/**
* The GuestDriverProfile class is used to learn about the guest GPU driver's behavior and to
* collect the information required by HLE heuristics (such as shader tracking) that cannot be
* avoided, since the underlying problems are otherwise undecidable.
*/
class GuestDriverProfile {
public:
void DeduceTextureHandlerSize(std::vector<u32>&& bound_offsets);
u32 GetTextureHandlerSize() const {
return texture_handler_size;
}
bool TextureHandlerSizeKnown() const {
return texture_handler_size_deduced;
}
private:
// Minimum size of texture handler any driver can use.
static constexpr u32 min_texture_handler_size = 4;
// This follows the Vulkan and OpenGL standards, but Nvidia GPUs can easily
// use 4 bytes instead. Thus, certain drivers may shrink the size.
static constexpr u32 default_texture_handler_size = 8;
u32 texture_handler_size = default_texture_handler_size;
bool texture_handler_size_deduced = false;
};
} // namespace VideoCore

View File

@@ -9,6 +9,7 @@
#include "common/common_types.h"
#include "video_core/engines/fermi_2d.h"
#include "video_core/gpu.h"
#include "video_core/guest_driver.h"
namespace Tegra {
class MemoryManager;
@@ -78,5 +79,18 @@ public:
/// Initialize disk cached resources for the game being emulated
virtual void LoadDiskResources(const std::atomic_bool& stop_loading = false,
const DiskResourceLoadCallback& callback = {}) {}
/// Grant access to the Guest Driver Profile for recording/obtaining info on the guest driver.
GuestDriverProfile& AccessGuestDriverProfile() {
return guest_driver_profile;
}
/// Grant access to the Guest Driver Profile for recording/obtaining info on the guest driver.
const GuestDriverProfile& AccessGuestDriverProfile() const {
return guest_driver_profile;
}
private:
GuestDriverProfile guest_driver_profile{};
};
} // namespace VideoCore

View File

@@ -55,16 +55,20 @@ namespace {
template <typename Engine, typename Entry>
Tegra::Texture::FullTextureInfo GetTextureInfo(const Engine& engine, const Entry& entry,
Tegra::Engines::ShaderType shader_type) {
Tegra::Engines::ShaderType shader_type,
std::size_t index = 0) {
if (entry.IsBindless()) {
const Tegra::Texture::TextureHandle tex_handle =
engine.AccessConstBuffer32(shader_type, entry.GetBuffer(), entry.GetOffset());
return engine.GetTextureInfo(tex_handle);
}
const auto& gpu_profile = engine.AccessGuestDriverProfile();
const u32 offset =
entry.GetOffset() + static_cast<u32>(index * gpu_profile.GetTextureHandlerSize());
if constexpr (std::is_same_v<Engine, Tegra::Engines::Maxwell3D>) {
return engine.GetStageTexture(shader_type, entry.GetOffset());
return engine.GetStageTexture(shader_type, offset);
} else {
return engine.GetTexture(entry.GetOffset());
return engine.GetTexture(offset);
}
}
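
For indexed samplers, consecutive texture handles sit at a fixed stride inside the constant buffer, so the i-th handle is read from GetOffset() + i * GetTextureHandlerSize(). A tiny sketch of that addressing (illustrative helper name; the 4- or 8-byte stride comes from the deduced GuestDriverProfile):

    #include <cstdint>
    #include <vector>

    std::vector<std::uint32_t> HandleOffsets(std::uint32_t base_offset, std::size_t count,
                                             std::uint32_t handler_size) {
        std::vector<std::uint32_t> offsets;
        offsets.reserve(count);
        for (std::size_t i = 0; i < count; ++i) {
            offsets.push_back(base_offset + static_cast<std::uint32_t>(i) * handler_size);
        }
        return offsets;
    }

    // HandleOffsets(0x20, 3, 8) -> {0x20, 0x28, 0x30}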
@@ -942,8 +946,15 @@ void RasterizerOpenGL::SetupDrawTextures(std::size_t stage_index, const Shader&
u32 binding = device.GetBaseBindings(stage_index).sampler;
for (const auto& entry : shader->GetShaderEntries().samplers) {
const auto shader_type = static_cast<Tegra::Engines::ShaderType>(stage_index);
const auto texture = GetTextureInfo(maxwell3d, entry, shader_type);
SetupTexture(binding++, texture, entry);
if (!entry.IsIndexed()) {
const auto texture = GetTextureInfo(maxwell3d, entry, shader_type);
SetupTexture(binding++, texture, entry);
} else {
for (std::size_t i = 0; i < entry.Size(); ++i) {
const auto texture = GetTextureInfo(maxwell3d, entry, shader_type, i);
SetupTexture(binding++, texture, entry);
}
}
}
}
@@ -952,8 +963,17 @@ void RasterizerOpenGL::SetupComputeTextures(const Shader& kernel) {
const auto& compute = system.GPU().KeplerCompute();
u32 binding = 0;
for (const auto& entry : kernel->GetShaderEntries().samplers) {
const auto texture = GetTextureInfo(compute, entry, Tegra::Engines::ShaderType::Compute);
SetupTexture(binding++, texture, entry);
if (!entry.IsIndexed()) {
const auto texture =
GetTextureInfo(compute, entry, Tegra::Engines::ShaderType::Compute);
SetupTexture(binding++, texture, entry);
} else {
for (std::size_t i = 0; i < entry.Size(); ++i) {
const auto texture =
GetTextureInfo(compute, entry, Tegra::Engines::ShaderType::Compute, i);
SetupTexture(binding++, texture, entry);
}
}
}
}
@@ -1272,6 +1292,7 @@ void RasterizerOpenGL::SyncPointState() {
const auto& regs = system.GPU().Maxwell3D().regs;
// Limit the point size to 1 since nouveau sometimes sets a point size of 0 (and that's invalid
// in OpenGL).
state.point.program_control = regs.vp_point_size.enable != 0;
state.point.size = std::max(1.0f, regs.point_size);
}

View File

@@ -34,9 +34,6 @@ using VideoCommon::Shader::ShaderIR;
namespace {
// One UBO is always reserved for emulation values on staged shaders
constexpr u32 STAGE_RESERVED_UBOS = 1;
constexpr u32 STAGE_MAIN_OFFSET = 10;
constexpr u32 KERNEL_MAIN_OFFSET = 0;
@@ -217,6 +214,7 @@ std::unique_ptr<ConstBufferLocker> MakeLocker(Core::System& system, ShaderType s
}
void FillLocker(ConstBufferLocker& locker, const ShaderDiskCacheUsage& usage) {
locker.SetBoundBuffer(usage.bound_buffer);
for (const auto& key : usage.keys) {
const auto [buffer, offset] = key.first;
locker.InsertKey(buffer, offset, key.second);
@@ -243,7 +241,6 @@ CachedProgram BuildShader(const Device& device, u64 unique_identifier, ShaderTyp
if (!code_b.empty()) {
ir_b.emplace(code_b, main_offset, COMPILER_SETTINGS, locker);
}
const auto entries = GLShader::GetEntries(ir);
std::string source = fmt::format(R"(// {}
#version 430 core
@@ -264,6 +261,10 @@ CachedProgram BuildShader(const Device& device, u64 unique_identifier, ShaderTyp
"#extension GL_NV_shader_thread_group : require\n"
"#extension GL_NV_shader_thread_shuffle : require\n";
}
// This pragma stops Nvidia's driver from over-optimizing math (probably by using fp16
// operations) in places where we don't want it to.
// Thanks to Ryujinx for finding this workaround.
source += "#pragma optionNV(fastmath off)\n";
if (shader_type == ShaderType::Geometry) {
const auto [glsl_topology, max_vertices] = GetPrimitiveDescription(variant.primitive_mode);
@@ -314,9 +315,10 @@ std::unordered_set<GLenum> GetSupportedFormats() {
CachedShader::CachedShader(const ShaderParameters& params, ShaderType shader_type,
GLShader::ShaderEntries entries, ProgramCode code, ProgramCode code_b)
: RasterizerCacheObject{params.host_ptr}, system{params.system}, disk_cache{params.disk_cache},
device{params.device}, cpu_addr{params.cpu_addr}, unique_identifier{params.unique_identifier},
shader_type{shader_type}, entries{entries}, code{std::move(code)}, code_b{std::move(code_b)} {
: RasterizerCacheObject{params.host_ptr}, system{params.system},
disk_cache{params.disk_cache}, device{params.device}, cpu_addr{params.cpu_addr},
unique_identifier{params.unique_identifier}, shader_type{shader_type},
entries{std::move(entries)}, code{std::move(code)}, code_b{std::move(code_b)} {
if (!params.precompiled_variants) {
return;
}
@@ -417,7 +419,8 @@ bool CachedShader::EnsureValidLockerVariant() {
ShaderDiskCacheUsage CachedShader::GetUsage(const ProgramVariant& variant,
const ConstBufferLocker& locker) const {
return ShaderDiskCacheUsage{unique_identifier, variant, locker.GetKeys(),
return ShaderDiskCacheUsage{unique_identifier, variant,
locker.GetBoundBuffer(), locker.GetKeys(),
locker.GetBoundSamplers(), locker.GetBindlessSamplers()};
}

View File

@@ -391,6 +391,7 @@ public:
DeclareVertex();
DeclareGeometry();
DeclareRegisters();
DeclareCustomVariables();
DeclarePredicates();
DeclareLocalMemory();
DeclareInternalFlags();
@@ -503,6 +504,16 @@ private:
}
}
void DeclareCustomVariables() {
const u32 num_custom_variables = ir.GetNumCustomVariables();
for (u32 i = 0; i < num_custom_variables; ++i) {
code.AddLine("float {} = 0.0f;", GetCustomVariable(i));
}
if (num_custom_variables > 0) {
code.AddNewLine();
}
}
void DeclarePredicates() {
const auto& predicates = ir.GetPredicates();
for (const auto pred : predicates) {
@@ -655,7 +666,8 @@ private:
u32 binding = device.GetBaseBindings(stage).sampler;
for (const auto& sampler : ir.GetSamplers()) {
const std::string name = GetSampler(sampler);
const std::string description = fmt::format("layout (binding = {}) uniform", binding++);
const std::string description = fmt::format("layout (binding = {}) uniform", binding);
binding += sampler.IsIndexed() ? sampler.Size() : 1;
std::string sampler_type = [&]() {
if (sampler.IsBuffer()) {
@@ -682,7 +694,11 @@ private:
sampler_type += "Shadow";
}
code.AddLine("{} {} {};", description, sampler_type, name);
if (!sampler.IsIndexed()) {
code.AddLine("{} {} {};", description, sampler_type, name);
} else {
code.AddLine("{} {} {}[{}];", description, sampler_type, name, sampler.Size());
}
}
if (!ir.GetSamplers().empty()) {
code.AddNewLine();
@@ -751,6 +767,9 @@ private:
Expression Visit(const Node& node) {
if (const auto operation = std::get_if<OperationNode>(&*node)) {
if (const auto amend_index = operation->GetAmendIndex()) {
Visit(ir.GetAmendNode(*amend_index)).CheckVoid();
}
const auto operation_index = static_cast<std::size_t>(operation->GetCode());
if (operation_index >= operation_decompilers.size()) {
UNREACHABLE_MSG("Out of bounds operation: {}", operation_index);
@@ -772,6 +791,11 @@ private:
return {GetRegister(index), Type::Float};
}
if (const auto cv = std::get_if<CustomVarNode>(&*node)) {
const u32 index = cv->GetIndex();
return {GetCustomVariable(index), Type::Float};
}
if (const auto immediate = std::get_if<ImmediateNode>(&*node)) {
const u32 value = immediate->GetValue();
if (value < 10) {
@@ -872,6 +896,9 @@ private:
}
if (const auto conditional = std::get_if<ConditionalNode>(&*node)) {
if (const auto amend_index = conditional->GetAmendIndex()) {
Visit(ir.GetAmendNode(*amend_index)).CheckVoid();
}
// It's invalid to call conditional on nested nodes, use an operation instead
code.AddLine("if ({}) {{", Visit(conditional->GetCondition()).AsBool());
++code.scope;
@@ -1013,7 +1040,6 @@ private:
}
return {{"gl_ViewportIndex", Type::Int}};
case 3:
UNIMPLEMENTED_MSG("Requires some state changes for gl_PointSize to work in shader");
return {{"gl_PointSize", Type::Float}};
}
return {};
@@ -1093,7 +1119,11 @@ private:
} else if (!meta->ptp.empty()) {
expr += "Offsets";
}
expr += '(' + GetSampler(meta->sampler) + ", ";
if (!meta->sampler.IsIndexed()) {
expr += '(' + GetSampler(meta->sampler) + ", ";
} else {
expr += '(' + GetSampler(meta->sampler) + '[' + Visit(meta->index).AsUint() + "], ";
}
expr += coord_constructors.at(count + (has_array ? 1 : 0) +
(has_shadow && !separate_dc ? 1 : 0) - 1);
expr += '(';
@@ -1305,6 +1335,8 @@ private:
const std::string final_offset = fmt::format("({} - {}) >> 2", real, base);
target = {fmt::format("{}[{}]", GetGlobalMemory(gmem->GetDescriptor()), final_offset),
Type::Uint};
} else if (const auto cv = std::get_if<CustomVarNode>(&*dest)) {
target = {GetCustomVariable(cv->GetIndex()), Type::Float};
} else {
UNREACHABLE_MSG("Assign called without a proper target");
}
@@ -1850,6 +1882,13 @@ private:
Type::Uint};
}
template <const std::string_view& opname, Type type>
Expression Atomic(Operation operation) {
return {fmt::format("atomic{}({}, {})", opname, Visit(operation[0]).GetCode(),
Visit(operation[1]).As(type)),
type};
}
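A minimal sketch of the string this template produces for the newly wired ATOM.ADD entry in the operation table below (assuming Func::Add stringifies to "Add"; the operand expressions are placeholders):

#include <fmt/format.h>
#include <string>
#include <string_view>

// Sketch of the expansion of Atomic<Func::Add, Type::Uint>, for illustration.
std::string AtomicAddSketch(std::string_view memory_expr, std::string_view value_expr) {
    return fmt::format("atomic{}({}, {})", "Add", memory_expr, value_expr);
}
// e.g. AtomicAddSketch("gmem[offset]", "ftou(gpr0)") yields
//      "atomicAdd(gmem[offset], ftou(gpr0))"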
Expression Branch(Operation operation) {
const auto target = std::get_if<ImmediateNode>(&*operation[0]);
UNIMPLEMENTED_IF(!target);
@@ -2188,6 +2227,8 @@ private:
&GLSLDecompiler::AtomicImage<Func::Xor>,
&GLSLDecompiler::AtomicImage<Func::Exchange>,
&GLSLDecompiler::Atomic<Func::Add, Type::Uint>,
&GLSLDecompiler::Branch,
&GLSLDecompiler::BranchIndirect,
&GLSLDecompiler::PushFlowStack,
@@ -2223,6 +2264,10 @@ private:
return GetDeclarationWithSuffix(index, "gpr");
}
std::string GetCustomVariable(u32 index) const {
return GetDeclarationWithSuffix(index, "custom_var");
}
std::string GetPredicate(Tegra::Shader::Pred pred) const {
return GetDeclarationWithSuffix(static_cast<u32>(pred), "pred");
}
@@ -2307,7 +2352,7 @@ public:
explicit ExprDecompiler(GLSLDecompiler& decomp) : decomp{decomp} {}
void operator()(const ExprAnd& expr) {
inner += "( ";
inner += '(';
std::visit(*this, *expr.operand1);
inner += " && ";
std::visit(*this, *expr.operand2);
@@ -2315,7 +2360,7 @@ public:
}
void operator()(const ExprOr& expr) {
inner += "( ";
inner += '(';
std::visit(*this, *expr.operand1);
inner += " || ";
std::visit(*this, *expr.operand2);
@@ -2333,28 +2378,7 @@ public:
}
void operator()(const ExprCondCode& expr) {
const Node cc = decomp.ir.GetConditionCode(expr.cc);
std::string target;
if (const auto pred = std::get_if<PredicateNode>(&*cc)) {
const auto index = pred->GetIndex();
switch (index) {
case Tegra::Shader::Pred::NeverExecute:
target = "false";
break;
case Tegra::Shader::Pred::UnusedIndex:
target = "true";
break;
default:
target = decomp.GetPredicate(index);
break;
}
} else if (const auto flag = std::get_if<InternalFlagNode>(&*cc)) {
target = decomp.GetInternalFlag(flag->GetFlag());
} else {
UNREACHABLE();
}
inner += target;
inner += decomp.Visit(decomp.ir.GetConditionCode(expr.cc)).AsBool();
}
void operator()(const ExprVar& expr) {
@@ -2366,8 +2390,7 @@ public:
}
void operator()(VideoCommon::Shader::ExprGprEqual& expr) {
inner +=
"( ftou(" + decomp.GetRegister(expr.gpr) + ") == " + std::to_string(expr.value) + ')';
inner += fmt::format("(ftou({}) == {})", decomp.GetRegister(expr.gpr), expr.value);
}
const std::string& GetResult() const {
@@ -2375,8 +2398,8 @@ public:
}
private:
std::string inner;
GLSLDecompiler& decomp;
std::string inner;
};
class ASTDecompiler {


@@ -53,7 +53,7 @@ struct BindlessSamplerKey {
Tegra::Engines::SamplerDescriptor sampler{};
};
constexpr u32 NativeVersion = 11;
constexpr u32 NativeVersion = 12;
// Making sure sizes don't change by accident
static_assert(sizeof(ProgramVariant) == 20);
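The bump from 11 to 12 matters because the transferable file gains a field (bound_buffer, below); caches written with the old layout need to be discarded rather than misread. A hypothetical guard sketching that check (not the cache's actual load code):

#include <optional>

// Hypothetical guard, illustrating why the constant is bumped whenever the
// on-disk layout changes.
std::optional<int> CheckVersionSketch(unsigned file_version) {
    constexpr unsigned native_version = 12;
    if (file_version != native_version) {
        return std::nullopt; // stale layout: discard and rebuild the transferable cache
    }
    return 0;
}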
@@ -186,7 +186,8 @@ ShaderDiskCacheOpenGL::LoadTransferable() {
u32 num_bound_samplers{};
u32 num_bindless_samplers{};
if (file.ReadArray(&usage.unique_identifier, 1) != 1 ||
file.ReadArray(&usage.variant, 1) != 1 || file.ReadArray(&num_keys, 1) != 1 ||
file.ReadArray(&usage.variant, 1) != 1 ||
file.ReadArray(&usage.bound_buffer, 1) != 1 || file.ReadArray(&num_keys, 1) != 1 ||
file.ReadArray(&num_bound_samplers, 1) != 1 ||
file.ReadArray(&num_bindless_samplers, 1) != 1) {
LOG_ERROR(Render_OpenGL, error_loading);
@@ -281,7 +282,9 @@ ShaderDiskCacheOpenGL::LoadPrecompiledFile(FileUtil::IOFile& file) {
u32 num_bindless_samplers{};
ShaderDiskCacheUsage usage;
if (!LoadObjectFromPrecompiled(usage.unique_identifier) ||
!LoadObjectFromPrecompiled(usage.variant) || !LoadObjectFromPrecompiled(num_keys) ||
!LoadObjectFromPrecompiled(usage.variant) ||
!LoadObjectFromPrecompiled(usage.bound_buffer) ||
!LoadObjectFromPrecompiled(num_keys) ||
!LoadObjectFromPrecompiled(num_bound_samplers) ||
!LoadObjectFromPrecompiled(num_bindless_samplers)) {
return {};
@@ -393,6 +396,7 @@ void ShaderDiskCacheOpenGL::SaveUsage(const ShaderDiskCacheUsage& usage) {
if (file.WriteObject(TransferableEntryKind::Usage) != 1 ||
file.WriteObject(usage.unique_identifier) != 1 || file.WriteObject(usage.variant) != 1 ||
file.WriteObject(usage.bound_buffer) != 1 ||
file.WriteObject(static_cast<u32>(usage.keys.size())) != 1 ||
file.WriteObject(static_cast<u32>(usage.bound_samplers.size())) != 1 ||
file.WriteObject(static_cast<u32>(usage.bindless_samplers.size())) != 1) {
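For reference, a sketch of the per-usage transferable record as it is now written, with bound_buffer sitting between the variant and the counts. The field order is taken from the writes above; the cache writes field by field, so the struct below is for visualization only and the width of kind is assumed.

#include <cstdint>

// Visualization of the write order above, not an on-disk struct definition.
struct UsageRecordSketch {
    std::uint32_t kind;                  // TransferableEntryKind::Usage (width assumed)
    std::uint64_t unique_identifier;
    unsigned char variant[20];           // ProgramVariant, static_assert'd to 20 bytes above
    std::uint32_t bound_buffer;          // new in NativeVersion 12
    std::uint32_t num_keys;
    std::uint32_t num_bound_samplers;
    std::uint32_t num_bindless_samplers;
    // ...followed by the key, bound sampler and bindless sampler arrays
};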
@@ -447,7 +451,7 @@ void ShaderDiskCacheOpenGL::SaveDump(const ShaderDiskCacheUsage& usage, GLuint p
};
if (!SaveObjectToPrecompiled(usage.unique_identifier) ||
!SaveObjectToPrecompiled(usage.variant) ||
!SaveObjectToPrecompiled(usage.variant) || !SaveObjectToPrecompiled(usage.bound_buffer) ||
!SaveObjectToPrecompiled(static_cast<u32>(usage.keys.size())) ||
!SaveObjectToPrecompiled(static_cast<u32>(usage.bound_samplers.size())) ||
!SaveObjectToPrecompiled(static_cast<u32>(usage.bindless_samplers.size()))) {


@@ -79,6 +79,7 @@ static_assert(std::is_trivially_copyable_v<ProgramVariant>);
struct ShaderDiskCacheUsage {
u64 unique_identifier{};
ProgramVariant variant;
u32 bound_buffer{};
VideoCommon::Shader::KeyMap keys;
VideoCommon::Shader::BoundSamplerMap bound_samplers;
VideoCommon::Shader::BindlessSamplerMap bindless_samplers;


@@ -127,6 +127,7 @@ void OpenGLState::ApplyClipDistances() {
}
void OpenGLState::ApplyPointSize() {
Enable(GL_PROGRAM_POINT_SIZE, cur_state.point.program_control, point.program_control);
if (UpdateValue(cur_state.point.size, point.size)) {
glPointSize(point.size);
}
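This pairs with the decompiler change above that stopped flagging gl_PointSize as unimplemented: a point size written by the shader only takes effect while GL_PROGRAM_POINT_SIZE is enabled; otherwise the fixed glPointSize value applies. A minimal standalone sketch, assuming a current OpenGL context (this is not the emulator's state tracker):

#include <glad/glad.h>

void ApplyPointSizeSketch(bool shader_controls_size, GLfloat fixed_size) {
    if (shader_controls_size) {
        glEnable(GL_PROGRAM_POINT_SIZE);  // honor gl_PointSize written by the shader
    } else {
        glDisable(GL_PROGRAM_POINT_SIZE);
        glPointSize(fixed_size);          // use the fixed point size instead
    }
}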


@@ -131,7 +131,8 @@ public:
std::array<Viewport, Tegra::Engines::Maxwell3D::Regs::NumViewports> viewports;
struct {
float size = 1.0f; // GL_POINT_SIZE
bool program_control = false; // GL_PROGRAM_POINT_SIZE
GLfloat size = 1.0f; // GL_POINT_SIZE
} point;
struct {


@@ -44,7 +44,7 @@ struct FormatTuple {
constexpr std::array<FormatTuple, VideoCore::Surface::MaxPixelFormat> tex_format_tuples = {{
{GL_RGBA8, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, false}, // ABGR8U
{GL_RGBA8, GL_RGBA, GL_BYTE, false}, // ABGR8S
{GL_RGBA8_SNORM, GL_RGBA, GL_BYTE, false}, // ABGR8S
{GL_RGBA8UI, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, false}, // ABGR8UI
{GL_RGB565, GL_RGB, GL_UNSIGNED_SHORT_5_6_5_REV, false}, // B5G6R5U
{GL_RGB10_A2, GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, false}, // A2B10G10R10U
@@ -83,9 +83,9 @@ constexpr std::array<FormatTuple, VideoCore::Surface::MaxPixelFormat> tex_format
{GL_RGB32F, GL_RGB, GL_FLOAT, false}, // RGB32F
{GL_SRGB8_ALPHA8, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8_REV, false}, // RGBA8_SRGB
{GL_RG8, GL_RG, GL_UNSIGNED_BYTE, false}, // RG8U
{GL_RG8, GL_RG, GL_BYTE, false}, // RG8S
{GL_RG8_SNORM, GL_RG, GL_BYTE, false}, // RG8S
{GL_RG32UI, GL_RG_INTEGER, GL_UNSIGNED_INT, false}, // RG32UI
{GL_RGB16F, GL_RGBA16, GL_HALF_FLOAT, false}, // RGBX16F
{GL_RGB16F, GL_RGBA, GL_HALF_FLOAT, false}, // RGBX16F
{GL_R32UI, GL_RED_INTEGER, GL_UNSIGNED_INT, false}, // R32UI
{GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE, false}, // ASTC_2D_8X8
{GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE, false}, // ASTC_2D_8X5
@@ -176,6 +176,19 @@ GLint GetSwizzleSource(SwizzleSource source) {
return GL_NONE;
}
GLenum GetComponent(PixelFormat format, bool is_first) {
switch (format) {
case PixelFormat::Z24S8:
case PixelFormat::Z32FS8:
return is_first ? GL_DEPTH_COMPONENT : GL_STENCIL_INDEX;
case PixelFormat::S8Z24:
return is_first ? GL_STENCIL_INDEX : GL_DEPTH_COMPONENT;
default:
UNREACHABLE();
return GL_DEPTH_COMPONENT;
}
}
void ApplyTextureDefaults(const SurfaceParams& params, GLuint texture) {
if (params.IsBuffer()) {
return;
@@ -184,7 +197,7 @@ void ApplyTextureDefaults(const SurfaceParams& params, GLuint texture) {
glTextureParameteri(texture, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTextureParameteri(texture, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTextureParameteri(texture, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTextureParameteri(texture, GL_TEXTURE_MAX_LEVEL, params.num_levels - 1);
glTextureParameteri(texture, GL_TEXTURE_MAX_LEVEL, static_cast<GLint>(params.num_levels - 1));
if (params.num_levels == 1) {
glTextureParameterf(texture, GL_TEXTURE_LOD_BIAS, 1000.0f);
}
@@ -253,14 +266,12 @@ void CachedSurface::DownloadTexture(std::vector<u8>& staging_buffer) {
glPixelStorei(GL_PACK_ALIGNMENT, std::min(8U, params.GetRowAlignment(level)));
glPixelStorei(GL_PACK_ROW_LENGTH, static_cast<GLint>(params.GetMipWidth(level)));
const std::size_t mip_offset = params.GetHostMipmapLevelOffset(level);
u8* const mip_data = staging_buffer.data() + mip_offset;
const GLsizei size = static_cast<GLsizei>(params.GetHostMipmapSize(level));
if (is_compressed) {
glGetCompressedTextureImage(texture.handle, level,
static_cast<GLsizei>(params.GetHostMipmapSize(level)),
staging_buffer.data() + mip_offset);
glGetCompressedTextureImage(texture.handle, level, size, mip_data);
} else {
glGetTextureImage(texture.handle, level, format, type,
static_cast<GLsizei>(params.GetHostMipmapSize(level)),
staging_buffer.data() + mip_offset);
glGetTextureImage(texture.handle, level, format, type, size, mip_data);
}
}
}
@@ -418,11 +429,21 @@ void CachedSurfaceView::ApplySwizzle(SwizzleSource x_source, SwizzleSource y_sou
if (new_swizzle == swizzle)
return;
swizzle = new_swizzle;
const std::array<GLint, 4> gl_swizzle = {GetSwizzleSource(x_source), GetSwizzleSource(y_source),
GetSwizzleSource(z_source),
GetSwizzleSource(w_source)};
const std::array gl_swizzle = {GetSwizzleSource(x_source), GetSwizzleSource(y_source),
GetSwizzleSource(z_source), GetSwizzleSource(w_source)};
const GLuint handle = GetTexture();
glTextureParameteriv(handle, GL_TEXTURE_SWIZZLE_RGBA, gl_swizzle.data());
const PixelFormat format = surface.GetSurfaceParams().pixel_format;
switch (format) {
case PixelFormat::Z24S8:
case PixelFormat::Z32FS8:
case PixelFormat::S8Z24:
glTextureParameteri(handle, GL_DEPTH_STENCIL_TEXTURE_MODE,
GetComponent(format, x_source == SwizzleSource::R));
break;
default:
glTextureParameteriv(handle, GL_TEXTURE_SWIZZLE_RGBA, gl_swizzle.data());
break;
}
}
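The special case exists because a combined depth-stencil texture can only expose one component at a time, selected with GL_DEPTH_STENCIL_TEXTURE_MODE, so the view picks depth or stencil from the red channel's swizzle source instead of applying an RGBA swizzle. A standalone sketch, assuming a current OpenGL context (not the cache's real interface):

#include <glad/glad.h>

// Pick which component a combined depth-stencil texture samples as.
void SelectDepthStencilComponent(GLuint texture, bool want_depth) {
    glTextureParameteri(texture, GL_DEPTH_STENCIL_TEXTURE_MODE,
                        want_depth ? GL_DEPTH_COMPONENT : GL_STENCIL_INDEX);
}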
OGLTextureView CachedSurfaceView::CreateTextureView() const {
@@ -531,8 +552,11 @@ void TextureCacheOpenGL::ImageBlit(View& src_view, View& dst_view,
const Common::Rectangle<u32>& dst_rect = copy_config.dst_rect;
const bool is_linear = copy_config.filter == Tegra::Engines::Fermi2D::Filter::Linear;
glBlitFramebuffer(src_rect.left, src_rect.top, src_rect.right, src_rect.bottom, dst_rect.left,
dst_rect.top, dst_rect.right, dst_rect.bottom, buffers,
glBlitFramebuffer(static_cast<GLint>(src_rect.left), static_cast<GLint>(src_rect.top),
static_cast<GLint>(src_rect.right), static_cast<GLint>(src_rect.bottom),
static_cast<GLint>(dst_rect.left), static_cast<GLint>(dst_rect.top),
static_cast<GLint>(dst_rect.right), static_cast<GLint>(dst_rect.bottom),
buffers,
is_linear && (buffers == GL_COLOR_BUFFER_BIT) ? GL_LINEAR : GL_NEAREST);
}


@@ -6,16 +6,20 @@
#include <vector>
#include <fmt/format.h>
#include <glad/glad.h>
#include "common/assert.h"
#include "common/common_types.h"
#include "common/scope_exit.h"
#include "video_core/renderer_opengl/utils.h"
namespace OpenGL {
struct VertexArrayPushBuffer::Entry {
GLuint binding_index{};
const GLuint* buffer{};
GLintptr offset{};
GLsizei stride{};
};
VertexArrayPushBuffer::VertexArrayPushBuffer() = default;
VertexArrayPushBuffer::~VertexArrayPushBuffer() = default;
@@ -47,6 +51,13 @@ void VertexArrayPushBuffer::Bind() {
}
}
struct BindBuffersRangePushBuffer::Entry {
GLuint binding;
const GLuint* buffer;
GLintptr offset;
GLsizeiptr size;
};
BindBuffersRangePushBuffer::BindBuffersRangePushBuffer(GLenum target) : target{target} {}
BindBuffersRangePushBuffer::~BindBuffersRangePushBuffer() = default;


@@ -26,12 +26,7 @@ public:
void Bind();
private:
struct Entry {
GLuint binding_index{};
const GLuint* buffer{};
GLintptr offset{};
GLsizei stride{};
};
struct Entry;
GLuint vao{};
const GLuint* index_buffer{};
@@ -50,12 +45,7 @@ public:
void Bind();
private:
struct Entry {
GLuint binding;
const GLuint* buffer;
GLintptr offset;
GLsizeiptr size;
};
struct Entry;
GLenum target;
std::vector<Entry> entries;
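The Entry definitions move out of the header and are only forward-declared here. std::vector may hold an incomplete element type (since C++17) as long as the type is complete wherever the vector is actually used, which is why the constructors and destructors stay defined out of line in the .cpp. A condensed sketch of the pattern, with hypothetical names:

// header sketch
#include <vector>

class PushBufferSketch {
public:
    PushBufferSketch();
    ~PushBufferSketch();

private:
    struct Entry;               // defined only in the .cpp
    std::vector<Entry> entries; // Entry must be complete only at points of use
};

// source sketch
struct PushBufferSketch::Entry {
    unsigned binding{};
    const unsigned* buffer{};
};

PushBufferSketch::PushBufferSketch() = default;
PushBufferSketch::~PushBufferSketch() = default;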


@@ -109,6 +109,9 @@ constexpr FixedPipelineState::Rasterizer GetRasterizerState(const Maxwell& regs)
const auto topology = static_cast<std::size_t>(regs.draw.topology.Value());
const bool depth_bias_enabled = enabled_lut[PolygonOffsetEnableLUT[topology]];
const auto& clip = regs.view_volume_clip_control;
const bool depth_clamp_enabled = clip.depth_clamp_near == 1 || clip.depth_clamp_far == 1;
Maxwell::Cull::FrontFace front_face = regs.cull.front_face;
if (regs.screen_y_control.triangle_rast_flip != 0 &&
regs.viewport_transform[0].scale_y > 0.0f) {
@@ -119,8 +122,9 @@ constexpr FixedPipelineState::Rasterizer GetRasterizerState(const Maxwell& regs)
}
const bool gl_ndc = regs.depth_mode == Maxwell::DepthMode::MinusOneToOne;
return FixedPipelineState::Rasterizer(regs.cull.enabled, depth_bias_enabled, gl_ndc,
regs.cull.cull_face, front_face);
return FixedPipelineState::Rasterizer(regs.cull.enabled, depth_bias_enabled,
depth_clamp_enabled, gl_ndc, regs.cull.cull_face,
front_face);
}
} // Anonymous namespace
@@ -222,15 +226,17 @@ bool FixedPipelineState::Tessellation::operator==(const Tessellation& rhs) const
std::size_t FixedPipelineState::Rasterizer::Hash() const noexcept {
return static_cast<std::size_t>(cull_enable) ^
(static_cast<std::size_t>(depth_bias_enable) << 1) ^
(static_cast<std::size_t>(ndc_minus_one_to_one) << 2) ^
(static_cast<std::size_t>(depth_clamp_enable) << 2) ^
(static_cast<std::size_t>(ndc_minus_one_to_one) << 3) ^
(static_cast<std::size_t>(cull_face) << 24) ^
(static_cast<std::size_t>(front_face) << 48);
}
bool FixedPipelineState::Rasterizer::operator==(const Rasterizer& rhs) const noexcept {
return std::tie(cull_enable, depth_bias_enable, ndc_minus_one_to_one, cull_face, front_face) ==
std::tie(rhs.cull_enable, rhs.depth_bias_enable, rhs.ndc_minus_one_to_one, rhs.cull_face,
rhs.front_face);
return std::tie(cull_enable, depth_bias_enable, depth_clamp_enable, ndc_minus_one_to_one,
cull_face, front_face) ==
std::tie(rhs.cull_enable, rhs.depth_bias_enable, rhs.depth_clamp_enable,
rhs.ndc_minus_one_to_one, rhs.cull_face, rhs.front_face);
}
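With depth_clamp_enable taking bit 2, ndc_minus_one_to_one moves up to bit 3 while the enum fields keep their original shifts. The same packing restated as a standalone constexpr helper, purely for readability:

#include <cstddef>

constexpr std::size_t HashRasterizerSketch(bool cull, bool bias, bool clamp, bool ndc,
                                           std::size_t cull_face, std::size_t front_face) {
    return static_cast<std::size_t>(cull) ^
           (static_cast<std::size_t>(bias) << 1) ^
           (static_cast<std::size_t>(clamp) << 2) ^
           (static_cast<std::size_t>(ndc) << 3) ^
           (cull_face << 24) ^ (front_face << 48);
}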
std::size_t FixedPipelineState::DepthStencil::Hash() const noexcept {


@@ -170,15 +170,17 @@ struct FixedPipelineState {
};
struct Rasterizer {
constexpr Rasterizer(bool cull_enable, bool depth_bias_enable, bool ndc_minus_one_to_one,
Maxwell::Cull::CullFace cull_face, Maxwell::Cull::FrontFace front_face)
constexpr Rasterizer(bool cull_enable, bool depth_bias_enable, bool depth_clamp_enable,
bool ndc_minus_one_to_one, Maxwell::Cull::CullFace cull_face,
Maxwell::Cull::FrontFace front_face)
: cull_enable{cull_enable}, depth_bias_enable{depth_bias_enable},
ndc_minus_one_to_one{ndc_minus_one_to_one}, cull_face{cull_face}, front_face{
front_face} {}
depth_clamp_enable{depth_clamp_enable}, ndc_minus_one_to_one{ndc_minus_one_to_one},
cull_face{cull_face}, front_face{front_face} {}
Rasterizer() = default;
bool cull_enable;
bool depth_bias_enable;
bool depth_clamp_enable;
bool ndc_minus_one_to_one;
Maxwell::Cull::CullFace cull_face;
Maxwell::Cull::FrontFace front_face;


@@ -44,7 +44,7 @@ vk::SamplerMipmapMode MipmapMode(Tegra::Texture::TextureMipmapFilter mipmap_filt
return {};
}
vk::SamplerAddressMode WrapMode(Tegra::Texture::WrapMode wrap_mode,
vk::SamplerAddressMode WrapMode(const VKDevice& device, Tegra::Texture::WrapMode wrap_mode,
Tegra::Texture::TextureFilter filter) {
switch (wrap_mode) {
case Tegra::Texture::WrapMode::Wrap:
@@ -56,7 +56,12 @@ vk::SamplerAddressMode WrapMode(Tegra::Texture::WrapMode wrap_mode,
case Tegra::Texture::WrapMode::Border:
return vk::SamplerAddressMode::eClampToBorder;
case Tegra::Texture::WrapMode::Clamp:
// TODO(Rodrigo): Emulate GL_CLAMP properly
if (device.GetDriverID() == vk::DriverIdKHR::eNvidiaProprietary) {
// Nvidia's Vulkan driver defaults to GL_CLAMP on invalid enumerations, so we can hack this
// by sending an invalid enumeration.
return static_cast<vk::SamplerAddressMode>(0xcafe);
}
// TODO(Rodrigo): Emulate GL_CLAMP properly on other vendors
switch (filter) {
case Tegra::Texture::TextureFilter::Nearest:
return vk::SamplerAddressMode::eClampToEdge;


@@ -22,7 +22,7 @@ vk::Filter Filter(Tegra::Texture::TextureFilter filter);
vk::SamplerMipmapMode MipmapMode(Tegra::Texture::TextureMipmapFilter mipmap_filter);
vk::SamplerAddressMode WrapMode(Tegra::Texture::WrapMode wrap_mode,
vk::SamplerAddressMode WrapMode(const VKDevice& device, Tegra::Texture::WrapMode wrap_mode,
Tegra::Texture::TextureFilter filter);
vk::CompareOp DepthCompareFunction(Tegra::Texture::DepthCompareFunc depth_compare_func);


@@ -0,0 +1,72 @@
// Copyright 2018 yuzu Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <optional>
#include <vector>
#include "video_core/renderer_base.h"
#include "video_core/renderer_vulkan/declarations.h"
namespace Core {
class System;
}
namespace Vulkan {
class VKBlitScreen;
class VKDevice;
class VKFence;
class VKMemoryManager;
class VKResourceManager;
class VKSwapchain;
class VKScheduler;
class VKImage;
struct VKScreenInfo {
VKImage* image{};
u32 width{};
u32 height{};
bool is_srgb{};
};
class RendererVulkan final : public VideoCore::RendererBase {
public:
explicit RendererVulkan(Core::Frontend::EmuWindow& window, Core::System& system);
~RendererVulkan() override;
/// Swap buffers (render frame)
void SwapBuffers(const Tegra::FramebufferConfig* framebuffer) override;
/// Initialize the renderer
bool Init() override;
/// Shutdown the renderer
void ShutDown() override;
private:
std::optional<vk::DebugUtilsMessengerEXT> CreateDebugCallback(
const vk::DispatchLoaderDynamic& dldi);
bool PickDevices(const vk::DispatchLoaderDynamic& dldi);
void Report() const;
Core::System& system;
vk::Instance instance;
vk::SurfaceKHR surface;
VKScreenInfo screen_info;
UniqueDebugUtilsMessengerEXT debug_callback;
std::unique_ptr<VKDevice> device;
std::unique_ptr<VKSwapchain> swapchain;
std::unique_ptr<VKMemoryManager> memory_manager;
std::unique_ptr<VKResourceManager> resource_manager;
std::unique_ptr<VKScheduler> scheduler;
std::unique_ptr<VKBlitScreen> blit_screen;
};
} // namespace Vulkan
