mr_briggs | 2 years ago
This might be a bit of a cursed, uninformed thought, but I'd like to see what happens if each player's perspective could be altered individually. Would it be possible for players to have unique camera perspectives when split, and then reorient to the same perspective when within a given distance, to transition back to a single screen?
From memory (as I have been unable to find a demo video), the LEGO series had some interesting approaches to dealing with this. IIRC LEGO Marvel Super Heroes gave players control of their camera when in the open world, so in Dynamic Splitscreen mode there was a little fade transition when recombining cameras into a single screen. Pretty sure there was a little delay too so it wouldn't recombine unnecessarily, and it was typically the more annoying part of the splitscreen, as the dividing line would pivot more dramatically - something the raytraced approach would definitely improve!
lloeki | 2 years ago
> the LEGO series had some interesting approaches to dealing with this
Indeed, LEGO Indiana Jones (2008) for sure had it on 360, but IIRC LEGO Star Wars I (2005) and II (2006) did not.
Here it is on The Force Awakens https://www.youtube.com/watch?v=T04B2coSN0Y
In some ways, it was awesome, in others, it was terrible!
- When the viewpoints are very close but still split, things from both views are only slightly offset, which gives a weird, stereoscopic-like effect. Visible at 0:07, 0:11, and 2:05 in the SW video above.
- FOV is more or less fixed. On a 16:9 screen, a fixed wide FOV is nice for a vertical-ish split but bad for a horizontal-ish one (everything is super small), at the cost of a fisheye-like warp; OR you get a fixed narrow FOV and can't see much left-right on a vertical-ish split, nor up-down on a horizontal-ish one.
Overall the combination of both made the games extremely headache inducing for me.
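To make that FOV trade-off concrete: for a pinhole camera, vertical FOV follows from horizontal FOV and the viewport's aspect ratio, so a fixed horizontal FOV loses roughly half its vertical coverage when a horizontal split doubles the aspect. A quick sketch (the 90° figure is just an illustrative value):

```python
import math

def vertical_fov(horizontal_fov_deg, aspect):
    # aspect = width / height of the viewport.
    # Standard pinhole relation: tan(v/2) = tan(h/2) / aspect.
    h = math.radians(horizontal_fov_deg)
    return math.degrees(2 * math.atan(math.tan(h / 2) / aspect))

# Full 16:9 screen vs. one half of a horizontal split (32:9):
full = vertical_fov(90, 16 / 9)   # ~58.7 degrees
half = vertical_fov(90, 32 / 9)   # ~31.4 degrees
```

With the horizontal FOV held fixed, each half of a horizontal split sees barely half as much vertically, which matches the "everything is super small" complaint above.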
> IIRC LEGO Marvel Super Heroes gave players control of their camera when in the open world
Here you can see the open-world sections with per-player camera control and non-dynamic split screen:
https://www.youtube.com/watch?v=KLRDzNy0t2g
chckens | 2 years ago
https://youtu.be/SPQXmZR7JWo?feature=shared (From about 2:30)
Now get off my lawn.
chmod775 | 2 years ago
Technically, yes. There isn't any major technical hurdle here. You'll probably have to render a bit more than required for each screen so your shaders don't create seams near the merge (they may behave differently at the edge of whatever they're sampling). You'll also have to decide what to do about on-screen effects (hit notifications etc.) on merge. Even so, you may get something that looks like a seam to our brain's pattern recognition as you get close to merging (a triangle-looking shape on floors/walls when the perspectives only slightly diverge). I suspect the latter is why the game you mentioned had a transition, besides providing a visual cue to each player that they can now also look at the other half of the screen.
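A minimal sketch of the "render a bit more than required" idea: expand each player's render rect by a guard band so shaders that sample neighboring texels have valid data near the merge boundary (the margin size here is a made-up parameter, not from any particular engine):

```python
def guard_band(rect, margin_px):
    # rect = (x, y, width, height) of a player's on-screen region.
    # Render a slightly larger region, then crop back to `rect` when
    # compositing, so edge-sampling effects don't produce visible seams.
    x, y, w, h = rect
    return (x - margin_px, y - margin_px, w + 2 * margin_px, h + 2 * margin_px)

# Left half of a 1920x1080 screen, padded by 16 pixels on every side:
padded = guard_band((0, 0, 960, 1080), 16)
```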
djtango | 2 years ago
Imagine a third-person action game. A and B split up and have two perspectives. If they end up facing in opposite directions, how do you reconcile the camera views?
Maybe you can have some heuristic that only does it if their perspectives are "close enough" but what value does it bring?
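A rough sketch of such a heuristic (thresholds, names, and the unit-forward-vector assumption are all made up for illustration): merge only when the cameras are near each other in position and pointing in roughly the same direction.

```python
import math

def cameras_close_enough(pos_a, pos_b, fwd_a, fwd_b,
                         max_dist=5.0, max_angle_deg=30.0):
    # Hypothetical merge test: positions within max_dist world units and
    # view directions within max_angle_deg of each other.
    # fwd_a and fwd_b are assumed to be unit vectors.
    dist = math.dist(pos_a, pos_b)
    dot = sum(a * b for a, b in zip(fwd_a, fwd_b))
    dot = max(-1.0, min(1.0, dot))  # clamp for acos against float error
    angle = math.degrees(math.acos(dot))
    return dist <= max_dist and angle <= max_angle_deg
```

A real game would likely also hysterese this (merge and un-merge at different thresholds) so the screen doesn't flicker between states, which may be the point of the delay mentioned above.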
For some kind of topdown/isometric type game I could see how that might work.
EDIT: actually read the article after getting curious. All of this is covered; extremely cool article. I don't quite understand ray tracing well enough to understand exactly why it is faster. Is it because rasterisation starts from the camera but ray tracing starts from the light source, so you can amortise more calculations if you start from a light source rather than from individual perspectives?
JamesLeonis | 2 years ago
The problem is that rasterization is a bastard raytrace. A camera view, represented as a frustum (a truncated pyramid), is transformed into a unit cube and flattened. Every pixel is a parallel "raytrace" along the flattened cube. To get multiple viewports, rasterization creates an entirely new rendering pass, complete with the boilerplate code and API calls, as if it were its own game with its own screen. A four-way split was akin to brute-forcing four bastardized raytraces.
Raytracing simplifies this by admitting the above is terrible, and instead gives every pixel its own official transform (a ray) rather than the ugly pile of hacks. I think of it as a "camera shader", sitting alongside pixel and vertex shaders as a way to dynamically change a given pixel.
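A toy sketch of that "camera shader" idea, using a standard pinhole-camera ray-generation formula (everything here is illustrative, not from the article): each pixel independently builds its own primary ray, so nothing stops different screen regions from using different cameras without extra render passes.

```python
import math

def primary_ray(px, py, width, height, cam_pos, fwd, right, up, fov_deg):
    # Per-pixel ray generation for a pinhole camera. fwd/right/up are
    # assumed to be an orthonormal camera basis; fov_deg is vertical FOV.
    t = math.tan(math.radians(fov_deg) / 2)
    aspect = width / height
    # Map the pixel center to normalized device coordinates, scaled by FOV.
    ndc_x = (2 * (px + 0.5) / width - 1) * t * aspect
    ndc_y = (1 - 2 * (py + 0.5) / height) * t
    d = [fwd[i] + ndc_x * right[i] + ndc_y * up[i] for i in range(3)]
    norm = math.sqrt(sum(c * c for c in d))
    return cam_pos, [c / norm for c in d]

# A splitscreen renderer would just pick a camera per pixel, e.g. based on
# which side of the dividing line the pixel falls, then call primary_ray.
origin, direction = primary_ray(1, 1, 4, 4,
                                (0, 0, 0), (0, 0, -1), (1, 0, 0), (0, 1, 0), 90)
```

Because the per-pixel decision is just "which camera do I use?", a dynamic split line costs essentially nothing extra, unlike rasterization's one-pass-per-viewport model.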
oz_1123 | 2 years ago
My assumption is that the shared scene and shaders might play a role here, although the direction of the rays will still vary.