I've loved these kinds of demos since I first discovered them. For those not in the know, this is _the_ site for finding them: http://www.pouet.net/
Though it didn't end up involving any procedural generation, I was tasked with cramming an OSD into an FPGA that was already using the majority of its resources. The device was processing an HDMI video stream and needed to display a menu and various other information to the user, for control and settings. All this at up to 1080p60. I ended up using about 16 KB of on-chip SRAM, if I recall correctly, and designed an old-school-inspired GPU a la the NES/SNES, with perfect alpha blending even! It was palette based, of course, and the memory was used to store sprites and a "command buffer". While pixels fly through the pipeline, the GPU reads the command buffer sequentially; each entry tells it which sprite to draw next, in order. Space between sprites is handled by rendering a fully transparent 1x1 sprite, repeated however many times necessary. I was able to draw the framing for the menus and displays with fancy drop shadows, logos, text, etc. It worked quite well! The only major limitation was my inability to handle exotic languages like Chinese, simply because there wasn't enough room for all the necessary sprites.
While the FPGA handled real-time, on-the-fly rendering, the on-board MCU was the one uploading sprites and configuring the command buffer over SPI. I had to build up all the software to not only drive the GPU but also handle the OSD. If you think coding up GUIs on a desktop is bad, just wait until you try to design a GUI system completely from scratch on an MCU!
I can only imagine what demoscene artists would be able to accomplish if they could design their own GPUs on an FPGA!
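The command-buffer scheme described above can be sketched as a rough software model. To be clear, none of this is the actual hardware design: the `(sprite_id, repeat)` command format, the sprite/palette layout, and the blend math are all my own assumptions for illustration, matching only the behavior the comment describes (palette lookup, in-order sprite drawing, gaps as repeated 1x1 transparent sprites, alpha blending over the incoming video).

```python
def blend(over, under):
    """Alpha-blend an OSD pixel (r, g, b, a) over an underlying video pixel (r, g, b)."""
    r, g, b, a = over
    vr, vg, vb = under
    inv = 255 - a
    return ((r * a + vr * inv) // 255,
            (g * a + vg * inv) // 255,
            (b * a + vb * inv) // 255)

def render_scanline(commands, sprites, palette, video_line):
    """Walk the command buffer in order, compositing sprites over one video scanline.

    commands:   list of (sprite_id, repeat) pairs; gaps between sprites are
                encoded as a 1x1 fully transparent sprite repeated `repeat` times
    sprites:    sprite_id -> row of palette indices
    palette:    palette index -> (r, g, b, a)
    video_line: incoming video pixels as (r, g, b) tuples
    """
    out, x = [], 0
    for sprite_id, repeat in commands:
        for _ in range(repeat):
            for idx in sprites[sprite_id]:
                if x >= len(video_line):
                    return out  # ran off the end of the scanline
                out.append(blend(palette[idx], video_line[x]))
                x += 1
    out.extend(video_line[x:])  # video passes through untouched after the last sprite
    return out
```

With palette index 0 as fully transparent, a command list like `[(0, 2), (1, 1)]` skips two pixels of video and then draws sprite 1, which is presumably close in spirit to how the gaps were handled.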
Sounds awesome! When I was in school I made a "Game Boy with vector graphics" using an FPGA and a microcontroller. I synthesized a GPU and display driver on the FPGA and sent vector drawing commands from the microcontroller which ran a simple game. It looked like this:
The eye was pretty interesting. What are these primitives? smoothstep() is apparently "Hermite interpolation", but what about colsca() (color scale?) and collerp() (color linear interpolation?)
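For what it's worth, smoothstep() is the standard GLSL cubic Hermite ramp. colsca() and collerp() aren't standard names, so the definitions below are pure guesses (per-channel color scale and color lerp), sketched only to show what such primitives typically look like:

```python
def smoothstep(edge0, edge1, x):
    """GLSL-style cubic Hermite interpolation: 0 below edge0, 1 above edge1."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def colsca(col, s):
    """Guess: scale each channel of a color by s."""
    return tuple(c * s for c in col)

def collerp(a, b, t):
    """Guess: per-channel linear interpolation between two colors."""
    return tuple(ca + (cb - ca) * t for ca, cb in zip(a, b))
```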
If this is from 2008, does anybody know what's up with one of the last slides, which states: "There was an interesting procedural full 3d image in this slide. After some explanations on it, speech attendees were flashed and their short term visual memory erased."?
But it could also be a reference to the 'visual knock-out capsules' from Alfred Bester's SF novel 'The Demolished Man':
"They were cubes of copper, half the size of fulminating caps, but twice as deadly. When they were broken open, they erupted a dazzling blue flare that ionized the Rhodopsin - the visual purple in the retina of the eye - blinding the victim and abolishing his perception of time and space."
fpgaminer | 9 years ago
blevin | 9 years ago
http://www.linusakesson.net/scene/parallelogram/
niedzielski | 9 years ago
https://github.com/niedzielski/swankmania/blob/master/DSC057... https://github.com/niedzielski/swankmania/blob/master/DSC057...
iverjo | 9 years ago
https://www.dwitter.net/d/104 (tunnel)
https://www.dwitter.net/d/302 (old school effect)
https://www.dwitter.net/d/406 (rotating cylinder)
pjmlp | 9 years ago
M4v3R | 9 years ago
For me, achieving something like this in 4k is just mindblowing.
wyldfire | 9 years ago
kretash | 9 years ago
Don't know if it's the same one, but all it seems to do is scaling and clamping.
dharma1 | 9 years ago
sigvef | 9 years ago
zokier | 9 years ago
> Misconception: “4k graphics are not interesting, it’s just a 4k intro without animation and music”. Wrong
agumonkey | 9 years ago
Keyframe | 9 years ago
atesti | 9 years ago
fractallyte | 9 years ago