item 7730415

Things that drive me nuts about OpenGL

97 points| nkurz | 12 years ago |richg42.blogspot.com | reply

71 comments

[+] jarrett|12 years ago|reply
One of the biggest headaches for me is debugging. As the author says, the facilities for reading state back out are often questionable. Even where they're not, I'd rather not spend all my time rolling my own custom OpenGL debugging tools. I'd love a cross-platform OpenGL debugger--even if it only handled basic stuff.

For example, when nothing renders, I don't want to waste an hour staring at my code with no direction until I realize I forgot a call to glEnableVertexAttribArray. Instead, I'd like to boot up my trusty debugger and go through a sane process of narrowing down the problem, like I do for just about every other class of bug.
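For anyone who has hit the same wall: the failure mode described above is typically a one-line omission in otherwise valid setup code. A minimal sketch in C (assumes a current GL context and a `vertices` array; the buffer name and attribute index are illustrative):

```c
/* Typical vertex attribute setup. Omitting the glEnableVertexAttribArray
 * call below renders nothing at all -- and raises no GL error, so there
 * is nothing for a debugger-less workflow to catch. */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)0);
glEnableVertexAttribArray(0);   /* the easy line to forget */
```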

Also a sane way to debug shaders would be fantastic. The usual advice is to write debug info out as color values. The fact that anyone considers that a healthy debugging strategy just illustrates how far behind graphics programming is in terms of developer friendliness.

I don't know if it's better on other APIs. OpenGL is the only one I use, because I never have occasion to develop Windows-only apps.

[+] devbug|12 years ago|reply
You should definitely check out bgfx (https://github.com/bkaradzic/bgfx). It abstracts various graphics APIs for you, making development and debugging markedly easier. It also stops you from having to deal with the complex state-machine known as GL.
[+] flohofwoe|12 years ago|reply
If you're on an Nvidia GPU, check out Nvidia Nsight. And this is also one problem with GL: when good debugging/profiling tools exist, they often only work on one OS and/or for specific GPUs (I hope this is where VOGL will come in and fix that).

Other than that, I don't even think of OpenGL as a single standard anymore. It's more like "Nvidia GL", "AMD GL", "Intel GL", "Apple GL", etc... there is a core set of functionality which works across all implementations (and which could be cleaner), but if performance is more important than easy portability, you need to implement driver-specific code paths anyway. Whether this is good or bad, I haven't completely made up my mind. At least with GL you have an environment where GPU vendors can experiment and compete through extensions.

[+] peterashford|12 years ago|reply
Hell yes. Especially with shaders - there are some debuggers for HLSL but GLSL gets no love :-{
[+] smw|12 years ago|reply

  * GL extensions are written as diffs vs the official spec
  So if you're not an OpenGL Specification Expert it can be extremely
  difficult to understand some/many extensions.
This is spot on. I was thinking last week about writing a webapp that displayed the spec with a list of extensions you could check to have them merged in.

Then I decided I was yak shaving and went back to actually working on my project.

[+] jamesu|12 years ago|reply
After spending a few months porting a Direct3D9 game to work in OpenGL 2.x, this article certainly resonates with me. It's quite easy to get code working nice and fast in one driver while it stalls and drops to 1 fps in another, because you didn't anticipate that the driver developer designed their version of the API around different performance characteristics.

Meanwhile in Direct3D9, the game still runs smoothly at 60+ fps on all major drivers on most recent hardware. Granted, it's a bit of an apples-vs-oranges comparison, but it certainly causes a lot of headaches, especially when you need to go so far as to modify the art so it batches better.

There is also still a lot of conflicting information on how best to use OpenGL. OpenGL 3.x certainly helped by consolidating a lot of stuff which was in extensions, but in my case it's not really that good for me as I still have to put up with the land of OpenGL 2.x.

[+] yoklov|12 years ago|reply
> OpenGL 3.x certainly helped by consolidating a lot of stuff which was in extensions, but in my case it's not really that good for me as I still have to put up with the land of OpenGL 2.x.

Ha, where I work we still get support tickets about our ancient GL1.5 renderer from time to time. If only we could drop it.

And then I get home and see people on /r/gamedev suggesting that OpenGL 3.2 is outdated and not even worth supporting anymore. I even got downvoted for saying that my less-than-three year old laptop ran GL3.2. Maybe I'm just in need of an upgrade...

[+] azakai|12 years ago|reply
> Mantle and D3D12 are going to thoroughly leave GL behind (again!) on the performance and developer "mindshare" axes very soon.

Huh? Performance, maybe, but how is "mindshare" being measured here?

DirectX "beat" OpenGL a long time ago - does the author claim OpenGL beat its way back to the top? If so, I can only assume that was due to GL on mobile platforms. But those mobile platforms - iOS and Android - still use GL, and are growing. How can D3D12 and Mantle beat them when they don't even run on those platforms, while Windows - the platform they do work on - is anyhow already under DirectX's control?

Furthermore, GL is seeing another area of growth through WebGL, which now works on even Microsoft's browser.

Am I missing something? That mindshare statement seems completely off base.

[+] corillian|12 years ago|reply
He's referring to mindshare among professional (game) engine developers - those people who try to get maximum rendering performance out of multiple platforms. His statement is totally on target, for reasons I enumerated in a blog article on this subject back in December: http://inovaekeith.blogspot.com/2013/12/why-opengl-probably-...

As for WebGL, Apple intentionally disables WebGL support on iOS - except in iAds - so game devs can't circumvent the app store. Since WebGL is needlessly restricted to the feature set of the OpenGL ES specification, its usefulness is severely limited. WebGL also has many other problems, such as the fact that JavaScript is slow as hell.

[+] sounds|12 years ago|reply
I kind of inserted "because Valve, Steam, and Linux" into his statements about OpenGL versus DirectX, and then I was better able to understand where he's coming from.

I mean, you don't really think he has no idea what he's talking about, right? The game dev world is really, really big. It's worth noting he has worked at Valve for some time, so he's coming from the Gabe Newell world.

[+] wtallis|12 years ago|reply
I think it's instructive to compare with OpenCL. It was designed to be very similar to GL, but didn't have a backwards-compatibility legacy. It's still not exactly friendly, and some of the nicer bits are optional, but it does serve the purpose of being a hardware abstraction layer for parallel processing. It strikes me as being many times simpler than GL, even though its fundamental purpose is only a bit simpler.
[+] greggman|12 years ago|reply
OpenCL is actually much worse. It requires way more querying of what the hardware supports, rather than abstracting it away like other GPGPU libraries do.

While OpenGL can be full of extensions, if you ignore the extensions and stick to the base, it's pretty easy to write a program that doesn't have to care what system it's on.

[+] sharpneli|12 years ago|reply
I truly hope the next GL version will be similar to OpenCL. There is no mutable global state, so it's threadsafe (one function in OpenCL 1.0 had such state, but it was swiftly deprecated and is no longer used).

Most importantly there is no global binding. Instead of binding one just sets kernel arguments and executes the kernel.
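The contrast with GL's global bind points shows up in the host-side dispatch code. A sketch of the OpenCL pattern described above, assuming the kernel, command queue, and buffers have already been created (variable names are illustrative):

```c
/* OpenCL dispatch: arguments are attached to the kernel object itself,
 * not to global bind points, so no other code path can clobber them
 * between setup and execution. */
clSetKernelArg(kernel, 0, sizeof(cl_mem), &input_buf);
clSetKernelArg(kernel, 1, sizeof(cl_mem), &output_buf);

size_t global_size = 1024;              /* number of work-items */
clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                       &global_size, NULL, 0, NULL, NULL);
```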

[+] dkarapetyan|12 years ago|reply
So is there a reason graphics programming is still in the dark ages? This sounds like assembly programming, where you have N different add instructions with slightly different semantics for how flags are set.
[+] Hemospectrum|12 years ago|reply
The primary reason is that these APIs are very thin abstractions over the actual assembly-level instructions sent to the graphics card. It's certainly possible to write a higher-level abstraction; it's been done many times, in the form of game engines and so forth. The risk of doing so is that you narrow down the possible types of visual effects you can implement, and historically, the implementation of unique and original visual effects was a big part of the way AAA games competed with each other.

In principle there's no longer any reason why it has to work this way, because now we have shaders. These are a more domain-specific tool than a C API, and can thus safely present large cross-sections of graphics card functionality in a higher-level way without taking power away from the graphics programmer. So, for example, people modding Minecraft can introduce their own visual effects stages by hotloading shaders into the standard Minecraft graphics pipeline.

EDIT: I haven't talked about performance. For game developers, rendering a frame quickly is as important as having total control over the output of the rendering process. For the most part, building an API higher-level than OpenGL also means dictating a particular scene graph structure, as with OS-specific window system APIs. If the API imposes a certain way of organizing your scene graph, this can have serious impacts on your game's performance, because the scene graph traversal is optimized for one type of scene and you're using it for another. I'm not sure I'm simplifying this explanation very well, but that's the gist of it.

[+] flohofwoe|12 years ago|reply
And an "assembly-level graphics API" is exactly what is needed at the moment. The high-level abstractions are taken care of by game engines like Unity or UE, or by programming frameworks for a specific scenario (e.g. 2D UI rendering vs. 2.5D side-scroller games vs. AAA first-person shooters). One problem with GL (also D3D) is that it provides fairly high-level abstractions which don't exist at the hardware level and either limit flexibility or performance (e.g. OpenGL has "texture objects", when the GPU actually just sees a couple of sampler attributes and a blob of memory). Mantle provides exactly that simplified view of the GPU, and as a result is a much smaller API (but may be harder to code to).
[+] kcbanner|12 years ago|reply
The thing is, there are people that require that level of control to achieve the performance they want.
[+] kllrnohj|12 years ago|reply
Well the author also contradicts themselves. For example, this complaint:

> Drivers should not crash the GPU or CPU, or lock up when called in undefined ways via the API

runs directly counter to this complaint:

> They will not bother to re-write their entire rendering pipeline to use super-aggressive batching, etc. like the GL community has been recently recommending to get perf up.

Error checking = higher per-call overhead = slower performance.

If you think there should be a way to turn on a sort of safe mode where the driver holds your hand, that sounds reasonable, except for this:

> I've seen major shipped GL apps with per-frame GL errors. (Is this normal? Does the developer even know?)

In other words, the error checking the driver already does is largely ignored. Adding additional error checking at the cost of performance won't help at all.
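The per-call cost being traded off here is concrete: a typical debug-build error-check wrapper polls glGetError after every single call. A sketch (the macro name is made up):

```c
#include <stdio.h>

/* Wrap every GL call and poll glGetError after it.  This is exactly the
 * kind of per-call overhead that drivers and engines strip out of
 * release builds for performance. */
#define GL_CHECK(call)                                              \
    do {                                                            \
        call;                                                       \
        GLenum err_ = glGetError();                                 \
        if (err_ != GL_NO_ERROR)                                    \
            fprintf(stderr, "GL error 0x%04X at %s:%d: %s\n",       \
                    err_, __FILE__, __LINE__, #call);               \
    } while (0)

/* usage: GL_CHECK(glBindTexture(GL_TEXTURE_2D, tex)); */
```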

[+] wolfgke|12 years ago|reply
> 20 years of legacy, needs a reboot and major simplification pass

Since OpenGL 3.2 (http://en.wikipedia.org/wiki/OpenGL#OpenGL_3.2) there has been a strict division between the core profile and the compatibility profile. If you want a significantly simplified API, just request a core context instead of a (default) compatibility context.
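With a windowing library such as GLFW (one common way to do this; raw WGL/GLX context creation is considerably more painful), requesting a core context comes down to a few hints. Sketch only:

```c
/* Request a 3.2 core-profile context instead of the default
 * compatibility context. */
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);  /* needed on OS X */

GLFWwindow *window = glfwCreateWindow(640, 480, "core profile", NULL, NULL);
```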

[+] pjmlp|12 years ago|reply
It doesn't help when you need to target DX9 cards that only have OpenGL 2.1 drivers.
[+] paddlepop|12 years ago|reply
Having only ever made OpenGL applications, I feel a transition to DirectX may be in order
[+] tachyonbeam|12 years ago|reply
Yeah, so your code can only run on Windows. That's clearly the future. You should trust Microsoft, the company with the most vision.
[+] frozenport|12 years ago|reply
> The API should be simplified and standardized so using a 3rd party lib shouldn't be a requirement just to get a real context going.

Nonsense! OpenGL is an evolving specification for the best of the best. Why take away the tools that let games push a GPU to its theoretical performance limits? Removing features is unjustified if we know how to use them.

What the author needs is a wrapper and many exist. For example, Qt will let you write graphics code that runs on the desktop and mobile.

Certainly nobody would call for the end of WinAPI just because that's how we wrote the other APIs!

[+] pubby|12 years ago|reply
Simplification does not mean removal of performance or features. In fact, most of the recommendations he makes (such as DSA) could perform better than current OpenGL.

>Removing features is unjustified if we know how to use them.

This is not the path that OpenGL has taken, as shown by all the deprecations and removals done in previous versions.
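To make the DSA point above concrete, here is the same texture setup in classic bind-to-edit style versus direct state access (the entry-point names below are the ARB_direct_state_access forms, which later became core in GL 4.5). Sketch only:

```c
/* Classic bind-to-edit: configuring the object disturbs global state. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);        /* clobbers the binding point */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* DSA: the object is named directly; nothing is bound, and no other
 * code path's binding state is disturbed. */
GLuint tex2;
glCreateTextures(GL_TEXTURE_2D, 1, &tex2);
glTextureParameteri(tex2, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
```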

[+] twelvechairs|12 years ago|reply
Some good points here, but what it misses is that, generally, time wasted on a crappy API is less than time wasted having to learn a different API for each platform. Performance is a bigger issue, and it will be interesting to see whether OpenGL can improve here, or whether Mantle can spread to other platforms (and be reasonably fast on them).
[+] soup10|12 years ago|reply
I disagree; crappy API design wastes an enormous amount of developer time and effort. OpenGL is full of legacy cruft and has lots of room for improvement, streamlining, and simplification (the same goes for D3D12, for that matter). There is a lot of needless complexity, hoop jumping, and wheel reinventing when it comes to 3D graphics programming (not to mention countless common "gotchas" and tricks of the trade that are not as well documented and easy to learn as they could be).
[+] zurn|12 years ago|reply
We should be much more worried about the higher level interfaces/languages that app developers actually program to. They're stuck in the stone age. The level of abstraction needs to be raised many notches and programmers need to be freed from running around in circles after performance tricks.
[+] flohofwoe|12 years ago|reply
Nah, just use a higher-level framework or a 3D engine for this. You give up some control and performance, but gain productivity. What we need at the moment is fewer abstractions, because D3D's and OpenGL's abstractions don't fit current GPUs very well (D3D works around this somewhat by breaking API backward compatibility with each new release).
[+] jbb555|12 years ago|reply
I can't argue with most of this.