robmaister's comments

robmaister | 5 years ago | on: Apple Terminates Epic Games' Developer Account

Part of the original dispute was Epic offering a 20% discount on V-Bucks if the payment was processed through Epic instead of Apple.

The end user got a nice discount and Epic still pockets more because interchange fees are ~2.5% and not 30%.
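The arithmetic works out even with the discount. A quick sketch using the percentages above and an illustrative $9.99 price (the price is made up; the rest is the math as described):

```python
# Rough margin comparison for a $9.99 V-Buck purchase (illustrative price).
list_price = 9.99

# Through Apple IAP: Apple keeps 30% of the list price.
apple_net = list_price * (1 - 0.30)      # what Epic pockets via Apple

# Direct purchase: 20% discount for the user, ~2.5% interchange fee.
direct_price = list_price * (1 - 0.20)   # what the user actually pays
direct_net = direct_price * (1 - 0.025)  # what Epic pockets directly

assert direct_net > apple_net  # Epic earns more AND the user pays less
```

So even after handing the user 20% off, processing the payment directly nets Epic more per transaction than the full-price sale through Apple would.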

The Epic Games Store had a similar strategy, cut the fee for devs and incentivize end users to move onto their platform (weekly free games). They still make a boatload of money even though it's not as much as Valve's money printer.

In general I see this as a great thing for devs (indie in particular) if it triggers more competition to bring platform fees down across the board.

Epic isn't being overtly greedy with end users (yet)

robmaister | 5 years ago | on: Apple Terminates Epic Games' Developer Account

Retailers take far less than 30%. Consoles provide first-party QA, disc/cartridge manufacturing and distribution, bandwidth, and customer service among other services. Whole apps on the mobile stores also have similar services provided by the platform - testing (nowhere near as rigorous as consoles), discovery, bandwidth, customer service, etc. In both cases, a large cut can be reasonably argued for.

IAP is nothing more than moving money around, in an app the user has already discovered and downloaded, that will affect a user's account on the developer's servers. The service to the end user is indistinguishable from what is provided by any other payment gateway for ~3% (Stripe, Square, PayPal, etc.). Your card is charged and you have something new in the app.

The only reason I can think of for why IAP takes the same cut as whole apps is that most if not all apps would be "free" but open up to a screen that makes you pay for the app via IAP to take advantage of the reduced fee.

This could be mitigated by ToS restrictions preventing this exact situation, but there would still be a ton of gray area like "pro" versions of apps.

Opening up to third-party payment processors for IAP would create a vacuum in one of their highest-margin and most consistent revenue sources, so they won't be doing that willingly. Opening up to third-party stores would be more tolerable, but any sufficiently large developer will move to their own store, do everything themselves, and pocket the ~27.5% (if they are their own payment gateway, interchange fees are ~2.5%).

It's an interesting situation because the platforms want to be paid for all the services I listed above, but in Epic's case they already have the infrastructure to handle everything on their own, and they offer a fully-featured game for free, with the only source of revenue being a conversion of actual money to V-Bucks.

There is no place where the platform can provide a service that Epic would get any value from, but they are imposing a 30% fee in the only place they can: payment processing.

robmaister | 5 years ago | on: Game Programming Patterns (2014)

I 100% agree and I also work in AAA. On the subject of StackOverflow being worthless... I've been working for 4 years now and I have learned pretty consistently over that time that the best way to solve a problem is to just keep digging deeper into the system with the issue.

You will eventually figure out that either the system has a bug or you used it wrong. And along the way you will familiarize yourself more with the system. (and debugging tools!)

The learning effect of this snowballs the more you do it. I'm a year and a half into a UE4 project and am now the "engine person" who people come to with questions or odd crashes.

I have seen every single pattern this book describes used somewhere within Unreal. They are all super valuable to know especially within game programming where problems are novel and often open ended.

robmaister | 5 years ago | on: A first look at Unreal Engine 5

AMD GCN absolutely supports async compute[1]. Radeon cards for years would only make use of the ACEs in pure compute contexts, as OpenGL and DX11 had no concept of a secondary command queue and could not make use of them. This is a big part of the reason why Vulkan/DX12 require so much boilerplate to get a triangle rendered.

The PS3's SPUs definitely count as async compute, especially with how they were used later in the console's lifecycle[2], once people had time to familiarize themselves with them.

However, in the current gen consoles, you don't have to deal with a different ISA, command queuing, and shared memory between the GPU and CELL processor. You are only writing HLSL/GLSL/PSSL and setting up an aggressive amount of fencing to transition resources between readable and writable states within the GPU.

[1]: https://www.anandtech.com/show/9124/amd-dives-deep-on-asynch... [2]: https://www.slideshare.net/DICEStudio/spubased-deferred-shad...

robmaister | 5 years ago | on: A first look at Unreal Engine 5

I work in AAA. I'm talking lower level things like picking which "type" of GPU memory to allocate, access to specific registers in shaders, etc. PC didn't have real async compute capabilities until DX12, for example.

On the CPU side, yeah, it's 100% just a normal computer, but nothing will be interrupting your threads. I think Windows 10 tries to do this in its new Game Mode too.

Sorry for assuming the link was the PS5 one. I have a UDN account and their login setup sometimes just dumps me to their homepage, so I made the assumption that it was the same video that I had seen everywhere else.

robmaister | 5 years ago | on: A first look at Unreal Engine 5

Unreal's Python integration is incredible. I made some modifications to it at work to run a version of WinPython (both for loose scripts outside the engine and for pip access).

It's great for complex asset pipelines and quick one-off editor scripts, at least in my experience.

The only thing that felt a bit wonky to me was attempting to use bitflag enums.

robmaister | 5 years ago | on: A first look at Unreal Engine 5

Trying to find a GC-related crash on a stale pointer that is only reproducible in a Shipping build on a single platform is fun (depending on your definition of fun).

(Shipping in UE4 means release mode with full optimization enabled, most logging & profiling stripped out, etc)

robmaister | 5 years ago | on: A first look at Unreal Engine 5

It's possible that it's an async compute task, which could potentially miss a frame and show old data (instead of the whole frame missing vsync).

Also this demo is supposed to be running on a PS5 devkit, which means that you'd need a devkit to run it, which means that you'd need to sign NDAs and join their developer programs and whatnot.

Having worked with current-gen consoles (meaning I can't go into any amount of detail), it's not trivial to get a demo like this running well on PCs. This demo is likely making use of every platform-specific feature available to them.

That said, the demo might be accessible through some back channels if you're already a UE4 licensee and have a PS5 devkit.

robmaister | 6 years ago | on: Age Discrimination at Work

I'm 25 and work at a relatively large game studio. My boss and the whole chain up (3 people) are all in their late 40s or older. I've learned so much from them, as well as from the senior engineers I sit next to.

After my experience here, a low average age on an engineering team is a red flag. If you weren't offered that job, or turned it down, you're probably better off.

robmaister | 6 years ago | on: The End of Moore’s Law and Faster General Purpose Computing, and a Road Forward [pdf]

Very true. I recently passed up a great deal on an NVMe drive because I can't use it in my current setup. I believe PCIe 2.0 will also bottleneck USB 3.2 (or Gen 2x2, or whatever the naming scheme is now) and whatever GPU I upgrade to next (I've read it's a bottleneck for the GTX 1080 and up).
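A back-of-envelope check using the commonly cited effective rates (the NVMe drive spec below is an assumed figure for a typical PCIe 3.0 x4 drive):

```python
# Back-of-envelope bandwidth comparison, in GB/s.
pcie2_per_lane = 0.5              # PCIe 2.0: ~500 MB/s per lane after 8b/10b encoding
pcie2_x4 = 4 * pcie2_per_lane     # typical NVMe link width -> ~2 GB/s ceiling

usb_3_2_gen2x2 = 20 / 8           # 20 Gbit/s signaling -> 2.5 GB/s upper bound
nvme_gen3_drive = 3.5             # assumed sequential-read spec of a Gen 3 x4 drive

assert pcie2_x4 < usb_3_2_gen2x2  # PCIe 2.0 x4 can't saturate Gen 2x2 USB
assert pcie2_x4 < nvme_gen3_drive # ...and caps a Gen 3 NVMe drive well below spec
```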

NVMe is the only thing that will cause a noticeable improvement for me though, seeing as I still game on a 1080p60 monitor and generally don't need that sort of speed from any USB peripheral.

Still, the processor itself kicks ass, and I think the only reason most people would need to upgrade is for newer peripherals.

robmaister | 6 years ago | on: Graphics Programming Black Book by Michael Abrash (2001)

I work in games doing mainly graphics work - it's amazing how many of these concepts still exist and have been recycled in interesting ways. Well worth the read if you're in my line of work.

For example, the concept of "sorted spans" in Quake is conceptually the same as how "light culling" is done in deferred and forward+ rendering pipelines. The first I'd heard of the technique was how Battlefield 3 used the PS3's SPU to do light culling for 64x64 blocks of pixels at a time.
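That tile-based culling can be sketched on the CPU like this. The screen size, tile size, and light data here are made up for illustration; real implementations run the per-tile test on SPUs or in a compute shader:

```python
# Sketch of tiled light culling: split the screen into tiles and record,
# per tile, only the lights whose screen-space bounding circle overlaps it.
TILE = 64
WIDTH, HEIGHT = 1920, 1080

# (center_x, center_y, radius) in screen space -- illustrative lights
lights = [(100, 100, 80), (960, 540, 300), (1800, 1000, 50)]

def circle_overlaps_tile(cx, cy, r, tx, ty):
    # Clamp the circle center to the tile rectangle to find the nearest point.
    x0, y0 = tx * TILE, ty * TILE
    nearest_x = min(max(cx, x0), x0 + TILE)
    nearest_y = min(max(cy, y0), y0 + TILE)
    return (cx - nearest_x) ** 2 + (cy - nearest_y) ** 2 <= r * r

tiles_x = (WIDTH + TILE - 1) // TILE
tiles_y = (HEIGHT + TILE - 1) // TILE

tile_lights = {
    (tx, ty): [i for i, (cx, cy, r) in enumerate(lights)
               if circle_overlaps_tile(cx, cy, r, tx, ty)]
    for ty in range(tiles_y) for tx in range(tiles_x)
}

# Shading each pixel now only loops over its tile's short light list
# instead of every light in the scene.
```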

robmaister | 6 years ago | on: Graphics Programming Black Book by Michael Abrash (2001)

From my experience (graduated in 2016), most interviewing is centered around algorithmic complexity, or at least regurgitating logarithmic-complexity algos.

Potential hires still in or just out of school should have no problem answering those questions, but a few years out and most people forget those skills since most of the time the answer is to use an existing implementation or find a way to avoid the problem entirely. All of the people I know with a 4-year CS degree learned all about that stuff in their data structures/intro to algo classes.

I work in games and have had to implement a few data structures on my own (mainly specialized trees and graphs). I've seen them help performance a ton, and I've also had to scrap one or two of them because the naive implementation was faster. Nowadays, a lot of indirection means your processor spends most of its time waiting on memory reads, while flat arrays can be loaded into CPU caches a lot more efficiently.
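The layout difference looks like this. Python won't reproduce the cache behavior, so this is only a structural sketch of the two shapes:

```python
# Pointer-chasing layout: each node is a separate heap allocation, so
# traversal hops between scattered addresses (cache-unfriendly on hardware).
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def sum_linked(head):
    total = 0
    while head is not None:
        total += head.value
        head = head.next
    return total

# Flat layout: contiguous storage, so the hot loop walks sequential memory,
# which hardware prefetchers handle very well.
def sum_flat(values):
    total = 0
    for v in values:
        total += v
    return total

# Same answer either way; on real hardware the flat version usually wins.
values = list(range(1000))
head = None
for v in reversed(values):
    node = Node(v)
    node.next = head
    head = node

assert sum_linked(head) == sum_flat(values) == 499500
```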

robmaister | 6 years ago | on: I was a 10x engineer and I’m sorry

I personally have a huge amount of respect for all 3 as engineers.

There are plenty of legendary gamedev engineers who are somewhat known, most of them hang out on Twitter (Carmack has a great Twitter account IMO)

I've only worked in the industry for a little over a year now, but I've heard some great stories from the senior engineers.

robmaister | 6 years ago | on: Uber, Lyft drivers manipulate fares at DCA causing artificial price surges

You still get to deduct half of your SE tax from your ordinary income, which is fairly generous from the IRS's perspective.

From the perspective of someone like me, my W2 job already puts me in the 24% bracket this year. If you use a platform to freelance, they'll take their 5-20% cut, you'll owe FICA at ~15%, ordinary at 24% (a little less because W4 withholding tends to overestimate), and state taxes at 5-10%. It's fair because those are what the rates are and I know what I'm getting into, they just appear higher at first glance.
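A rough stack-up of those cuts on a $1,000 invoice, with rates picked from the ranges above; this ignores the half-of-SE-tax deduction mentioned earlier and simplifies brackets, so it's illustrative only:

```python
# Illustrative freelance take-home on $1,000 of billings (rates assumed
# from the ranges above; brackets and deductions deliberately simplified).
gross = 1000.00
after_platform = gross * (1 - 0.15)   # platform cut, say 15%

se_tax = after_platform * 0.153       # FICA, both halves (~15.3%)
federal = after_platform * 0.24       # marginal federal bracket
state = after_platform * 0.07         # state income tax, say 7%

take_home = after_platform - se_tax - federal - state
# Roughly 45% of the invoice survives at these rates.
```

None of those individual rates is surprising on its own; it's the compounding that makes the headline number feel high.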

It's also easy for me as a software engineer to set my own rate to account for the extra taxes I'm responsible for. In fact, my rate is significantly higher on the one particular platform where I'll only see ~$150 of the first $500 billed to a new client after everyone takes their cut (on which the client also pays ~$10 in transaction fees to the platform).

It's nowhere near as easy for an Uber/Lyft driver to do so, since their rate is set for them by some algorithm that tries to balance supply and demand. These companies went out of their way to classify drivers as contractors, but still want to present an interface to riders that tells them they're taking an Uber, not that they used Uber to find an independent driver.

IMO they can't have it both ways. Drivers should be able to set their own rates and riders should be presented with a list of drivers and rates to pick from. Uber still gets to take their cut and they can still set up surge pricing as a % of the driver's rate. There can still be an option to pick the cheapest driver for those who don't care. This would better reflect what these companies are trying to legally classify themselves as.

robmaister | 6 years ago | on: Uber, Lyft drivers manipulate fares at DCA causing artificial price surges

One year I made around minimum wage (paying myself just barely enough to live during the early stage of a startup) and paid, I think, around 25% effective just in federal taxes.

I now make a lot more than that with a W2 job as my primary source of income and my effective federal tax rate was about half of what it was that year.

IMO the FICA cap should be raised significantly and the rate should be reduced. There are entry-level FAANG employees who aren't paying FICA on their full salary. But that's an entirely different topic.

robmaister | 6 years ago | on: Uber, Lyft drivers manipulate fares at DCA causing artificial price surges

The thing about contracting is you don't have to take the job if they aren't willing to negotiate. If no one wants to build your website for $5, then the client is either going to have to pay more, or not make a website.

All of the drivers around that airport incur roughly the same costs for gas, wear and tear, rent, etc. So while you might find someone halfway across the world to build your website for $5, you won't be able to do the same for your driver between the airport and your hotel.

robmaister | 6 years ago | on: A Guide to Rust Graphics Libraries as of 2019

Yes, it's absolutely inefficient but for those platforms it's the only way to execute your own code on the GPU. Metal has a bitcode format that I know nothing about and I believe older DirectX had some intermediate format that was binary. Both are proprietary and only documented via reverse-engineering, so they're not great targets.

Most of the extra cost of feeding in SPIR-V could also be offset by generating the text shader code at compile/packaging time, so those builds don't carry the original SPIR-V.

robmaister | 6 years ago | on: A Guide to Rust Graphics Libraries as of 2019

The first link I included has a wonderful diagram of this (but from the perspective of HLSL instead of GLSL), the gist of it is:

GLSL gets compiled to SPIR-V ahead of time by a compiler of your choice (probably glslangValidator). You can take that binary blob and feed it into Vulkan (vkCreateShaderModule) or OpenGL (glShaderBinary as long as you have the right extension).

For everything else, Khronos has a tool called SPIRV-Cross (second link), which reads the SPIR-V binary data and emits a text file/string in ESSL for OpenGL ES, MSL for Metal, or HLSL for DirectX <=11. Those all go through the "normal" paths for loading shader code in their own APIs.
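The whole flow can be scripted offline. A hedged sketch: the tool names are the real Khronos CLIs (glslangValidator, SPIRV-Cross), but the exact flags and file names here are assumptions, and each step is skipped if the tool isn't installed:

```python
import pathlib
import shutil
import subprocess

# A trivial GLSL fragment shader to feed through the pipeline.
pathlib.Path("tri.frag").write_text(
    "#version 450\n"
    "layout(location = 0) out vec4 color;\n"
    "void main() { color = vec4(1.0, 0.0, 0.0, 1.0); }\n"
)

# GLSL -> SPIR-V (ahead of time), if glslang is available.
if shutil.which("glslangValidator"):
    subprocess.run(
        ["glslangValidator", "-V", "tri.frag", "-o", "tri.frag.spv"],
        check=True,
    )

# SPIR-V -> MSL (Metal) or HLSL (DirectX <= 11) via SPIRV-Cross, if available.
if shutil.which("spirv-cross") and pathlib.Path("tri.frag.spv").exists():
    subprocess.run(
        ["spirv-cross", "tri.frag.spv", "--msl", "--output", "tri.metal"],
        check=True,
    )
    subprocess.run(
        ["spirv-cross", "tri.frag.spv", "--hlsl", "--shader-model", "50",
         "--output", "tri.hlsl"],
        check=True,
    )
```

The emitted `.metal`/`.hlsl` text then goes through each API's normal shader-loading path, while the `.spv` blob goes straight into Vulkan or OpenGL.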
