optionalparens's comments
optionalparens | 9 years ago | on: What Could Have Entered the Public Domain on January 1, 2017
It's always a matter of how much attention you grab and what people you piss off. Things can look like they are fine, legal, and alright, until they aren't. It's hard to know if a strategy works until it doesn't.
Beyond the perception issues I mentioned, all I am saying is that legal issues tend to be somewhat volatile because what is legally or ethically correct isn't necessarily what happens in reality. Combine that with other motivators like politicians and deep pockets, and it gets pretty scary.
I am glad my spouse works on what I would call a more moral side of prosecution, one that's hard to argue the other way. Neither she nor I can say that entirely good things happen in other areas of prosecution, like piracy and digital crimes. If someone wants to cause you trouble, they can often find a "way," or at the very least dig and bother enough until they invent something or shake things up to find something. The cost of a legal battle alone is sometimes its own form of extortion and can be a deterrent. Gross behavior, but it happens.
optionalparens | 9 years ago | on: What Could Have Entered the Public Domain on January 1, 2017
Chinese walls/firewalls don't quite hold up in court as well as one might think. Obviously it can depend on the country and circumstances, but anytime people feel there is a criminal conspiracy at work, they come at you hard. It is also hard to keep such a wall "clean," as all it takes is one person being stupid to establish a link.
Again, it's also about fear and human nature, not reality. I'm sure there are people with great opsec and planning who don't have this problem, but they also don't necessarily attract enough attention to get shut down. Factor in that many admins start a site because of ego, and that doesn't lead to great opsec. Nor do most people want to use software they know nothing about.
I should probably add FWIW, married to a prosecutor who goes after international criminal conspiracies among other things. Not the computer stuff thankfully, but I've seen and heard about what happens when the words criminal conspiracy get thrown about and people with money are involved. Doubly dangerous when extradition treaties exist or the country handling the case is a bigger power like the US.
optionalparens | 9 years ago | on: What Could Have Entered the Public Domain on January 1, 2017
Most private trackers use the same pretty dismal PHP BitTorrent tracker libraries/servers that work with MySQL or SQLite. It would of course be possible to replicate these, or do something as simple as cron jobs, rsync, etc., but that's often not the real issue.
The larger issue is the legal aspect. Most of the time a big tracker is taken down, the owners are known to the authorities or don't want to risk being known, even by association. As such, continuing the same site, even under different management, would put them at risk. If I remember correctly, in the case of what.cd and a few others, they put out notices saying they intentionally nuked their databases for this reason, among others.
If for example a big site shuts down and one of the members spins up a new instance, the fear is that the original owners would be liable. It doesn't matter if this holds up or not in court or is even a real thing, merely the threat of it is not worth the trouble to many people. Not many people want to sign up to go to jail or deal with financial threats because of sharing movies or music.
Distributing the database doesn't solve the legal aspect. In fact, distributing the database might be seen as more ammunition to threaten the original site admins as being "enablers." So in theory what you are saying makes sense and would be a good idea, but in the face of reality and human behavior, it is much harder and more than a technical issue.
optionalparens | 9 years ago | on: Facebook Doesn’t Tell Users Everything It Really Knows About Them
Regarding hypotheticals, if you collect data, it is always there for someone to abuse, whether they are technically "allowed" to do so or not by laws, company policy, or otherwise. It can and does happen more often than we believe as I came to find out from friends and family who work in government and legal roles. I am more of the cautious sort and would rather try to make a best effort that might not be perfect to simply mitigate and minimize the issue of the wrong thing happening. Though these cases may never happen, I am in the camp of "let's not make it easier."
As an aside, a lot of feelings towards these issues can be influenced by environmental and contextual reasons. For instance, I was raised in a family that lost a lot of people due to, among other things, data collection, humans selling each other short, supposedly good people making bad choices, and individuals acting highly in self-interest despite their "ethics." Further, for part of my childhood I grew up in a country under more constant and tangible threat than the US, and served in its army against very real threats during wartime. I'm certainly no action hero, as my job was more of an engineering and intelligence nature, but I have seen firsthand what people can, will, and tend to do with data, especially if money or physical security is involved, and especially on the other side of things (i.e. our enemies) if they are losing. As such, I am more sensitive than most when it comes to people knowing things about me. I assume "they" know everything, but as I implied, I try not to make it easier than it needs to be, and I actively throw in disinformation about myself. It helps if you know a thing or two about algorithms and the best ways to confuse them :)
optionalparens | 9 years ago | on: Why I close pull requests
In other words, you take measures to catch errors, mitigate failures, and protect yourself rather than say "go code and push to production everyone, we can just roll back!" That might work as stated in some domains, but not others.
optionalparens | 9 years ago | on: Why I close pull requests
Rather than regurgitate what most people here already said, let me list a few programming projects, domains, and tasks where at least thinking about design if not writing design documents or spending days, weeks, or months figuring it all out is worthwhile.
* Programming Languages
* Databases
* Operating Systems
* Medical Devices
* Safety Equipment
* Streaming containers/formats
* Encryption
* Security
* Manufacturing/Robotics
* Aerospace / Space
* App Dev Frameworks
* Game Engines
I could go on.
The point here is that there are plenty of things where thinking about it up front is beneficial, if not required, especially if some combination of the following (but not limited to these) is true:
* Lives are at stake
* Changing it later would be hard (programming languages are an egregious offender, I won't name names)
* Customer adoption will completely derail or forbid architectural changes
* Fixing it will require essentially doing it again from scratch
* Changes will force the creation of patches that will incrementally kill the project or slow future development
Frankly, I think we have too many things that are poorly designed. Most projects I see in nearly any domain are mostly set in stone once time and money are added to the mix. Everyone talks about redoing or fixing things, but it rarely happens except for minor changes. As projects scale up, few people can afford to constantly back out lots of changes and rearchitect everything. Those that do usually fail or don't get a good ROI, and those that don't change fail anyway.
I've worked with all kinds of people and though there are people I have great admiration for, I can safely say that 99% of them are idiots and have no business being programmers. I know it sounds harsh, but I've been doing this a long time. Too often I see the programmer's equivalent of an illiterate child who gets pushed through high school. So no, I don't trust people to do the right thing; I merely trust most people I work with not to act maliciously. Most of all, I don't trust myself. The progression as a programmer goes: your code sucks -> my code sucks -> all code sucks -> my code sucks, but I'll live with it, hope it's better than most, and ask people smarter than me for help.
Most better developers I know do in fact write some form of design document, even if it's just notes and justification for why X or Y won't work but Z "might" work. Many also take a lot of time to think about something before writing any code, but once they do, they actually finish much quicker with fewer bugs than the young programmers who want to "move fast." Of course none of this is universal, and as I said, it all just "depends." What do I know?
optionalparens | 9 years ago | on: Finger Trees: A Simple General-Purpose Data Structure (2006)
I also concur that cache misses are vital and sometimes don't get enough attention, since people hand-wave them away as micro-benchmarking, which often isn't true for critical code paths. I'll add that, again, this is why testing on your target hardware is important. You may think you know, but often you don't. I've seen too much stupid stuff where someone wrote code that performs great on their development desktop and awful on a server or a game console because of different CPU architecture and cache design or sizes.
Maybe someone can dig it up, but I've seen some of the sub-variants of red-black trees have dramatically better performance than I thought possible (or worse, ex: left-leaning), and the results can vary across runtimes. In addition to cache misses, there can be some other considerations for wanting to use a red-black tree like concurrent or parallel programming. In these cases, some of the cousin data structures like AVL trees can swing you for or against a red-black tree or other backing structure. I had this come up recently when selecting an interval tree that needed to be accessed between threads for instance.
Selecting a data structure is harder than most people make it out to be. There are just a lot of considerations at work, so you need to balance use-cases, performance (against those cases and raw), access patterns, allocations, and so on to come to a decision. Sometimes if you add up the total cost of doing something like replacing a red-black tree with other methods that are more cache friendly or have some other better characteristic, the overall performance can end up being worse in aggregate. So the answer is the same as always, "it depends," and the follow-up, "try it."
Parent is definitely right that the academic concerns and the practical concerns of those of us in the trenches are different. That's why arguing with pure textbook evidence is stupid and you should challenge anyone who does it. Textbooks are a starting point for making and justifying a conclusion, not a shortcut.
optionalparens | 9 years ago | on: Finger Trees: A Simple General-Purpose Data Structure (2006)
I want to add some general comments about anyone thinking about using a finger tree or anything else.
Give any potential data structure a try with real data, not contrived stuff, and benchmark on the actual target hardware and in a real code path if possible. Further, try it with real-world access patterns like adding, removing, ordering, etc. at different times and at different sizes - the results can be interesting. Even if something does well for one use-case, the other use-cases you need it for may perform awfully, so sometimes it's a compromise. Data structure performance is quite often not the performance of a single operation, but a function of several operations. This makes it even more important to test with real data when possible and to access it in different ways.
Moreover, too many times I see positive or negative benchmarks where someone is filling a data structure with exactly the same integer, or the same instance of an object, or something like that. In practice, the underlying data can introduce other considerations based on size, pointer chasing, memory allocations (especially if the data structure can dynamically resize), and so on in just about every language. Factor in hardware, and the results can change even further. Also ensure your benchmarking itself isn't flawed - we've been there and discussed it ad nauseam, and yet I continue to see bad benchmarking in the wild (particularly on the JVM or CLR).
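To illustrate the "same integer" pitfall, here's a minimal sketch (Python standing in for whatever language you actually benchmark in; the sizes and names are illustrative). Filling a structure with one repeated value makes both the list and the set look unrepresentatively good, while distinct keys show the real gap:

```python
import random
import timeit

def bench(data, probes=500, reps=5):
    """Time membership checks against a list vs a set built from `data`."""
    data_list = list(data)
    data_set = set(data)
    queries = [random.choice(data_list) for _ in range(probes)]
    t_list = timeit.timeit(lambda: [q in data_list for q in queries], number=reps)
    t_set = timeit.timeit(lambda: [q in data_set for q in queries], number=reps)
    return t_list, t_set

# Degenerate benchmark: every element identical. Every probe matches the
# list's first element, and the set collapses to a single entry - neither
# number tells you anything about real workloads.
degenerate = [42] * 5_000

# More realistic: distinct values, as real keys usually are.
realistic = random.sample(range(1_000_000), 5_000)

t_degenerate = bench(degenerate)
t_realistic = bench(realistic)
```

With the realistic data the linear scan loses badly; with the degenerate data the numbers are meaningless, which is exactly the problem with such benchmarks.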
As a related example, I've found that even the famed linked list can suck in the real world. I've always been a "choose the right data structure for the job" kind of guy, and yet sometimes it's more deceptive than it seems. You might say, oh, my access patterns are exactly a linked list, and yet a simple array with lots of helper cruft code will smoke it in practice. Likewise, I've seen shocking results replacing something with a skip list or another data structure that seems like it should perform worse, but depending on the data, size, hardware, and implementation, I get interesting results. I've seen plenty of stupid things, like someone who picked a hashtable for fast lookups, only to iterate through it in some other code path for some other use-case accessing the same data, completely murdering overall application performance due to selecting based on a single criterion. If you're using a language where you can create your own allocators, it can become even more interesting than you think - i.e. there are gains or losses to be had.
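A quick sketch of the array-vs-linked-structure point, again in Python as a stand-in (list is a dynamic array, deque is a linked structure of blocks). Even when access "looks like" a linked list, indexing into the middle punishes the linked structure:

```python
from collections import deque
import timeit

N = 10_000
arr = list(range(N))   # Python list: a contiguous dynamic array
dq = deque(range(N))   # deque: a doubly linked chain of blocks

# Middle-of-collection indexed access: O(1) on the array, but the deque
# must walk from the nearest end, so the "list-like" structure loses badly.
t_arr = timeit.timeit(lambda: arr[N // 2], number=5_000)
t_dq = timeit.timeit(lambda: dq[N // 2], number=5_000)
```

The exact ratio depends on hardware and runtime, which is the whole point: measure on your target, don't trust the label on the structure.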
The moral is simple: "just try it." This comes up a lot in game programming, for example, and I've seen it have huge effects in embedded programming as well. I can't even begin to explain how much faster I've made things just by changing a data structure because of real-world conditions vs. lab conditions. Don't just assume that picking what the textbook says in the Big O table is what will actually dictate performance, positive or negative. There's a reason most good data structure books also contain explanations of behavior and performance, though even books don't and can't explain the real-world nuances of working with non-contrived data.
optionalparens | 9 years ago | on: Choosing Functional Programming for Our Game
Regarding speed, I think you misunderstood part of my comment. I simply mean you pick something decently fast when picking a scripting language with regard to the host language. Obviously Clojure is decently fast on the CLR in theory, but in practice it may be another matter given the properties of a game, which are far different from general-purpose computing.
As for the speed with regard to Clojure, I've covered much of the general stuff in other comments, but I was in part writing to the point of using it independent of Arcadia in a scripting context. For this, I am contending there aren't many advantages to using Clojure over other approaches as a general scripting language for most engines, ex: Unreal, CryEngine, in-house, etc. Unity obviously benefits because of the CLR, but there are still issues with both Clojure CLR and using something that is yet another abstraction on top of Unity, as others have pointed out. This makes life especially hard for newer or less-skilled game devs, which the original post's authors sound like they are. More than anything, unless you have unlimited time, do you want to get a game done or do you want to help fix a new runtime?
Additionally, I am not comparing Lua to Clojure for the CLR specifically, rather stating that most of the time you add a scripting language to an engine, it tends to be a language with certain properties. The reasons have little to do with Clojure as a language and more to do with scripting-specific needs, which Unity itself doesn't really handle very well depending on how you define "scripting." That is, any decent scripting engine must address some of the following in a game, or it is not a scripting engine but rather just part of the main compilation target.
- Support hot code reloading, ex: from file system, user scripts, in game commands, etc.
- Compile/JIT reasonably fast if required
- Provide bi-directional calling semantics
- Easily integrate with the primary host language of the engine
- Decent experience for script authors
- Easy tool integrations, including with 3rd party middleware
- Minimize garbage if generating any, and make collection predictable if possible
- Not require the game code core to be recompiled in tandem (a primary motivator for using one from C/C++)
Clojure on the CLR certainly does some of these, but I'd argue not all of them well enough to be a huge benefit over C# in terms of productivity, performance, user experience, or user-facing scripting features. Of course there are some cool things due to Clojure's reader and macros, REPL, and so on, but the point is more the aggregate benefits. The reason people use Lua, Squirrel, AngelScript, and many others is not that they are the best or fastest languages, but that they are generally good on all those points with regard to the primary language of the engine they are attached to. Personally, I'd rather use other languages like Clojure, but there are tons of drawbacks that don't bite you until you are working on a real project, many of which I listed elsewhere in this thread.
As I noted (and you repeated), I assume the code is alpha and there are still items to address. Nonetheless, most of these things are obvious and easy up front, and many are well-known in the Unity community, so I'm not 100% buying that everything is a "later," just some of it. While in a normal game I wouldn't expect various optimizations or micro-optimizations, in an engine you definitely do. Even when working in C or C++, on all the engines I've worked on we used our own allocators, minimized general allocations, added indexing tricks, branch prediction and inlining tricks, array packing, alignment tricks, and so on from the start, because that's the job of the engine, and the end-user (game dev) cannot optimize away all of those things if the engine is the issue. Of course Unity doesn't do all these things well, and layers like Clojure are a lot further from the actual engine, but I'd argue that means everything needs to strive for equal parity or better performance when and if possible. Those are things you design around mostly from the start; the other optimizations I think you are referring to happen later. I'm simply contending I see a mix of both.
Finally, following up again on the use of Clojure, as I've said in this thread, I'm a big proponent of Clojure. That aside, the CLR vs JVM implementations do not have parity because of lack of resources more than anything else. This automatically puts a lot more work in the pipeline and ultimately on the game developer, at least until things mature more. This isn't about using something like Arcadia at all, but rather using it right now given the original constraints and resources the article author mentions or implies.
optionalparens | 9 years ago | on: Choosing Functional Programming for Our Game
I'll summarize instead with a simple list:
- It's better to map your input to some game actions than pass around the raw value
- You don't need to do anything creative like queuing function pointers themselves - just queue the raw values.
- You can have an input "queue" but that is typically different from a queue that is going to act on those inputs.
- You need to deal with multiple inputs potentially, like someone pounding a button. Hopefully your input lib deals with this some already, but always be aware of validating the input and deciding which one matters most. To that end, priority queues are good sometimes. You also sometimes want to cancel inputs. If you want to read more about it all, I'd read some about "intention systems" as related to input.
- Once your input is mapped, you can operate it in multiple pieces of codes, systems, etc. that might be downstream in your game loop
- You can transform an action into yet other new state as you go along
- Queues are indeed great for doing multi-threading, just be sure you select the appropriate kinds of queues. Mostly this ends up being thread-safe queues that are lockless. But sometimes you want to just use queues for FIFO, and these queues are generally faster and/or more flexible if they use a non-thread safe approach.
- You can have many queues in your code as mailboxes to distinct systems. Again, think something that is agnostic to the world around it and just receives some state it needs, and has its own "inbox" of things it might need to process this frame. It certainly could be that you can't process them all that frame.
- Keep in mind with queues generally once you pop something off, that's it. So you need other ways of hanging on to state and things that aren't ready. To that end, simple arrays are your friend, especially if you can pull things out quickly by index in relation to some id (ex: entity id) and keep them tightly packed. Dynamic vectors are garbage and mostly not used in serious game engines unless the developer had a wtf moment or the use case specifically calls for it. That's a longer topic though.
- Attaching functions directly to input again is a pretty bad approach. If you need more than one function, you'll have to implement something more complex. If order of function calls matter, you again need yet more. The order should be well-defined in the game loop itself.
- Regarding order, it also relates to communication as well. Note that you don't need to always process every new input or piece of state every tick. Sometimes you actually don't want to do this and defer it to a later time. There's no 100% rule, but an easy thing that helps is to think if the player or game sim will suffer because you didn't process that state during that tick. For some physics related things, that can be bad, but for other things, it's more of a "as long as this happens very soon, it's alright."
- There's no 100% rules for any of this.
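The first few points above can be sketched together (a hedged Python illustration; `Action`, `BINDINGS`, and the budget value are all made up for the example, not any particular engine's API):

```python
from collections import deque
from enum import Enum

class Action(Enum):
    JUMP = 1
    FIRE = 2
    MOVE_LEFT = 3

# Hypothetical key bindings: map raw device input to game actions up front,
# so downstream systems never see raw key codes.
BINDINGS = {"space": Action.JUMP, "ctrl": Action.FIRE, "a": Action.MOVE_LEFT}

action_queue = deque()

def on_raw_input(key):
    """Translate a raw key press into an action and enqueue it."""
    action = BINDINGS.get(key)
    if action is not None:  # unbound keys are simply dropped
        action_queue.append(action)

def drain_actions(budget=8):
    """Process at most `budget` actions this tick; the rest wait in the queue."""
    processed = []
    while action_queue and len(processed) < budget:
        processed.append(action_queue.popleft())
    return processed

# A player mashing jump: duplicate presses land in the queue; a real game
# might collapse, prioritize, or cancel them here (the "intention system" part).
for key in ["space", "space", "ctrl", "f1"]:  # "f1" is unbound
    on_raw_input(key)
handled = drain_actions()
```

Downstream systems then consume `Action` values in a well-defined order inside the game loop, rather than having functions attached directly to input.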
optionalparens | 9 years ago | on: Choosing Functional Programming for Our Game
I would recommend thinking in loops, so it sounds like you're already on the right track; you just need to expand that to everything else. You generally want to do the same "thing" to many "things." Since you are newer to programming engines, I won't get too technical; the gist is that the CPU likes this. The by-product is that if you take this approach, you'll tend to have better architecture as well as performance, and you'll solve some of the problems you listed. This is really the basis of an entity system approach, if you want an example architecture.
Whether using entity systems or something else, my recommendation would be to separate out functionality and make them rather agnostic to each other. Again, this fits in with the loops paradigm. You still often need things to communicate as your problem suggests, but the way to do that is to often take some state, update it with new state, and pass it down the chain to see if anyone cares. That chain itself can be a loop, i.e. a bunch of systems in your game loop calling a method like update(tick, state, ... ). That said, you also need to be careful what and how much state gets passed around for performance reasons (ex: avoid excessive copying) and to prevent weird things from happening if you start introducing concurrent and/or parallel programming. To that end, I tend to keep things minimal as I can, or at least have some sort of way to hand things just what they need, not everything.
Queues also relate to all of this and can be your friend. A common communication approach is to use mailboxes, for example, which tends to be more straightforward to debug and optimize than callbacks and event-based approaches (those tend to murder the cache and your stack traces). Queues are also nicer building blocks for concurrent programming.
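Putting the two paragraphs above together, here's a minimal sketch of systems with mailboxes called in a fixed order from the game loop (Python for illustration; the system names and messages are invented):

```python
from collections import deque

class System:
    """A system that only sees its own mailbox - agnostic of other systems."""
    def __init__(self, name):
        self.name = name
        self.inbox = deque()
        self.handled = []

    def update(self, tick):
        # Process whatever arrived; in a real game you might stop early and
        # leave the rest in the inbox for a later tick.
        while self.inbox:
            self.handled.append((tick, self.inbox.popleft()))

systems = {"physics": System("physics"), "audio": System("audio")}

def send(system_name, message):
    """Senders only know a mailbox name, not the receiver's internals."""
    systems[system_name].inbox.append(message)

def game_loop(ticks):
    for tick in range(ticks):
        if tick == 0:  # stand-in for input/gameplay producing messages
            send("physics", "apply_impulse")
            send("audio", "play_jump_sound")
        for system in systems.values():  # well-defined order, no callbacks
            system.update(tick)

game_loop(2)
```

Because each system drains its own queue at a known point in the loop, the call order is explicit in one place instead of scattered across event handlers.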
Also keep in mind you may need to flip your thinking. Sometimes you don't actually need an action that happens during a tick to be processed the same tick. You can often defer things to the next tick, or even several ticks after that. There's a Naughty Dog presentation on the Last of Us port to PS4 that relates to this and how they flipped even the way frames are processed in an existing codebase to get better performance. The point is not for you to do what they did, but rather to start thinking in terms of what the user will see as the end result. The user often doesn't care that you didn't finish processing some action last tick if it had no bearing on the gameplay.
optionalparens | 9 years ago | on: Choosing Functional Programming for Our Game
FWIW, scanning the Arcadia source, there's also some garbage being created that could easily be avoided. I didn't really trace what each thing was "doing" or whether it even runs during a game, but it was certainly eye-opening to see lots of temporary garbage created in something that is supposed to be lower-level. I am not sure at this point what would get optimized away anyway (ex: foreach loops and extra tolist/arrays used to create garbage in the past), so here's hoping the devs just didn't get around to optimizing that stuff yet. It seems to me you would want to minimize garbage from Arcadia itself given that Clojure is going to create yet more, not to mention cut stylistic corners for pure speed in this context.
optionalparens | 9 years ago | on: Choosing Functional Programming for Our Game
You can argue that lots of games can/do modify things while running. Entity systems allow components to be added at runtime. This not only allows for different code to be executed, but also I've seen code that reads components and hot loads additional code for systems supporting those components if necessary, which can be done a number of ways, and raw from the source even.
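A minimal sketch of that idea, attaching a component while the game runs (Python for illustration; component stores as plain dicts are a deliberate simplification of a real entity-component system):

```python
# Entities are just ids; components live in per-type stores. Attaching a
# component at runtime changes which entities a system processes.
positions = {}   # entity id -> (x, y)
velocities = {}  # entity id -> (dx, dy)

def add_component(store, entity, value):
    store[entity] = value

def movement_system(dt):
    # Only entities that have BOTH components are processed.
    for entity in positions.keys() & velocities.keys():
        x, y = positions[entity]
        dx, dy = velocities[entity]
        positions[entity] = (x + dx * dt, y + dy * dt)

add_component(positions, 1, (0.0, 0.0))
movement_system(1.0)                      # entity 1 has no velocity: untouched
add_component(velocities, 1, (2.0, 0.0))  # attached mid-game
movement_system(1.0)                      # now the movement system picks it up
```

The hot-loading variant mentioned above goes one step further: on seeing a new component type, the engine loads the system code that handles it.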
Smalltalk also supported this and isn't really functional. In addition to the Smalltalk environment itself pretty much being a live environment, I saw things like serializable continuations in Smalltalk. You could do fun things like a Gemstone Smalltalk app I saw that would take the image state and allow hot modifications via a bug tracker.
There's tons of other systems for doing this. And it's also a reason why many games write big chunks of the type code you would want to hot load in scripting languages like Lua.
There are tons more examples, and this has no relationship to functional programming, other than to say that it was a common thing in the Lisp world many moons before most of these other things existed.
optionalparens | 9 years ago | on: Choosing Functional Programming for Our Game
Unity itself has had to work around challenges too, and the abstractions and development comforts it provides are not free. Even experienced Unity developers who use C# still have to be mindful of how they write their C# (i.e. don't write the prettiest code even if you can because it will punish you). Of course this is true of any language and for other platforms like in mobile game dev. The point is that it's especially limiting to have additional layers of abstraction largely beyond your control. Eventually, it reaches a critical mass where you're writing this weird meta-language to do what you want because writing a game with the ecosystem around you forces that instead of letting you write idiomatic code in the language you picked.
The further you get away from the metal for a game, no matter how simple, the more problems you will face. It's nice to use languages like Clojure, Python, Ruby, JavaScript and so on for games, but for serious work they often get in your way. For instance, a common problem the average developer encounters is the game loop vs. frame rate - how do I get enough done during a tick to not grind the game to a halt? Garbage collection, de/allocations, and so much more become your enemy, and you start to feel like you're fighting some kind of magical force trying to slow down your game or make it less predictable, rather than being productive or even optimizing it in sane ways. And yes, predictability is vital to writing a good game, because the last thing a player wants is your game doing stupid things at inopportune moments like the middle of a jump, never mind other concerns like debugging, multi-player, or platform requirements.
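The tick-vs-frame-rate tension is commonly handled with a fixed-timestep loop: simulation advances in constant steps no matter how long rendering takes, which keeps gameplay predictable. A deterministic sketch (Python; integer milliseconds and manually fed frame times are simplifications so the example is exact):

```python
DT_MS = 16  # fixed simulation step, roughly 60 updates per second

def run(frame_times_ms):
    """frame_times_ms: wall-clock duration of each rendered frame.

    The accumulator banks elapsed time; the simulation consumes it in
    fixed DT_MS chunks, so a slow frame triggers catch-up steps instead
    of a variable (and unpredictable) dt.
    """
    accumulator = 0
    sim_steps = 0
    for frame in frame_times_ms:
        accumulator += frame
        while accumulator >= DT_MS:
            accumulator -= DT_MS
            sim_steps += 1  # update(DT_MS) would go here
    return sim_steps

# One normal frame, one render hiccup (48 ms), one normal frame:
steps = run([16, 48, 16])
```

The hiccup frame runs three simulation steps to catch up, so physics and jumps behave the same regardless of rendering load, which is precisely the predictability argument above.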
Of course there are workarounds for many problems you may face, but as game complexity grows, things tend to scale out of control for most people. Many of these problems cut so much in to the time or make you have other sacrifices that you start to feel like you're largely missing the benefits of working in these alternative abstractions. At some point you just end up breaking all the rules of your language/tools/libraries to get the game to the level you want. Worse, you're working on many problems that are quite far from actually finishing your game. Obviously for simple projects, much of what I've mentioned previously is not a problem, just to again make that clear.
Getting back to Clojure, I feel it really suffers from the aforementioned issues for non-toy games. This isn't an indictment of Clojure, just about picking the right tools. Immutability, atoms, refs, agents, CSP, sequences, transducers, recursion, and so much more seem like they would allow making a game quicker, easier, and with fewer headaches. What ends up happening to most people I've seen who try to use these kinds of tools, whether it is Clojure, Lisp, Haskell, Elixir, or anything else, is what I mentioned earlier: at a certain threshold of requirements, it all falls apart. At this point, you spend all your time removing all the goodness the language and tools provide. You start writing your own libraries, often down to numbers, matrices, etc., because you have no other choice if you want things to run in a sane, predictable way and to integrate with anything like OpenGL, input libraries, hardware, SDL, and so on. You throw out immutability in huge parts of your game, and you realize that refs, agents, channels, sequences, and more are just making life worse, not better. Pretty soon the entire language is stripped down into something almost unrecognizable, left with only a few core nice things. You then descend into the next layer of hell and start porting things into Java and calling them from there. Even in Java this can happen to a large degree. Add in more unpredictable stuff and abstractions like Unity, Unity plug-ins/add-ons, multi-platform requirements, talking to other libraries you want to use, and so on. The author of the article mentioned simplicity as a selling point, but for non-trivial contexts, you will almost certainly throw simplicity out the door. It starts small and snowballs as I described.
For someone building a text adventure or other simple game, you probably don't even need Unity anyway. If you're building a smaller indie game or want to get something done quickly, just use Unity and C#, and you'll get it done quicker and benefit from the ecosystem better. If you can't/won't learn C#, you shouldn't be programming or making games. I know that sounds cruel, but at some point we all need to acknowledge our skills. A game developer should be able to learn any language and be productive in it quickly. The average game dev may not touch the entire game, but more sophisticated games often use several languages, especially if you count things like shaders and scripting engines as being distinct.
If you're just learning/new to game dev and/or really want to learn Unity, just use it as intended, otherwise you're adding more layers of abstraction and complications that make it actually harder to learn anything, and worse to get things done. It may often seem like you figured something out and using your favorite tool will get things done quicker, but most of the time you'll hit the ugly thresholds I described when you try to combine it with something more sophisticated like Unity. Use things as they are intended. If you want to make a game in Clojure, great, just keep it simple, write your own minimal engine optimized for Clojure or hope someone makes one someday, or go the ClojureScript route to again make something simple.
In summary, Clojure is indeed an awesome language and you can write a game with it; I just wonder what there is to gain by using it with Unity. In general, I wouldn't recommend layering too many abstractions when building games. If you feel otherwise, I'll refer you to the graveyard of projects that have tried to take X and make it work with Y: it is a huge, sad place. That said, I'd love to write a non-trivial game in Clojure or another functional language one day, somehow.
optionalparens | 9 years ago | on: Choosing Functional Programming for Our Game
Game programming, like most programming, largely requires choosing the right tools for the job. While I agree it is important to factor in the skills your team has, the choice of Clojure + Unity + this write-up seems like retrofitting for the wrong reasons. Regarding Clojure, I can appreciate what Arcadia is trying to do, but it seems like an odd choice for an unskilled game dev for anything but messing around/intellectual fun at this point. This is both because the software is quite honest and labels itself alpha, and because Clojure plus Unity is not necessarily the best combination IMO, for many reasons. The primary one is that it is a huge pile of abstractions on abstractions without enough payoff. Even just browsing Arcadia's source, I can already see a few bits of code that aren't optimized as well as they could/should be for something like this. I can only hope the article's author is not serious about pursuing a commercial-quality game of significant complexity and power with this approach (i.e. not necessarily AAA level, but upper-tier indie).
As for functional programming in games, it is a big subject and an interesting one, not to mention a worthy pursuit. Indeed, many of the trends in game programming over at least the past two decades have been moving toward functional paradigms. Entity systems are a great example of this. One way to think about entity systems in relation to a functional language like Clojure is that they essentially let you treat your game state as a reduction of states (entities with attached components, and systems operating on them) over time. Furthermore, as multi-core programming becomes more important, some of the constructs that make concurrent or even parallel programming easier are commonly found in highly functional languages. Clojure is no exception and has many things that are amazing to this end, but not all of them are necessarily useful in the context of a non-trivial game. There are many more things I could touch on, but suffice to say that programmers are bending over backwards to make C and C++ in particular, followed by C#, Java, Swift, and other languages, behave like functional languages. Ironically, I think part of the problem with most functional languages is that people are pulled from the other end and forced to make these languages behave like C++ to get acceptable performance and other traits.
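The "game state as a reduction of states over time" idea can be sketched minimally like this (Python standing in for Clojure, all names hypothetical): the world is immutable data, each system is a pure function `(entities, dt) -> entities`, and one frame is just a fold of the world through the list of systems.

```python
from dataclasses import dataclass, replace
from functools import reduce

@dataclass(frozen=True)
class Entity:
    pos: tuple  # (x, y)
    vel: tuple  # (vx, vy)

def movement_system(entities, dt):
    # Pure function: returns new entities rather than mutating anything.
    return [replace(e, pos=(e.pos[0] + e.vel[0] * dt,
                            e.pos[1] + e.vel[1] * dt))
            for e in entities]

def friction_system(entities, dt):
    damp = 1.0 - 0.1 * dt
    return [replace(e, vel=(e.vel[0] * damp, e.vel[1] * damp))
            for e in entities]

SYSTEMS = [movement_system, friction_system]

def step(entities, dt):
    # One frame: reduce the world state through every system in order.
    return reduce(lambda es, system: system(es, dt), SYSTEMS, entities)

world = [Entity(pos=(0.0, 0.0), vel=(1.0, 2.0))]
world = step(world, dt=1.0)
print(world[0].pos)  # (1.0, 2.0)
```

In Clojure this shape falls out naturally from `reduce` over plain maps; the catch, per my earlier point, is that the per-frame allocation this style implies is exactly what gets thrown out first in a non-trivial game.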
Despite many OO/imperative languages used in game dev moving toward more functional trends, the promises, power, and tools that most of these languages provide are vital. There are too many reasons to list (some good, some bad) why C and C++ are still dominant in this world for serious game dev work, but a few include existing tooling, full control of memory, the ability to work around language-induced performance roadblocks, the possibility of portability, per-platform optimization if required, a bountiful workforce, and low-level integration points with hardware, SDKs, and tools. That's not to say other languages don't have these things, but rather that they have critical flaws in some of these areas that make life harder than it needs to be, at the cost of actually getting a game completed, which in the end is one of the most important things.
I know it's frowned upon, but I'm splitting this rather large comment into two parts.
optionalparens | 9 years ago | on: The mystique of Goldman Sachs
The AI comparison is a really good one, I like that. I'm in part a trained statistician, at least at the university level. I have gotten into machine learning more recently, given my stats background and previous game development experience. This was discussed by other people in other articles, but I sometimes shake my head when the word "intelligence" is used to describe what amounts to applying some basic (or even advanced) stats to an input to produce some hopefully decent output.
Having seen the number of ways that supposedly awesome AIs can easily be confused, I am far from convinced. I don't think we are in danger of being replaced by robots when we can't even secure a web server, email, or, now it seems, toasters. If the robots do take over, I know there will be so much sloppy code that we can either hack/backdoor them and shut them down after many long nights in a bunker typing away, or we can wait for them to just crash/segfault/whatever, because I trust their creators to write bug-free AI about as much as I trust a lion to go vegetarian. On the flip side, all of that should also make it easy for someone to trigger them to become self-aware or start the next great Cylon purge. I'm ready, and I don't care about my plans next week anyway.
optionalparens | 9 years ago | on: Tech workers pledge to never build a database of Muslims
Firstly, Judaism is not the Old Testament at all. Christian, Jewish, and for that matter Muslim philosophy are all very different. As such, there are different interpretations of the same input data, one could say. The problem is that not even the input data is the same. (BTW: the term "Judaeo-Christian," suggesting commonalities, is IMO wrong and might as well not exist, because it lumps together mostly different things.)
The Old Testament is a Christian-edited version of various selections of the Torah, namely 5 books, but even the divisions differ among Judaism, Catholicism, and Protestantism. Beyond potentially reorganizing and excluding content, another disconnect is that the Torah, according to most Jewish traditions, is both a written and an oral tradition. Most Jews who are knowledgeable about theology would agree that Judaism is not explained, summed up, or even represented by the Old Testament, let alone by the Torah itself. Further, oral traditions as well as cultural and religious duality are so important that you can't actually read things like the Torah without knowing them. Philosophically, there's a pretty large difference between Judaism and Christianity on many levels related to the spiritual significance and direction of the text.
Even inside Judaism there are huge divisions. Note that there are people who devote their lives to studying Torah, and while I have a few interesting things to say about that (depending on the context, not nice things), I acknowledge it as intellectually interesting in that you have many people with different interpretations, including some who see the text itself as liquid or even adaptable (ex: to modern times). This kind of studying, questioning, and analysis is encouraged in Judaism. While it happens in modern Christianity as well, suffice to say the views and treatment of the text there are very different, without going into great detail.
Another large issue is translation. Ancient Hebrew is just that, ancient, and not understood with 100% accuracy even among native modern Hebrew speakers and scholars. Even Aramaic and the other languages of related texts have changed. Pronoun translation alone from a gendered language like Hebrew to English is bad enough, for example, but factor in that the path was often as muddled as Ancient Hebrew -> Greek -> Latin -> <Your Language, ex: English>. There are of course more modern direct attempts, but quite often translations are not from the source, but rather translations of translations. Between translation, time, and oral traditions, there's a lot of signal decay.
As for other things about Judaism related to the Old Testament, the idea of the New Testament as a "2.0" breaks down here. Judaism has all kinds of commentary, supplementary texts, additional scrolls, and other books that are various degrees of "holy" or even "canon." Pretty much none of these exist in Christianity, and only a few had any role in or influence on the New Testament (aside: the New Testament also morphed over time, had translation issues, and so on). To make matters more interesting, some interpretations become part of oral law and even influence the readings themselves. Indeed, words are often changed or substituted by some people, while others would rather die than do anything but read things in what they perceive as their original form. Even if you purchase a book that contains the Torah and what the Old Testament is based on, each version will have wildly different commentary, and sometimes even omissions.
I could go on, but since it's HN, let me just summarize a few things about the Old Testament in computer terms to get back to the point:
- Christianity vs. Judaism is not like a Git branch, but rather like taking a few conceptual things and writing different implementations. There is little if any shared code, but maybe you could think of it as people who worked on an old project together, then split and built new things.
- The New Testament is not v2.0 but more of a re-implementation, with what one could argue is some code refactoring and a new interface slapped on (some things in common at a high level, but not at a lower level). Some (a lot of) features are missing in the re-implementation.
- The New Testament is heavy on marketing Jesus and his disciples, while ironically this would be an offense punishable by death under the Old Testament and the Torah (you are not allowed to call yourself or be called the Messiah; the conditions for the Messiah are specifically laid out and, according to these texts, have not happened). The New Testament would fail build tests and constraint checks.
- Christianity and Judaism have different licenses, plug-ins, benevolent overlords, and so on...
- The human compiler is pretty much broken, because people aren't following the source of either most of the time, probably thankfully. I know the goats, sheep, etc. are probably happy about that, let alone humans.
Well, I am sure someone is going to hate me now or kill me for sacrilege. For the record, the only thing I believe in is the electric guitar. I just grew up around religious people and spent some time studying religion to understand what it was I didn't like.
The average person isn't really going to be targeted for most things though. What is scary sometimes is the recklessness that people exhibit when they do have powerful information about other people. That enables some of the bad actors, whether private citizens, criminal organizations, fraudsters, or otherwise to do bad things with the information. Gathering information is a dangerous thing even if used for noble purposes because it's very hard to guarantee it is used for that.
As far as cooperation between intelligence agencies and the copyright lobby, again I can't say. My feeling and first-hand experience suggest it's an indirect relationship at best: lobbies pressure people, who in turn pressure intelligence. Combine that with negligence and sloppy information handling and operational security, and sometimes the wrong people are able to see information they shouldn't, information that was often gathered for other purposes.
I'll never understand what it is about corporations and the idea of nations that makes people act so irrationally and often evil. It's really not that hard in life: don't be an a-hole. If you have to ask yourself whether you are doing the right thing, most of the time you probably aren't. Money has just become such a huge part of civilization that it is a primary motivator of behavior for many people, and the results speak for themselves.
I could go on, but I'll leave it at money makes the world go round, and sometimes I feel we've just recreated things like serfdom, monarchy, divine right, and the whole family of awfulness like that in new forms.