soraki_soladead's comments

soraki_soladead | 3 years ago | on: An unwilling illustrator found herself turned into an AI model

> Anyway, just my personal opinion. It's become a very 'us vs them', lines-in-the-sand argument these days, and it'd be great if the conversation could be less heated and more philosophical.

It's heated and less philosophical because many artists are worried about their livelihoods while multi-billion-dollar companies work toward making them obsolete, often using the artists' own work.

I don't understand the confusion people have towards this issue.

soraki_soladead | 3 years ago | on: Building the Future of TensorFlow

I agree. I truly miss the graph-mode API, especially coming from Theano before TF, but it wasn't as beginner-friendly, and Google wanted to capture market share for its cloud.

At least with JAX the core library isn't adopting any of the framework-level stuff, so those pieces can evolve independently.

soraki_soladead | 3 years ago | on: Transformers seem to mimic parts of the brain

> I think it's relatively uncommon to go back and compare specifically what's happening in the brain with a given ML architecture.

Less common, but not unheard of. Here's one example, focused primarily on vision: http://www.brain-score.org/

DeepMind has also published works comparing RL architectures like IQN to dopaminergic neurons.

The challenge is that it's very cross-disciplinary: most DL labs don't have a reason to explore the neuroscience side, while most neuro labs don't have the expertise in DL.

soraki_soladead | 3 years ago | on: I used DALL·E 2 to generate a logo

You don't refute my point. In fact, you strengthen it by providing no evidence for why this should be a requirement of modern logos for software companies. You list a bunch of things a logo should be usable for in your mind, or else it's not a professional logo. However, you don't explain why it must "degrade to 1-bit" for those random things, nor why logos should support things like "silhouettes on glass". I can think of a handful of use cases, but hardly a minimum requirement for a good logo for the majority of software companies.

I've run several different types of businesses, and even those that required print work never required or even benefited from black and white, or even monochrome as another commenter mentioned. We _always_ had the means and preference for full color: emails, brochures, documents, websites, t-shirts; it didn't matter. There was _never_ a time we needed to degrade the logo so significantly. From talking with others, that appears to be extremely common in modern businesses, especially software, since the majority of our presence and revenue stream is online, not glass silhouettes in our office.

As I said, outside of a fairly narrow range of real-world use cases, this comment is outdated: "Ignoring this issue is the mark of an amateur." If you have one of those rare use cases, check that box, but otherwise it shouldn't be the norm or a requirement.

soraki_soladead | 3 years ago | on: I used DALL·E 2 to generate a logo

> it won't look good in black & white

That sounds like a concern that stopped being relevant for many software companies a decade ago at least.

These days app icons and hero images are more important than whether you can fax or print the logo.

soraki_soladead | 3 years ago | on: Are language models deprived of electric sleep?

I'm referring to post-training pruning, not smaller models. This is already well studied, but it's not as useful as it could be on current hardware. (Deep learning currently works better with the extra parameters at training time.)
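As a minimal sketch of what post-training magnitude pruning looks like (illustrative code only; the `magnitude_prune` helper and the 90% sparsity figure are hypothetical, not from any particular library or paper):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries of a trained weight matrix.

    A crude form of post-training pruning: keep only the largest
    (1 - sparsity) fraction of weights by absolute value.
    """
    w = weights.copy()
    threshold = np.quantile(np.abs(w), sparsity)
    w[np.abs(w) < threshold] = 0.0
    return w

# Example on a random "trained" weight matrix
rng = np.random.default_rng(0)
w = rng.normal(size=(512, 512))
pruned = magnitude_prune(w, sparsity=0.9)
print(np.mean(pruned == 0))  # roughly 0.9 of the entries are now zero
```

In practice the interesting part is that accuracy often barely drops at high sparsity, but dense hardware gets no speedup from the zeros, which is the mismatch with current hardware I mean.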

Retrieval models (again, lots of published examples: RETRO, etc.) that externalize their data will bring the sizes down by about that order as well.

soraki_soladead | 3 years ago | on: Are language models deprived of electric sleep?

> GPT-3 already uses about twice as many parameters

This isn't accurate. GPT-3 has 175B parameters, while the human brain has ~175B cells (neurons, glia, etc.). The analog to GPT-3's parameter count would be synapses, not neurons, and even conservative estimates put the brain's synapse count several orders of magnitude higher. It's also likely that >90% of the 175B parameters could be pruned with little change in performance, which changes the synapse ratios further, since we know the brain is quite a bit sparser. In addition, the training dataset is likely broader than what the majority of Internet users ever see. Basically, it's not an apples-to-apples comparison.
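A back-of-envelope version of that comparison (the synapse count here is a commonly cited but very rough estimate; figures from 10^14 to 10^15 appear in the literature):

```python
gpt3_params = 175e9      # GPT-3 parameter count
brain_synapses = 1e14    # low-end estimate of human synapse count

ratio = brain_synapses / gpt3_params
print(f"~{ratio:.0f}x")  # ~571x at the low end; thousands at the high end
```

Even at the low end the brain has hundreds of times more synapses than GPT-3 has parameters, and the gap widens further once you account for GPT-3's prunable parameters.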

That said, I agree that simply scaling model and data is the naive approach.

soraki_soladead | 3 years ago | on: Tell HN: GitHub Copilot can now block suggestions matching public code

If the model is learning the nature of code and synthesizing new code, then it may not violate some licenses. When people do this, it's usually fine.

However, if the model is simply compressing the code it sees into its weights, just memorizing snippets and then outputting them, that's much more likely to violate licenses. It's like when people copy and paste from licensed code without attribution (even if, realistically, nothing is enforced most of the time).

The truth is likely somewhere in the middle. Let's say 20% of the code is stored directly in the weights while the rest is synthesized. That's still a problem for the whole product.

We already know models do a little of both depending on data coverage. Common structures like if/then, loops, etc. are probably "understood" because the model saw them in lots of contexts. However, specific functions, especially those seen only a few times and in much the same contexts, are more likely to be copied. There's a spectrum here from shortcut learning to understanding.

OSS doesn't really have the access or resources to evaluate this, and GitHub isn't really incentivized to share any analysis it has done.

What's interesting to me is that their solution to this problem is to put the issue on their users/customers: by default, crawl everything public, ignorant of licenses, and if the customer has license concerns, it's on them to disable public-code matching.

soraki_soladead | 3 years ago | on: A Random Distribution of Wealth (2017)

That isn't what post-scarcity means. It doesn't mean there is zero scarcity. It means basic needs are easily met: https://en.wikipedia.org/wiki/Post-scarcity_economy

You're making claims about how space-related things align with post-scarcity that don't match up with the accepted meaning.

Even still, we _can_ send stuff into space, now cheaper than ever. What's the threshold for "send much to space"? That any person can order up a rocket payload on Amazon.space and have it placed in orbit? That doesn't sound good even if it were free. Especially if it were free...

> space exploration is such a good and exciting thing to do, you'd expect that it's one of the first things to get accomplished by a post-scarcity society

Why?

soraki_soladead | 3 years ago | on: A Random Distribution of Wealth (2017)

One concrete disadvantage of _extreme_ inequality, specifically in a democracy like the US, is the erosion of your rights when they are inconvenient for those with more wealth or soft power than you. When wealth inequality is significant, governments decide it's better to address the needs of a much smaller set of constituents, and your rights become curtailed, implicitly or not, as a result. Unless you, personally, are an oligarch, you will be negatively impacted, even if that manifests as nothing more than living in a society worse than it could be.

Separately, speaking as someone who spends a lot of time with optimizers, it seems silly not to have _some_ form of regularization. That doesn't mean zero inequality (that doesn't seem good either), but we should be wary of unchecked inequality.

soraki_soladead | 3 years ago | on: Experts once predicted that Americans would face excess leisure time (2015)

That assumes that people, left to their own free will and time, would end up in a dismal state by default. I'm not sure.

I’d love to see a proper analysis but there’s lots of evidence both ways. Some people with excess leisure would end up doing nothing. Some would work on hobbies. Some would start businesses or maybe participate in local government or something.

It might also be that, given enough time in a leisure society, people would adapt for better or worse.
