soraki_soladead's comments
soraki_soladead | 3 years ago | on: Why Is No One Asking SBF What Happened to the $3.3B He Borrowed?
soraki_soladead | 3 years ago | on: A Sun-like star orbiting a black hole
Regardless, if it's on arXiv, why use Sci-Hub?
soraki_soladead | 3 years ago | on: Tesla devs at Twitter misunderstand Slack privacy; leak fire lists based on LOC
soraki_soladead | 3 years ago | on: An unwilling illustrator found herself turned into an AI model
It's heated and less philosophical because many artists are worried about their livelihood while a multi-billion-dollar company works toward making them obsolete, often using their own work.
I don't understand the confusion people have towards this issue.
soraki_soladead | 3 years ago | on: Building the Future of TensorFlow
At least with JAX, the core library isn't adopting any of the framework-level stuff, so those pieces can evolve independently.
soraki_soladead | 3 years ago | on: Building the Future of TensorFlow
soraki_soladead | 3 years ago | on: Transformers seem to mimic parts of the brain
Less common but not unheard of. Here's one example, primarily focused on vision: http://www.brain-score.org/
DeepMind has also published works comparing RL architectures like IQN to dopaminergic neurons.
The challenge is that it's very cross-disciplinary: most DL labs don't have a reason to explore the neuroscience side, while most neuro labs don't have the expertise in DL.
soraki_soladead | 3 years ago | on: I used DALL·E 2 to generate a logo
I've run several different types of businesses, and even those that required print work never required or even benefited from black and white (or monochrome, as another commenter mentioned). We _always_ had the means and the preference for full color: emails, brochures, documents, websites, t-shirts—it didn't matter. There was _never_ a time we needed to degrade the logo so significantly. From talking with others, that appears to be extremely common in modern businesses, especially software, since the majority of our presence and revenue stream is online, not glass silhouettes in our office.
As I said, outside of a fairly narrow range of real-world use cases, advice like "Ignoring this issue is the mark of an amateur" is outdated. If you have one of those rare use cases, check that box, but otherwise it shouldn't be the norm or a requirement.
soraki_soladead | 3 years ago | on: I used DALL·E 2 to generate a logo
That sounds like a concern that stopped being relevant for many software companies at least a decade ago.
These days app icons and hero images are more important than whether you can fax or print the logo.
soraki_soladead | 3 years ago | on: Translating a Visual Lego Manual to a Machine-Executable Plan
soraki_soladead | 3 years ago | on: Are language models deprived of electric sleep?
Retrieval models (again, lots of published examples: RETRO, etc.) that externalize their data will bring the sizes down by about that order as well.
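For anyone unfamiliar with the idea, here is a minimal toy sketch of retrieval: facts live in an external datastore and get looked up at inference time, so the model's parameters don't have to memorize them. The bag-of-words embedding and tiny datastore below are illustrative assumptions, not how RETRO actually works.

    # Toy sketch of retrieval: store facts externally and look them up per query.
    # Real systems (e.g., RETRO) use learned embeddings and approximate
    # nearest-neighbor search over billions of chunks; this is only illustrative.
    import math
    import re
    from collections import Counter

    datastore = [
        "The Eiffel Tower is located in Paris.",
        "GPT-3 has 175 billion parameters.",
        "Retrieval-augmented models look up facts instead of memorizing them.",
    ]

    def embed(text):
        # Crude bag-of-words "embedding"; stands in for a learned encoder.
        return Counter(re.findall(r"[a-z0-9-]+", text.lower()))

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query, k=1):
        q = embed(query)
        ranked = sorted(datastore, key=lambda chunk: cosine(q, embed(chunk)), reverse=True)
        return ranked[:k]

    # The generator would condition on the retrieved chunk plus the query, so its
    # parameters are spent on language skill rather than rote storage of facts.
    print(retrieve("How many parameters does GPT-3 have?"))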
soraki_soladead | 3 years ago | on: Are language models deprived of electric sleep?
This isn't accurate. GPT-3 has 175B parameters, and the human brain has ~175B cells (neurons, glia, etc.), but the analog to GPT-3's parameter count would be synapses, not neurons, and even conservative estimates put the brain several orders of magnitude higher there. It's also likely that >90% of those 175B parameters could be pruned with little change in performance, which skews the ratio further, since we know the brain is quite a bit sparser. In addition, GPT-3's training dataset is likely broader than what the majority of Internet users will ever read. Basically, it's not an apples-to-apples comparison.
That said, I agree that simply scaling model and data is the naive approach.
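As a rough back-of-envelope on those ratios (the brain figures here are commonly cited ballpark estimates of roughly 86B neurons and 100–1000 trillion synapses, not numbers from the article being discussed):

    # Back-of-envelope: GPT-3 parameter count vs. rough brain estimates.
    # Brain numbers are commonly cited ballpark figures, not precise measurements.
    gpt3_params = 175e9           # 175B parameters
    brain_neurons = 86e9          # ~86B neurons
    brain_synapses_low = 1e14     # conservative synapse estimate (~100 trillion)
    brain_synapses_high = 1e15    # higher-end synapse estimate

    # Parameters are loosely analogous to synapses (connection weights), not neurons.
    print(f"params per neuron-count:   {gpt3_params / brain_neurons:.1f}x")
    print(f"synapses per param (low):  {brain_synapses_low / gpt3_params:.0f}x")
    print(f"synapses per param (high): {brain_synapses_high / gpt3_params:.0f}x")

    # If ~90% of GPT-3's parameters could be pruned with little loss, the gap widens
    # by roughly another order of magnitude.
    pruned = 0.1 * gpt3_params
    print(f"synapses per pruned param (low): {brain_synapses_low / pruned:.0f}x")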
soraki_soladead | 3 years ago | on: Most Americans think NASA’s $10B space telescope is a good investment
soraki_soladead | 3 years ago | on: Tell HN: GitHub Copilot can now block suggestions matching public code
However, if the model is simply compressing code it sees into the model weights, just memorizing snippets and then outputting them, that's much more likely to violate licenses, much like when people copy and paste licensed code without attribution (even if, realistically, nothing is enforced most of the time).
The truth is likely somewhere in the middle. Let's say 20% of generated code is stored directly in the weights while the rest is synthesized; that's still a problem for the whole product.
We already know models do a little of both, depending on data coverage. Common structures like if/then, loops, etc. are probably “understood” because the model saw them in lots of contexts. However, specific functions, especially those seen only a few times in much the same contexts, are more likely to be copied. There's a spectrum here from shortcut learning to understanding.
The OSS community doesn't really have the access or resources to evaluate this, and GitHub isn't incentivized to share any analysis they've done.
What's interesting to me is that their solution is to push the issue onto their users/customers: by default, crawl everything public, ignorant of licenses, and if the customer has license concerns, it's on them to enable the setting that blocks suggestions matching public code.
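To make the memorization-vs-synthesis distinction concrete, here is a minimal sketch of the kind of naive check one could run: measure how much of a generated suggestion appears verbatim (as token n-grams) in a corpus of licensed code. This is purely illustrative and is not how Copilot's filter actually works; that implementation isn't public.

    # Naive memorization check: fraction of a suggestion's token n-grams that appear
    # verbatim in a reference corpus. High overlap suggests copying rather than synthesis.
    import re

    def ngrams(text, n=10):
        tokens = re.findall(r"\w+|\S", text)
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def overlap_ratio(suggestion, corpus_files, n=10):
        suggestion_grams = ngrams(suggestion, n)
        if not suggestion_grams:
            return 0.0
        corpus_grams = set()
        for text in corpus_files:
            corpus_grams |= ngrams(text, n)
        return len(suggestion_grams & corpus_grams) / len(suggestion_grams)

    licensed = "def gcd(a, b):\n    while b:\n        a, b = b, a % b\n    return a\n"
    verbatim = licensed
    print(overlap_ratio(verbatim, [licensed]))  # 1.0 for an exact copy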
soraki_soladead | 3 years ago | on: A Random Distribution of Wealth (2017)
You're making claims about how space exploration aligns with post-scarcity that don't match up with the accepted meaning of the term.
Even still, we _can_ send stuff into space, now cheaper than ever. What is the threshold for "send much to space"? Any person can order up a rocket payload on Amazon.space and have it placed in orbit? That doesn't sound good even if it were free. Especially if it were free...
> space exploration is such a good and exciting thing to do, you'd expect that it's one of the first things to get accomplished by a post-scarcity society
Why?
soraki_soladead | 3 years ago | on: A Random Distribution of Wealth (2017)
Separately, speaking as someone who spends a lot of time with optimizers, it seems silly not to have _some_ form of regularization. That doesn't mean zero inequality (that doesn't seem good either), but we should be wary of unchecked inequality.
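For anyone without the optimizer background, here is a toy sketch of what regularization does in that setting: an L2 penalty (weight decay) keeps a parameter from drifting off to extreme values without forcing it to zero. It's only meant to illustrate the analogy, not to model wealth dynamics.

    # Gradient descent on a loss whose gradient always pushes the parameter upward,
    # with and without an L2 penalty (weight decay). Illustrative toy only.
    def step(w, grad, lr=0.1, weight_decay=0.0):
        return w - lr * (grad(w) + weight_decay * w)

    grad = lambda w: -1.0  # constant pressure toward larger w

    w_free, w_reg = 0.0, 0.0
    for _ in range(10_000):
        w_free = step(w_free, grad)                   # no regularization
        w_reg = step(w_reg, grad, weight_decay=0.1)   # mild L2 penalty

    print(w_free)  # grows linearly without bound (~1000 here)
    print(w_reg)   # settles near 1 / weight_decay = 10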
soraki_soladead | 3 years ago | on: GitHub Copilot is generally available
soraki_soladead | 3 years ago | on: Amazon drones are coming to town. Some locals want to shoot them
soraki_soladead | 3 years ago | on: Experts once predicted that Americans would face excess leisure time (2015)
I’d love to see a proper analysis, but there's evidence pointing both ways. Some people with excess leisure would end up doing nothing. Some would work on hobbies. Some would start businesses or maybe participate in local government.
It might also be that, given enough time in a leisure society, people would adapt for better or worse.
Dropping “conspiracy” from “conspiracy theorists” is a nice touch, but it doesn't lend much support to such a bold claim without sources or evidence.