chrgy's comments

chrgy | 2 years ago | on: Gemini AI

I hope the founders, Larry or Sergey, will come back to leadership positions and make the company as innovative as it was before.

chrgy | 2 years ago | on: Gemini AI

There are plenty of smart people I know personally at Google and DeepMind who will get this right. Google has 100x more data (data = food for neural networks) than OpenAI: it has YouTube, Google Photos, emails, and search histories. There is a lot more pressure on Google than on OpenAI to release safe models, which is why these models are getting delayed. In my opinion they should go ahead and release in phases to stop all this nonsense speculation. We all want competition, and I hope Google's model will be a good one, free, and able to lift society forward, making it more prosperous and productive for everyone.

chrgy | 2 years ago | on: Harmonism Enhanced with e/acc Principles

Harmonism, as a lifestyle philosophy, is enhanced with the principles of Effective Accelerationism (e/acc) to align with the rapid technological advancements of our time, focusing on ethical and sustainable living.

chrgy | 2 years ago | on: A Scientific Inquiry into AI and Human Intelligence Comparison

Exploring AI vs. Human Intelligence: This article delves into how AI models like GPT-4 compare with human intellect, discussing concepts like Moravec's Paradox without offering fixed solutions. It's an invitation for readers to join the conversation and ponder the complexities of intelligence in the digital age.

chrgy | 2 years ago | on: TimeGPT-1

The author could probably make more impact by open-sourcing their models; the way it is presented looks like the ClosedAI sort of pathway, meaning using papers as a way to advertise their model to developers.

chrgy | 2 years ago | on: Show HN: Shimmer – ADHD coaching for adults, now on web

ADHD Management Guide

For individuals with ADHD, leading a balanced life can be a challenge, but with the right strategies, it can be achieved. Here are some essential tips to help manage ADHD effectively:

1. Diet:

Lower Carbs & Sugar: Reducing carbohydrate and sugar intake can positively impact focus and energy levels.

2. Physical Activity:

Regular Exercise: Engage in at least 1 hour of exercise daily to boost cognitive function and reduce hyperactivity.

3. Mental Wellbeing:

Meditation: Incorporate mindfulness and meditation practices to improve concentration.
Pomodoro Technique: Use the Pomodoro method to break tasks into manageable intervals, aiding focus.

4. Support Systems:

Coaching & Support Groups: Joining such groups can provide guidance, understanding, and community.

5. Caffeine Intake:

Reduce Caffeine: Excessive caffeine can increase anxiety, especially in individuals with ADHD.

6. Planning & Organization:

Consistent Planning: Establish a routine and stick to it for better time management.
Micro-Tasking: Break down tasks into smaller steps. This approach can make daunting tasks feel more achievable.

7. Medication (Consult a Professional):

Consider Adderall: If other strategies aren't enough, consult your doctor about the possibility of medication like Adderall.

Remember, everyone's journey with ADHD is unique. It's essential to find the combination of strategies that works best for you and to seek professional advice when needed.
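The Pomodoro step above can be sketched as a tiny schedule builder (a minimal illustration only; the 25/5-minute interval lengths are the conventional defaults, an assumption, not something prescribed by this guide):

```python
def pomodoro_schedule(rounds=4, work_min=25, break_min=5):
    """Build a list of (label, minutes) intervals alternating work and break.

    Conventional Pomodoro defaults: four rounds of 25 minutes of focused
    work, each followed by a 5-minute break.
    """
    schedule = []
    for i in range(1, rounds + 1):
        schedule.append((f"work {i}", work_min))
        schedule.append((f"break {i}", break_min))
    return schedule

# Four 25/5 rounds: 8 intervals, 120 minutes total.
print(pomodoro_schedule())
```

A real timer would pair this with notifications, but the point of the technique is just the fixed alternation of short, bounded intervals.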

chrgy | 2 years ago | on: Magic Mushrooms. LSD. Ketamine. The Drugs That Power Silicon Valley

A major theme appears to be the transformative effect these substances can have on our perspectives and patterns of thought. However, it's crucial to remember that these are not magical cure-alls and their effects can vary greatly. Also, an ego-centric belief that one has discovered a profound 'truth' while high can lead to arrogance. Moreover, the risks associated with these substances shouldn't be overlooked, and the potential for misuse is real. Lastly, the need for rigorous scientific studies on the effects and efficacy of microdosing is apparent. It's fascinating, yet we must tread with caution.

chrgy | 2 years ago | on: Textbooks are all you need

Does using synthetic data generated by GPT have the same effect as being RLHF-aligned by GPT-3.5, kind of like aligning the NNs to get performance similar to GPT's?

chrgy | 2 years ago | on: Apple should pull the plug on the iPhone (2007)

The people who are crazy enough to think they can change the world are the only ones who do. Steve Jobs had to deal with nonsense like this, so he never said what he was working on; he just focused on shipping great products: "There is no likelihood that Apple can be successful in a business this competitive. Even in the business where it is a clear pioneer, the personal computer, it had to compete with Microsoft and can only sustain a 5% market share."

chrgy | 2 years ago | on: Apple announces multibillion deal with Broadcom to make components in the USA

Lots of companies are cutting ties with China, and Apple is obligated to do so. But the more companies cut ties with China, the easier it becomes for China and the US to end up in a military conflict over Taiwan. Intel is working on replicating TSMC, but that will take years; it would have been nice if Apple, AMD, and Nvidia had started a fab together in the US as part of the CHIPS Act.

chrgy | 2 years ago | on: RWKV: Reinventing RNNs for the Transformer Era

Here is a summary of all the comments, generated by a transformer; I wonder how an RNN would do:

RWKV is a new language-model architecture that is comparable to transformers in terms of performance. RWKV is more efficient than transformers, which makes it possible to train larger models on smaller datasets. The RWKV community is open source and welcomes contributions from anyone. There are plans to create larger versions of RWKV, but this will require more computational resources.

Here are some additional details about the Chinchilla law and the dataset problem: the Chinchilla law states that the amount of data required to train a compute-optimal language model grows roughly in proportion to the model size. This means that it is very expensive to train large language models, even with the latest hardware. The RWKV community is working on developing new methods for training large language models more efficiently.

There are a number of datasets available to the RWKV community, including:

- The Pile: a massive dataset of text and code.
- Chinchilla: a smaller dataset of text and code that is designed for training RWKV models.
- RedPajama: a dataset of text and code that is being used to train a 65B RWKV model.

These datasets are stored in a variety of locations, including the RWKV GitHub repository, the Chinchilla website, and the RedPajama website. The RWKV community is constantly updating the datasets and adding new ones. If you are interested in contributing, please visit the RWKV GitHub repository.
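The proportional-scaling rule can be made concrete with a back-of-the-envelope calculation (the roughly-20-tokens-per-parameter figure is the commonly cited Chinchilla heuristic; it is my assumption here, not something stated in the summary above):

```python
def chinchilla_optimal_tokens(params, tokens_per_param=20):
    """Approximate compute-optimal training-token count under the
    Chinchilla heuristic: data scales linearly with parameter count,
    at roughly 20 tokens per parameter."""
    return params * tokens_per_param

# A 7B-parameter model would want on the order of 140B training tokens.
print(chinchilla_optimal_tokens(7_000_000_000))  # 140000000000
```

Linear (not exponential) scaling is still expensive in practice: ten times the parameters means ten times the data and far more compute per token.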

chrgy | 2 years ago | on: ChatGPT vs. open source on harder tasks

I think their strategy is to weaken their opponent by building strong open-source models; they know they have lost the race to Microsoft and are trying to go open source so people won't use other models.

chrgy | 2 years ago | on: ChatGPT vs. open source on harder tasks

Open source will always be behind, since it is harder to organize a large group of people, and once something is published it gets copied by the big players. However, if an organization like the original OpenAI gets formed, one that is truly not-for-profit and supports the community, I believe it can beat the big players. I think Meta is the only company that could support such an organization at the moment.

chrgy | 2 years ago | on: Unlimiformer: Long-Range Transformers with Unlimited Length Input

In the age of transformers, let's ask a transformer to summarize this paper:

The Unlimiformer paper is about a new way to make computer programs that can summarize really long pieces of text. Normally, when you ask a computer program to summarize something, it can only handle a certain amount of text at once. But with Unlimiformer, the program can handle as much text as you want!

The way Unlimiformer works is by using a special technique called a "k-nearest-neighbor index" to help the program pay attention to the most important parts of the text. This makes it possible for the program to summarize even really long documents without losing important information.

Overall, Unlimiformer is an exciting new development in natural language processing that could make it easier for computers to understand and summarize large amounts of text.
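A minimal sketch of the retrieval idea described above (an illustration only, not the paper's actual implementation): keep the long input's encoded keys in an index, and at each step attend over just the k keys most similar to the query, so attention cost depends on k rather than on the full input length. A real system would use a proper nearest-neighbor index such as FAISS.

```python
import math

def knn_attention(query, keys, values, k=2):
    """One k-nearest-neighbor attention step over plain Python vectors.

    Scores every key by dot product with the query, keeps only the top-k,
    softmaxes those k scores, and returns the weighted sum of their values.
    """
    scores = [sum(q * x for q, x in zip(query, key)) for key in keys]
    top = sorted(range(len(keys)), key=lambda i: scores[i])[-k:]
    m = max(scores[i] for i in top)                  # for numerical stability
    w = [math.exp(scores[i] - m) for i in top]
    z = sum(w)
    dim = len(values[0])
    out = [0.0] * dim
    for weight, i in zip(w, top):
        for d in range(dim):
            out[d] += (weight / z) * values[i][d]
    return out

# Toy example: the query is closest to keys 0 and 2, so the output blends
# values[0] and values[2] and ignores the rest of the (arbitrarily long) input.
keys = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]]
values = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]]
query = [1.0, 0.0]
out = knn_attention(query, keys, values, k=2)
print(len(out))  # 2
```

The brute-force scoring loop here is O(n) per query; the point of using an approximate nearest-neighbor index in the real system is to make that lookup sublinear for very long inputs.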
