chrgy's comments
chrgy | 2 years ago | on: Ask HN: Do you know an online Co-working space for Engineers?
chrgy | 2 years ago | on: Gemini AI
chrgy | 2 years ago | on: Gemini AI
chrgy | 2 years ago | on: Harmonism Enhanced with e/acc Principles
chrgy | 2 years ago | on: A Scientific Inquiry into AI and Human Intelligence Comparison
chrgy | 2 years ago | on: TimeGPT-1
chrgy | 2 years ago | on: Recursively summarizing enables long-term dialogue memory in LLMs
chrgy | 2 years ago | on: Show HN: Shimmer – ADHD coaching for adults, now on web
For individuals with ADHD, leading a balanced life can be a challenge, but with the right strategies it can be achieved. Here are some essential tips for managing ADHD effectively:

1. Diet. Lower carbs and sugar: reducing carbohydrate and sugar intake can positively impact focus and energy levels.

2. Physical activity. Regular exercise: engage in at least one hour of exercise daily to boost cognitive function and reduce hyperactivity.

3. Mental wellbeing. Meditation: incorporate mindfulness and meditation practices to improve concentration. Pomodoro Technique: use the Pomodoro method to break tasks into manageable intervals, aiding focus.

4. Support systems. Coaching and support groups: joining such groups can provide guidance, understanding, and community.

5. Caffeine intake. Reduce caffeine: excessive caffeine can increase anxiety, especially in individuals with ADHD.

6. Planning and organization. Consistent planning: establish a routine and stick to it for better time management. Micro-tasking: break tasks down into smaller steps; this approach can make daunting tasks feel more achievable.

7. Medication (consult a professional). If other strategies aren't enough, talk to your doctor about the possibility of medication such as Adderall.

Remember, everyone's journey with ADHD is unique. It's essential to find the combination of strategies that works best for you and to seek professional advice when needed.
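The Pomodoro intervals mentioned above are easy to script. A minimal sketch in Python (the 25/5-minute lengths are the commonly cited defaults, not a prescription; the `pomodoro` helper name is mine):

```python
import time

def pomodoro(work_min=25, break_min=5, cycles=4):
    """Build and announce a schedule of work/break intervals."""
    schedule = []
    for i in range(1, cycles + 1):
        schedule.append((f"work {i}", work_min))
        schedule.append((f"break {i}", break_min))
    for label, minutes in schedule:
        print(f"{label}: {minutes} min")
        # time.sleep(minutes * 60)  # uncomment to actually run the timer
    return schedule

plan = pomodoro(cycles=2)  # two work/break rounds
```

The sleep is commented out so the sketch runs instantly; any real timer app adds notifications on top of exactly this loop.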
chrgy | 2 years ago | on: Magic Mushrooms. LSD. Ketamine. The Drugs That Power Silicon Valley
chrgy | 2 years ago | on: Magic Mushrooms. LSD. Ketamine. The Drugs That Power Silicon Valley
chrgy | 2 years ago | on: Textbooks are all you need
chrgy | 2 years ago | on: Apple should pull the plug on the iPhone (2007)
chrgy | 2 years ago | on: Apple announces multibillion deal with Broadcom to make components in the USA
chrgy | 2 years ago | on: RWKV: Reinventing RNNs for the Transformer Era
RWKV is a new language model architecture that is comparable to transformers in performance. RWKV is more efficient than transformers (it runs as an RNN, with linear time and constant memory at inference), which lowers the cost of training and serving larger models. The RWKV community is open source and welcomes contributions from anyone. There are plans to create larger versions of RWKV, but this will require more computational resources. Here are some additional details about the Chinchilla scaling law and the dataset problem:

The Chinchilla scaling law says that the compute-optimal amount of training data grows roughly in proportion to model size (about 20 tokens per parameter). This means it is very expensive to train large language models well, even with the latest hardware. The RWKV community is working on methods for training large language models more efficiently. Several datasets are available to the RWKV community, including:

The Pile: a massive dataset of text and code. RedPajama: an open dataset of text and code that is being used to train a 65B RWKV model. (Chinchilla, despite the name, is DeepMind's compute-optimal model, not a dataset.) These datasets are hosted in a variety of locations, including the RWKV GitHub repository and the datasets' own sites. The RWKV community is constantly updating the datasets and adding new ones. If you are interested in contributing, please visit the RWKV GitHub repository.
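The "~20 tokens per parameter" rule of thumb from the Chinchilla paper makes the dataset problem concrete; a quick back-of-the-envelope calculation (the 20x ratio is an approximation, and the helper name is mine):

```python
def chinchilla_optimal_tokens(n_params, tokens_per_param=20):
    """Approximate compute-optimal training tokens for a model with n_params parameters."""
    return n_params * tokens_per_param

# Linear, not exponential: a 65B model needs ~65x the data of a 1B model.
for size in (1e9, 7e9, 65e9):
    tokens = chinchilla_optimal_tokens(size)
    print(f"{size / 1e9:.0f}B params -> ~{tokens / 1e12:.2f}T tokens")
```

At 65B parameters that is roughly 1.3 trillion tokens, which is why open datasets on the scale of The Pile and RedPajama matter so much.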
chrgy | 2 years ago | on: ChatGPT vs. open source on harder tasks
chrgy | 2 years ago | on: ChatGPT vs. open source on harder tasks
chrgy | 2 years ago | on: Sam Altman goes before US Congress to propose licenses for building AI
chrgy | 2 years ago | on: Sam Altman goes before US Congress to propose licenses for building AI
chrgy | 2 years ago | on: Unlimiformer: Long-Range Transformers with Unlimited Length Input
The Unlimiformer paper describes a way to let transformer models, such as summarizers, process inputs far longer than their usual context limit. Normally a model can only attend to a fixed number of tokens at once; with Unlimiformer, it can handle input of effectively unbounded length.

Unlimiformer works by storing the encoded input in a k-nearest-neighbor index and, at each attention step, retrieving only the hidden states most relevant to the current query. This lets the model summarize even very long documents without discarding important information.

Overall, Unlimiformer is an exciting development in natural language processing that could make it easier for models to understand and summarize large amounts of text.
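The retrieval idea can be illustrated with a small NumPy sketch: index the encoder's hidden states, then attend only over the top-k nearest keys. This is a toy, brute-force illustration of kNN-restricted attention, not the paper's actual implementation (which uses an approximate-nearest-neighbor index):

```python
import numpy as np

def knn_attention(query, keys, values, k=4):
    """Softmax attention restricted to the k keys most similar to the query.

    query: (d,) decoder attention query
    keys, values: (n, d) encoded hidden states of an arbitrarily long input
    """
    # Retrieve the k keys with the highest dot-product similarity
    # (brute force here; a real system would use an ANN index).
    scores = keys @ query                      # (n,)
    top = np.argpartition(-scores, k - 1)[:k]  # indices of the top-k keys
    # Standard softmax attention, computed only over the retrieved subset.
    sub = scores[top]
    weights = np.exp(sub - sub.max())
    weights /= weights.sum()
    return weights @ values[top]               # (d,) context vector

rng = np.random.default_rng(0)
keys = rng.normal(size=(10_000, 64))    # stand-in for a very long input
values = rng.normal(size=(10_000, 64))
query = rng.normal(size=64)
context = knn_attention(query, keys, values, k=8)
print(context.shape)  # (64,)
```

The point is that the cost per attention step depends on k, not on the input length n, which is what makes unbounded input practical.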
chrgy | 2 years ago | on: OpenAI has applied for “GPT” trademark with USPTO