taw1285's comments
taw1285 | 3 months ago | on: $50 PlanetScale Metal Is GA for Postgres
taw1285 | 3 months ago | on: OpenAI are quietly adopting skills, now available in ChatGPT and Codex CLI
Say I have a CMS (built on a thin layer of the Vercel AI SDK) and I want to let users interact with it via chat: tag a blog post, add an entry, and so on. Should those actions be organized into discrete skill units like that? And how would we go about adding progressive discovery?
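One way to read "progressive discovery" is in two stages: the model first sees only skill names and one-line summaries, and full instructions are loaded only when a skill is selected. A minimal sketch, assuming a generic tool-calling loop; the skill names (`tag_blog_post`, `add_entry`) and their text are illustrative, not part of any SDK:

```python
# Hypothetical CMS skill registry. In stage 1 only the short summaries
# reach the model's context; in stage 2 the full instructions for one
# skill are loaded on demand, keeping the prompt small.
SKILLS = {
    "tag_blog_post": {
        "summary": "Add or remove tags on a blog post.",
        "instructions": "Ask for the post slug and the tags to apply, "
                        "then call the CMS tagging endpoint.",
    },
    "add_entry": {
        "summary": "Create a new CMS entry.",
        "instructions": "Collect title, body, and publish date, "
                        "then call the CMS create endpoint.",
    },
}

def skill_index() -> str:
    # Stage 1: a compact index of available skills for the system prompt.
    return "\n".join(f"- {name}: {s['summary']}" for name, s in SKILLS.items())

def load_skill(name: str) -> str:
    # Stage 2: full instructions, fetched only when the model picks a skill.
    return SKILLS[name]["instructions"]
```

The actual tool-call plumbing (exposing `load_skill` as a tool the model can invoke) would sit in the chat loop around this.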
taw1285 | 3 months ago | on: Cognitive and mental health correlates of short-form video use
I have a daily 30-minute one-way commute. I usually put on a YouTube video about startups or a tech talk, but I find I've forgotten it all by the next day. I'm curious how you go about remembering the content when you can't take notes while driving.
taw1285 | 6 months ago | on: Notes on Managing ADHD
1. My brain drifts away very easily. Even in an important work conversation, it starts thinking about a completely different project or an upcoming meeting.
2. I have a hard time remembering things and events that my spouse and others can easily recall (e.g., which restaurants we have been to).
3. I can't seem to form an opinion on very basic things: do I like restaurant A or restaurant B better? Option A or option B? I can't decide or come up with any heuristics.
At first I chalked it up to being too critical of myself and assumed others have the same issues, but that doesn't seem to be the case. Can these all be rolled up into the same conversation with my doctor?
taw1285 | 6 months ago | on: We put a coding agent in a while loop
taw1285 | 7 months ago | on: Gemini Embedding: Powering RAG and context engineering
1) At the end of the day, we are still sending raw text to the LLM as input and getting text back as the response.
2) RAG/embeddings are just a way to identify the "certain chunks" to include in the LLM input so you don't have to dump the entire ground-truth document into the prompt. Take Everlaw, for example: all of their legal documents are stored as embeddings, and RAG/tool calls retrieve the relevant documents to feed into the LLM input.
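The point in (2) can be made concrete with a toy retrieval loop. This is a sketch, not Everlaw's actual pipeline: the bag-of-words `embed` stands in for a real embedding model (e.g. Gemini Embedding), and the chunks are made up.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words counts. A real system would call a
    # learned embedding model; this stand-in only makes the mechanics visible.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    # Rank pre-chunked documents by similarity to the query; only the
    # top-k chunks (not the whole corpus) get pasted into the prompt.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical corpus standing in for a legal document set.
chunks = [
    "Deposition transcript of witness A regarding the office lease.",
    "Invoice records for office supplies, Q3.",
    "Email thread discussing the merger negotiation terms.",
]
top = retrieve("what were the merger terms", chunks, k=1)
prompt = f"Context:\n{top[0]}\n\nQuestion: what were the merger terms?"
```

Nothing here touches model weights: retrieval only decides which text goes into the input, which is exactly the distinction the question below is asking about.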
So in that sense, what do these non-foundation-model startups mean when they say they are training or fine-tuning models? Where is the line between feeding context into the LLM as input vs. having it baked into the model weights?
taw1285 | 10 months ago | on: Databricks in talks to acquire startup Neon for about $1B
Say right now I have an e-commerce site with 20K MAU. All metrics go to Amplitude, and we can use that to see DAU, retention, and purchase volume. At what point in a startup's lifecycle would we need to enlist services like these?
taw1285 | 11 months ago | on: Ask HN: Has anyone quit their startup (VC-backed) over cofounder disagreements?
On one hand, if the leaving co-founder retains all their equity, it creates a sandbagging situation: equity sitting on the cap table that's no longer useful to the business. On the other hand, it feels right for the leaving co-founder to enjoy some upside for the years they put in.
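The usual compromise between those two poles is vesting: the departing co-founder keeps only what has vested. A sketch of the arithmetic, assuming common market terms (4-year monthly vesting, 1-year cliff) rather than anything stated in the thread:

```python
def vested_fraction(months_served, total_months=48, cliff_months=12):
    # Standard 4-year vest with a 1-year cliff: nothing before the cliff,
    # then a linear monthly accrual up to 100%.
    if months_served < cliff_months:
        return 0.0
    return min(months_served, total_months) / total_months

# A co-founder leaving after 2 years keeps half their grant, not all of it.
vested_fraction(24)  # -> 0.5
```

Under terms like these, the leaver gets upside proportional to time served while the unvested remainder returns to the company, which softens the cap-table problem.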
taw1285 | 1 year ago | on: Write to Escape Your Default Setting
taw1285 | 1 year ago | on: Show HN: PurePlates – A Recipe Scraping iOS App
taw1285 | 1 year ago | on: Does your startup need complex cloud infrastructure?
My team of 6 engineers runs a social app with around 1,000 DAU. The previous stack had several machines serving APIs and several machines handling different background tasks. Our tech lead is forcing everyone to move to separate Lambdas, using CDK, to handle each of these tasks. The debugging, deployment, and architecting of shared stacks for the Lambdas is taking a toll on me -- all in the name of separation of concerns. Should I push back on this, and if so, how?
taw1285 | 1 year ago | on: Speeding Up Your Website Using Cloudflare Cache
taw1285 | 1 year ago | on: Show HN: InstantDB – A Modern Firebase