biwills | 9 months ago | on: Figma files for proposed IPO
biwills | 1 year ago | on: Show HN: We are building the next DocuSign
biwills | 1 year ago | on: Why Every Programming Language Sucks at Error Handling
Error handling is hard because many languages can throw from anywhere, so it's difficult to trust that any given function is "safe". That's one of the reasons `enwrap` returns a generic error alongside any other result: to support incremental adoption.
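The errors-as-values idea above can be sketched roughly like this (hypothetical names and a hand-rolled result tuple, not `enwrap`'s actual API):

```typescript
// A function returns a [value, error] pair instead of throwing,
// so every caller is forced to acknowledge the error case.
type Result<T, E = Error> = [T, null] | [null, E];

function parsePort(raw: string): Result<number> {
  const port = Number(raw);
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    return [null, new Error(`invalid port: ${raw}`)];
  }
  return [port, null];
}

// Callers destructure the pair and handle the error locally.
// Wrapped functions never throw, so they can be adopted one at a time
// while the rest of the codebase keeps using try/catch.
const [port, err] = parsePort("8080");
if (err) {
  console.error(err.message);
} else {
  console.log(port); // 8080
}
```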
If you have a chance to check out `enwrap` and have feedback, email me! (link in bio)
biwills | 1 year ago | on: Why Every Programming Language Sucks at Error Handling
I've been using https://github.com/biw/enwrap (disclaimer: I wrote it) in TypeScript and have found that the overhead is well worth the safety it adds when handling and returning errors to users.
That said, I see parallels between the debate about typed vs. untyped errors and the debate between static and dynamic typing in programming languages.
biwills | 1 year ago | on: iTerm2 critical security release
I have heard a lot of great things about https://ghostty.org/ but haven't had a chance to check it out
edit: oops, I misread your question as "what alternatives are there"
biwills | 1 year ago | on: Text Editing Hates You Too (2019)
biwills | 1 year ago | on: NPM package is-even has over 140k weekly downloads
> I created this in 2014, the year I learned how to program. All of the downloads are from an old version of https://github.com/micromatch/micromatch.
biwills | 2 years ago | on: Faraday.dev – Connect your phone to LLMs running on your desktop
Excited to share Mobile Tethering, our latest feature release on Faraday.dev. It lets you run local LLMs on your Mac/Windows computer (Linux coming soon) and seamlessly use them to chat with AI on mobile. Since all the heavy workloads run directly on your computer (instead of on an expensive cloud server), it's 100% free to use, and your chat data is never stored or logged in the cloud.
I'm one of the founders of Faraday.dev, so would love to hear any ideas you have on what we should build next!
__
PS: For those who've never used Faraday – it's a zero-config desktop app for creating AI characters (custom chatbots) powered by locally running LLMs. Faraday can run on CPU with only 8GB of RAM via llama.cpp by @ggerganov, and the app will automatically use your GPU to speed things up. We also have a community-driven Character Hub, text-to-speech, lorebooks, and more.
biwills | 2 years ago | on: Many options for running Mistral models in your terminal using LLM
biwills | 2 years ago | on: Code Llama, a state-of-the-art large language model for coding
biwills | 2 years ago | on: LLM Constellation
There are already some interesting fine-tuned Llama 2 models:
- https://huggingface.co/NousResearch/Redmond-Puffin-13B
- https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored
Shopify, Cloudflare, Zoom, Spotify, Roblox, and Coinbase are all notable examples.