acossta
|
1 month ago
|
on: Spec-Driven Development with Claude Code
We dogfood our own product to build itself. The post walks through the actual workflow: a one-liner becomes a structured spec with acceptance criteria, the spec becomes implementation tasks scoped to specific files, and an AI agent executes them sequentially with validation after each step.
The interesting part is the review layer — AI traces each acceptance criterion to specific lines in the PR diff, catching semantic gaps that lint and type-check miss.
We also run browser tests against the live deployment with database-level verification. Happy to answer questions about the tooling or where it breaks down.
acossta
|
1 month ago
|
on: Vibe Coding Turns One
Crazy it's only been a year since Karpathy coined the term.
acossta
|
2 months ago
|
on: Brand as Code
Built this out of frustration. AI can write your code, but it doesn't know your brand.
We added /brand.json and /brand.txt to our website: structured files that define how we sound, which words we use and which to avoid, what colors to use, and where to get the logos. Now AI tools have context instead of guessing.
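For a rough idea of the shape, here's a sketch of what such a file might contain — the field names and values are illustrative, not our actual schema:

```python
import json

# Hypothetical shape for a /brand.json file; fields are illustrative,
# not the real BrainGrid schema.
brand = {
    "voice": {
        "tone": "confident, plainspoken",
        "avoid": ["synergy", "revolutionary"],
    },
    "terminology": {"preferred": {"ticket": "spec"}},
    "colors": {"primary": "#1A1A2E", "accent": "#00C2A8"},
    "assets": {"logo_svg": "/logo.svg", "logo_dark_svg": "/logo-dark.svg"},
}

print(json.dumps(brand, indent=2))
```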
Feels like this should be standard. Curious what others think.
acossta
|
3 months ago
|
on: Brand as Code via brand.json / brand.txt
I ran into a recurring problem when working with LLMs and coding agents: it is surprisingly hard to consistently communicate a product’s brand.
When we rebranded BrainGrid, I wanted a simple, repeatable way to tell any LLM or coding agent what the brand is, without re-explaining it in prompts every time.
I ended up creating two files:
https://www.braingrid.ai/brand.json
https://www.braingrid.ai/brand.txt
Together, they describe tone, voice, terminology, naming conventions, and visual guidelines in a way that is easy for both humans and LLMs to consume.
I tested this by having Claude Code update the branding across our docs site: https://docs.braingrid.ai/. The experience was smooth and required very little back and forth. The agent had the context it needed up front.
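"Up front" concretely means the brand fields get folded into the agent's instructions before it starts. A minimal sketch of that step, with an illustrative (not real) brand dict:

```python
# Sketch of feeding brand context to a coding agent up front, so it doesn't
# guess. The brand dict below is illustrative, not BrainGrid's real schema.
brand = {
    "tone": "direct and technical",
    "terminology": {"use": "spec", "not": "ticket"},
    "naming": "Product name is always 'BrainGrid', never 'Braingrid'",
}

def brand_preamble(b: dict) -> str:
    """Turn brand fields into a system-prompt preamble for a coding agent."""
    lines = ["Follow these brand rules:"]
    lines.append(f"- Tone: {b['tone']}")
    lines.append(f"- Say '{b['terminology']['use']}', not '{b['terminology']['not']}'")
    lines.append(f"- {b['naming']}")
    return "\n".join(lines)
```

The point of standardizing the file location is that any tool can fetch it and build a preamble like this without per-project prompt surgery.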
This made me wonder if we should treat brand context the same way we treat things like README files or API specs.
Would it make sense to standardize something like /brand.json or /brand.txt as a common convention for LLM-assisted development?
Curious if others have run into the same issue, or are solving brand consistency with AI in a different way.
acossta
|
3 months ago
|
on: How to Create a Design System Optimized for AI Coding
Author here. I grew increasingly frustrated by the mess coding agents made of the design system, so I took a crack at a tighter structure with AI agent instructions in the form of a CLAUDE.md and a Claude Skill to hopefully enforce it better.
Curious for any thoughts. What's working / not working for folks?
acossta
|
6 months ago
|
on: Gemini API Incorrectly Charging Developers Thousands of Dollars
We're getting hit with exactly the same issue at much greater scale: $260K in our case.
When you create a Gemini Flash cache with a TTL of 1 or 3 hours, it creates the cache and expires it correctly, but the billing system keeps charging the hourly rate for the cache, so the charges keep piling up long after the cache is gone.
We've seen charges accrue since 9/19 even though we turned off all services on that account.
Struggling to get the attention of anyone at Google (ticket, account manager, sales engineer: no one responds).
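To illustrate the gap with back-of-the-envelope math — rates and token counts below are made up, not Google's actual pricing:

```python
# Hypothetical numbers to show how a cache billed past its TTL balloons.
RATE_PER_MTOK_HOUR = 1.00   # $ per million cached tokens per hour (assumed)
cached_mtok = 50            # millions of tokens held in cache (assumed)
ttl_hours = 1               # cache configured to expire after 1 hour

# Correct bill: storage charged only for the TTL window.
expected = RATE_PER_MTOK_HOUR * cached_mtok * ttl_hours

# Runaway bill: hourly rate never stops after the cache expires.
billed_days = 30
runaway = RATE_PER_MTOK_HOUR * cached_mtok * 24 * billed_days

print(f"expected ${expected:,.2f}, runaway ${runaway:,.2f}")
```

Even at modest cache sizes, a one-hour charge turning into an open-ended hourly charge is a ~700x multiplier over a month, which is how bills reach six figures before anyone notices.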
acossta
|
7 months ago
|
on: Adobe's pipeline for high‑throughput data ingestion with Apache Iceberg
Interesting deep dive into how Adobe built a streaming ingestion layer on top of Apache Iceberg to handle massive volumes of Experience Platform data, addressing challenges like the small‑file problem and commit bottlenecks with asynchronous writes and compaction. All stuff I've had to deal with in the past.
Good nuggets on how they partition tables by time, stage writes in separate ingestion and reporting tables, and tune snapshot and metadata handling to deliver a lakehouse‑style pipeline that scales without melting the object store.
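The small-file fix boils down to bin-packing: group many small data files into compaction tasks near a target output size. A toy sketch of that planning step — real Iceberg compaction (e.g. via Spark procedures) handles partitions, deletes, and parallelism, and the target size here is just a common default, not Adobe's setting:

```python
TARGET_BYTES = 128 * 1024 * 1024  # 128 MB target output size (assumed)

def plan_compaction(file_sizes, target=TARGET_BYTES):
    """Greedily bin small files into groups of roughly `target` bytes each."""
    groups, current, current_size = [], [], 0
    for size in sorted(file_sizes):
        if current and current_size + size > target:
            groups.append(current)       # close this group; it is near target
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        groups.append(current)
    return groups
```

Running this asynchronously, as the post describes, keeps the hot ingestion path free of rewrite work while the reporting tables stay query-friendly.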
acossta
|
7 months ago
|
on: Theia AI framework puts you in charge of AI inside your IDE
The Eclipse Foundation just opened up its Theia AI platform and an alpha Theia IDE that let you bolt the LLM of your choice into your workflow and actually see what it’s doing. You get complete control over prompt engineering and agent behavior, can plug in a local model or a cloud model, and even wire up external tools via the Model Context Protocol. The AI‑powered Theia IDE bakes in coding agents, an AI terminal, and context‑sensitive assistants while giving you license‑compliance scanning via SCANOSS. Instead of being locked into a proprietary copilot, you can customize the entire AI stack to your needs and still keep your code private, which is the kind of hackable openness Hacker News loves.
acossta
|
7 months ago
|
on: AI2 releases Asta: open‑source ecosystem for trustworthy scientific AI agents
Asta isn’t just another chatbot; it’s a full stack for building and evaluating AI agents that can actually assist researchers. It ships with an open research assistant that reads papers, synthesizes evidence and even cites its sources. AstaBench’s 2,400‑problem benchmark suite gives us a reproducible way to compare agents on real multi‑step science tasks like literature review and code execution. The project also includes open‑source agents, APIs and language models tuned for research, plus access to a 200 M‑paper corpus.
In a world full of closed, untested agent tools, Asta is refreshing and gives developers all the components they need to build their own trustworthy science agents.
acossta
|
7 months ago
|
on: Open‑source InternVL3.5 crushes GPT‑4V on multimodal benchmarks
This isn’t another hype piece. InternVL3.5 is a coherent vision‑language model that actually understands pixels and text together. It comes in sizes from 1B up to a monster 241B parameters, and on benchmarks like MMMU and ChartQA it beats closed models like GPT‑4V, Claude, and Qwen. An open‑source model that competitive signals that we can build cutting‑edge multimodal apps without depending on a black‑box API, which is a big deal for devs who care about hackability and reproducibility.
acossta
|
7 months ago
|
on: Nvidia halts China-focused H20 AI chip after Beijing flags security risks
This doesn’t just feel like business as usual, it’s a flashpoint. Nvidia hit pause on its H20 chip, the one tailor-made for China, because Beijing raised red flags over backdoors and data risks. It’s classic tech meets geopolitics: export controls, supply chain pressure, and the chip fight getting sharper. If you’re into how hardware becomes policy, this is it.
acossta
|
7 months ago
|
on: Unmasking Phantom Deps W Bill-of-Materials as Ecosystem Neutral Metadata
This is a hidden gem.
This whitepaper digs into the sneaky dependencies you didn’t knowingly add (thanks, transitive bloat). It lays out how an SBOM can be a universal metadata layer across ecosystems—pip, npm, you name it—to let you trace every ghost package in your stack.
Feels like the Python dev community quietly dropped a supply-chain lifeline here.
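The core check the paper motivates is simple to state: anything present in the resolved environment (which is what an SBOM enumerates) but never directly declared came in transitively. A minimal sketch — package names below are illustrative:

```python
def phantom_deps(declared: set, sbom_components: set) -> set:
    """Packages listed in the SBOM that were never directly declared.

    `declared` comes from your manifest (requirements.txt, package.json, ...);
    `sbom_components` from an ecosystem-neutral SBOM of the built artifact.
    """
    return sbom_components - declared

# Illustrative example: flask and requests declared, the rest rode in.
declared = {"requests", "flask"}
sbom = {"requests", "flask", "urllib3", "certifi", "werkzeug"}
```

The paper's point is that because the SBOM side is ecosystem-neutral, the same diff works across pip, npm, and the rest without per-ecosystem tooling.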
acossta
|
7 months ago
|
on: Poland foiled cyberattack on big city's water supply, deputy PM says
Wow, someone nearly turned off a whole city’s water, and Poland’s cyber team caught it just in time. They’re basically saying “we stop 99% of attacks,” and they’re not messing around: they’ve spent something like $800 million on this. Wild that critical infrastructure still needs defending in 2025.
acossta
|
7 months ago
|
on: Zoom patches worrying security Windows flaw
Here’s a scary one. Zoom would happily load a random DLL from the system path if someone dropped it there, no password needed. The flaw could expose recordings and credentials, or even hand over system-level access. If your company uses Zoom on Windows, patch now so you don’t get hit.
acossta
|
7 months ago
|
on: Fortinet VPNs under attack from potential zero-day
This feels like watching a storm from far off but the sky’s starting to crack. There’s a spike in login attempts hitting FortiGate VPNs and FortiManager consoles, and researchers reckon it’s not routine testing. Historically, when this kind of probing shows up, a zero-day drops within six weeks. If you’re running Fortinet gear, it might be time to lock down access, ramp up logging, and spin up your incident playbook.
acossta
|
7 months ago
|
on: News of the Weird: Week of August 14 2025
A hungry osprey drops its dinner on a power line and fries the grid. A Danish zoo politely asks people to donate their pet bunnies to be euthanized and fed to lions. A ballplayer gets traded during the seventh‑inning stretch and has to switch dugouts mid‑game. Somewhere in France, farmers are fighting van‑squatters with tractors and tanks of slurry, while over in Germany a driver manages 199 mph on the Autobahn on a stretch with a speed limit of 79 mph.
Then there’s the bear in Wisconsin walking around with a jar stuck on her head, wolves being deterred by drones blasting AC/DC and Marriage Story audio, and a scholarship that requires climbing six mountains instead of writing an essay. A Catholic parish even made up a Corvette‑raffle winner. If there’s a pattern here, it’s that reality occasionally feels like a Monty Python sketch.
acossta
|
7 months ago
|
on: Injectable 'skin in a syringe' could heal burns without scars
Researchers at Sweden’s Linköping University just published a method where they literally inject a gel full of living cells that can be printed into skin. It’s basically a mix of fibroblasts on gelatin beads and hyaluronic acid that flows through a syringe then solidifies on a wound, letting the body build a real dermis instead of scar tissue. In mice it seems to integrate well and even grows its own blood vessels. The whole idea of 3D printing skin or squeezing it out of a tube like toothpaste is both weird and incredibly promising for burn victims.
acossta
|
7 months ago
|
on: Meta Horizon Creator Competition: Open-Source Champions
Interesting to think of VR worlds as open-source software. Is a Linux-like outcome possible, or even desired, for VR worlds? And since this is open source built on top of a proprietary platform, a Linux-type outcome seems unlikely; probably what Meta is aiming for is to be the ubiquitous platform.
They are kicking off a $1 million contest inviting devs to build “remixable” Worlds that others can clone and iterate on. The kicker is that remixable worlds go live when you publish. Besides the main prizes, they’re dangling an extra $500K for mini‑challenges like asset remixing and crafting great documentation.
Not sure about the incentives here.