moqizhengz's comments

moqizhengz | 2 days ago | on: How to run Qwen 3.5 locally

Running Qwen 3.5 9B on my ASUS 5070 Ti 16 GB with LM Studio gives a stable ~100 tok/s. This outperforms the majority of online LLM services, and the actual output quality matches the benchmarks. This model is really something: first time ever having a usable model on consumer-grade hardware.
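If you want to sanity-check the throughput number yourself, here is a minimal sketch that times one completion against LM Studio's local OpenAI-compatible server (it listens on http://localhost:1234/v1 by default). The model identifier and prompt below are placeholders, not anything from the comment:

```python
import json
import time
import urllib.request


def tokens_per_second(completion_tokens: int, elapsed_s: float) -> float:
    """Throughput: tokens generated per second of wall-clock time."""
    return completion_tokens / elapsed_s


def measure(model: str, prompt: str, base_url: str = "http://localhost:1234/v1") -> float:
    """Send one chat completion to a local OpenAI-compatible server and time it."""
    body = json.dumps({
        "model": model,  # placeholder id; use whatever LM Studio lists for your model
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    start = time.monotonic()
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    elapsed = time.monotonic() - start
    # OpenAI-style responses report generated-token counts under usage.completion_tokens
    return tokens_per_second(data["usage"]["completion_tokens"], elapsed)


if __name__ == "__main__":
    print(f"{measure('qwen-3.5-9b', 'Explain eBPF in one sentence.'):.1f} tok/s")
```

Note this measures end-to-end latency including prompt processing, so it will read slightly lower than the decode-only tok/s LM Studio shows in its UI.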

moqizhengz | 10 months ago | on: JSX over the Wire

BFF is in practice a pain in the ass. It is a compromise for large enterprises like Google, but many people are trying to follow what Google does without Google's problem scope and well-developed infra.

Dan's post somehow reinforces the opinion that SSR frameworks are not full-stack: they can at most do some BFF jobs, and you still need an actual backend.
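For context, the BFF job being referred to is essentially glue: an endpoint that fans out to real backend services and reshapes their responses for one specific frontend screen. A minimal sketch (the service fetchers here are hypothetical stand-ins, injected so the shape of the pattern is visible):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable


def profile_view(
    user_id: str,
    fetch_user: Callable[[str], dict],
    fetch_orders: Callable[[str], list],
) -> dict:
    """BFF-style aggregation: fan out to backend services concurrently,
    then reshape the results into exactly what one screen needs."""
    with ThreadPoolExecutor() as pool:
        user_future = pool.submit(fetch_user, user_id)
        orders_future = pool.submit(fetch_orders, user_id)
        user, orders = user_future.result(), orders_future.result()
    # The frontend gets a purpose-built view model, not the raw service payloads.
    return {
        "displayName": user["name"],
        "recentOrders": [order["id"] for order in orders[:3]],
    }
```

Everything hard about BFF lives outside this function: auth propagation, partial failures, caching, and keeping one such layer per frontend in sync with the real backend, which is exactly the infra burden the comment is pointing at.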

moqizhengz | 10 months ago | on: Stop Conflating Genius with Asshole

> Let's say someone made a critical error in their code. Now, it would be nicer and kinder to say "Perhaps you could have done that better, it might have harmful impact on users" and you can also tell the person "This is really bad, you messed up, this type of a mistake is unacceptable and horrific" which uses lots of sharp words and feels abusive, so which is better? It makes the person feel bad for sure with the second option, but isn't that the best way to communicate just how bad what they've done is?

You could have just said "This line here will have a harmful impact on users".

The point here is to direct the negative words at the 'OBJECT', which can be code or anyone's work, not at people. You do not need to make a statement about someone's intelligence to make them understand the severity of an issue.

moqizhengz | 11 months ago | on: How Netflix Accurately Attributes eBPF Flow Logs

From my experience in big tech, another reason is that ops guys just can't resist the concept of eBPF: they go all the way down trying to figure out what this beautiful technology can do and forget what they really wanted at the beginning.

moqizhengz | 11 months ago | on: Evaluating Agent-Based Program Repair at Google

In conclusion, Google selected 178 relatively easy issues out of their 80K bug database and found that Gemini 1.5 was kind of good at dealing with machine-detected bugs.

Maybe it's time to build a post-UT automated patch-generation CI pipeline?

And I think the other ongoing experiment mentioned in the paper is more interesting: "investigating the ability of an agent to generate bug-reproducing tests".

moqizhengz | 11 months ago | on: The role of developer skills in agentic coding

This is a very typical reply when we see someone pointing out the flaws of AI coding tools: "You are using it wrong; AI can do everything if you properly prompt it."

Yes, it can write everything if I provide enough context, but it isn't 'intelligence' if context ~= output.

The point here is that providing enough context is itself challenging and requires expertise, which makes AI IDEs unusable for many scenarios.
