tyfighter | 6 months ago

You're (they're?) not alone. This mirrors every experience I've had trying to give them a chance. I worry that I'm just speaking another language at this point.

EDIT: Just to add context seeing other comments, I almost exclusively work in C++ on GPU drivers.


almostgotcaught | 6 months ago

Same here - I work on a C++ GPU compiler. All the LLMs are worthless for it. Ironically, the compiler I work on is used heavily for LLM workloads.

thrown-0825 | 6 months ago

It really only works for problem domains saturated with Medium blogspam and YouTube tutorials.

notjoemama | 6 months ago

That's a bingo! Christoph Waltz is just a great actor.

I'm building an app in my stack with fairly common requirements. A few public code examples cover parts of those requirements, but none covers our specific scenario. After searching the web myself, I asked three different AI models. All they did was regurgitate the closest public GitHub example, which didn't address the use case I was after. Solving this problem requires understanding, at the level of abstraction, how the design needs to be altered.

These things can't actually think. And now they're allowed to be agentic.

In some ways they're just glorified search engines, but there's a geopolitical sprint to see who can get them to mimic "thinking" well enough to fool everybody.

Out of ego and greed, everything will be turned over to this machine, and that will be the end of humanity; not humans...humanity.

nxobject | 6 months ago

There's a market out there for a consultancy that will fine-tune an LLM for your unique platform, stack, and coding conventions of choice - especially for proprietary platforms. (IBM is probably doing it right now for its legacy mainframe systems.) No doubt Apple is trying to figure out how to get whatever frameworks it has cooking into OpenAI et al.'s models ASAP.

bobsmooth | 6 months ago

I can't imagine there's much GPU driver code in the training data.