
maiybe|1 year ago

Looking for feedback on an AI engine that runs sophisticated game AI locally instead of in the cloud, cutting inference costs to effectively zero. Our tech demo (GARP) runs 20+ autonomous NPCs with memory, planning, and real-time interaction on a single RTX 3090 - something that previously cost $500/day with cloud APIs.

The engine is composable, modular, and integrates with major game engines. We're enabling developers to create deep, responsive game worlds without the burden of cloud computing costs or API rate limits.

Would love to hear the community's thoughts on local vs. cloud AI for gaming applications.


jsheard|1 year ago

How much GPU memory are you using? Demanding games already use most if not all available VRAM just for rendering so there isn't a great deal of room left for big AI models. Even if you target games with simple graphics, the size of the AI model would still dictate the min-spec for it to be playable.
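The contention concern above amounts to a min-spec check: after rendering claims its share, how much VRAM is left for an AI model? A minimal sketch of that check (tier names and sizes here are illustrative, not from GARP):

```python
# Model tiers an engine might ship; names and sizes are illustrative.
TIERS = [("large", 24.0), ("medium", 8.0), ("small", 1.0)]

def pick_model_tier(free_vram_gb, tiers=TIERS):
    """Return the largest tier that fits in the VRAM left after rendering."""
    for name, size_gb in tiers:  # tiers sorted largest first
        if size_gb <= free_vram_gb:
            return name
    return None  # nothing fits: fall back to CPU inference or disable AI NPCs

# Free VRAM could come from torch.cuda.mem_get_info() or NVML's
# nvmlDeviceGetMemoryInfo(); here it is just a number.
print(pick_model_tier(10.0))
```

With 10 GB free this picks the "medium" tier; a game with heavy rendering might only ever qualify for "small", which is exactly the min-spec pressure described above.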

maiybe|1 year ago

Under the hood, we support multiple selectable models, but we haven't yet optimized all the possible quantizations (the space is moving fast).

The range is 1-24 GB depending on model selection, though it would be great to push lower than that. 24 GB is the high end, since only the NVIDIA xx90-class cards have that much VRAM.
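For anyone estimating where their hardware falls in that 1-24 GB range: a rough back-of-envelope (my own rule of thumb, not the engine's actual footprint) is weights ≈ parameters × bits-per-weight / 8, plus some overhead for KV cache and activations:

```python
def model_vram_gb(params_billion, bits_per_weight, overhead_frac=0.2):
    """Rough VRAM estimate: quantized weights plus a fudge factor for
    KV cache and activations (overhead_frac is a guess; tune per model)."""
    weights_gb = params_billion * bits_per_weight / 8  # 1B params @ 8-bit ~ 1 GB
    return weights_gb * (1 + overhead_frac)

# A 7B-parameter model at 4-bit quantization: ~3.5 GB of weights,
# ~4.2 GB with overhead - fine on a 24 GB card, tight alongside rendering on 8 GB.
print(round(model_vram_gb(7, 4), 1))  # → 4.2
```

This also shows why quantization matters so much here: dropping from 16-bit to 4-bit weights cuts the footprint by roughly 4x, which is the difference between the top and bottom of the stated range for a given model.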

dataminded|1 year ago

Very interesting. How do we get our hands on it to try it out?

maiybe|1 year ago

We are in closed alpha, but eager to talk with folks about what they're working on. The easiest way is to reach out to hello@atelico.studio