top | item 43754523


sumitkumar | 10 months ago

Hi, thanks for sharing.

I tried this prompt.

```
create a Rubik's cube app with all available moves and show the cube and the animations. add a scrambler and a solver. Also add timer to time the moves.
```

I got this.

https://www.magicpatterns.com/c/psesccrmk41jibfhwp7wh1

Which looks like a good starting point but doesn't work at all. After that, it's daunting to look at the code, and I still have to figure out how to tell the chatbot to fix it.

Gemini 2.5 Pro did much better in one shot (the prompt was different and didn't include the scrambler/solver/timer).

https://sumitkumar.github.io/llmgenerated-static/


alexdanilowicz|10 months ago

Do you have the prompt you used for Gemini 2.5 Pro? I think it would be interesting to compare prompt for prompt!

In this case, it looks like the Gemini output that you linked — as you mentioned — doesn't include the requirement for a scrambler/solver/timer, so it's hard for me to comment directly on the comparison.

I ask because we can totally add Gemini 2.5 pro as one of the models we use under the hood!

sumitkumar|10 months ago

Here is the conversation link with Gemini: https://g.co/gemini/share/d253e2ef286c

Well, I wasn't making a direct comparison; it was just an observation.

I understand Magic has more constraints on which libraries it can use, and it's probably aimed at form-flow-style workflows rather than managing the complex state of games.

jonplackett|10 months ago

I'm constantly intrigued by how people are getting funding for entire companies that are essentially going to be a feature of all LLMs pretty soon.

alexdanilowicz|10 months ago

There's a lot of work around UX and how you interact with the LLM. For example, given an entire React app + a user prompt to update it, which code snippet do you feed to the LLM? The LLM cannot read your mind. In a way it feels like the application layer's job to help it read your mind.
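To make the "which code snippet do you feed to the LLM" point concrete, here is a minimal sketch of the kind of context-selection step an app layer might do before calling a model. The keyword-overlap scoring, file names, and file contents are all hypothetical simplifications; real tools use embeddings, ASTs, or dependency graphs rather than substring matching.

```python
def rank_files_by_relevance(files: dict[str, str], prompt: str, top_k: int = 2) -> list[str]:
    """Rank project files by a naive keyword-overlap score against the user prompt,
    so only the most relevant snippets are sent to the LLM."""
    keywords = {w.lower() for w in prompt.split() if len(w) > 3}

    def score(source: str) -> int:
        text = source.lower()
        return sum(1 for kw in keywords if kw in text)

    ranked = sorted(files, key=lambda path: score(files[path]), reverse=True)
    return ranked[:top_k]

# Hypothetical React project, flattened to path -> source text.
files = {
    "src/Timer.tsx": "export function Timer() { /* countdown timer */ }",
    "src/Cube.tsx": "export function Cube() { /* renders the cube */ }",
    "src/App.tsx": "import { Timer } from './Timer';",
}

print(rank_files_by_relevance(files, "make the timer count down faster"))
# → ['src/Timer.tsx', 'src/App.tsx']
```

The selection heuristic is the product: the model never sees the whole app, only the slice the application layer guesses is relevant.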

echelon|10 months ago

The value is in the application, not the model.

Model providers will be fungible. Applications will capture all the complicated interaction patterns, domain expertise, and distribution.

Apps can route between the cheapest and most effective models. And the Chinese and upstart labs will continue dumping open source on the market: to get distribution, to salt the earth, to commoditize the complement, etc.
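The routing idea above can be sketched in a few lines. The model names, prices, and quality tiers below are placeholders invented for illustration, not real provider pricing:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative numbers only
    quality: int               # rough capability tier, higher is better

# Hypothetical catalogue of providers an app might route between.
CATALOGUE = [
    Model("cheap-open-weights", 0.0002, 1),
    Model("mid-tier-hosted", 0.002, 2),
    Model("frontier-lab", 0.02, 3),
]

def route(min_quality: int) -> Model:
    """Pick the cheapest model that meets the required capability tier."""
    candidates = [m for m in CATALOGUE if m.quality >= min_quality]
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route(min_quality=2).name)  # → mid-tier-hosted
```

Because the routing table lives in the app, swapping or adding a provider is a one-line change, which is exactly what makes the model layer fungible from the application's point of view.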

When will an LLM be able to author a directory of GLB files organized into a game, precisely positioned within a world, with a set of user-tweaked PBR textures? Never. And even if it could, could you fathom the pain? The app layer will do that.

2025 is the year of the "App Layer" in AI.

tibbar|10 months ago

If the company can build a big user base first, then it becomes a possible acquisition target for an LLM company that wants its distribution, à la Windsurf selling itself to OpenAI.