(no title)
Stiopa | 10 months ago
I've only watched the demo, but judging from the fact that there are several agent-decided steps in the model-generation process, I think it'd be useful for Plexe to ask the user in between whether they're happy with the plan for the next steps, so it's more interactive and not just a single, large one-shot run.
E.g. telling the user which features the model plans to use, and letting the user request changes before that step is executed.
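The checkpoint described above could be sketched as a simple approval gate between agent-decided steps. This is a hypothetical illustration, not Plexe's actual API; the function name and prompt format are made up.

```python
# Hypothetical sketch of an interactive plan-approval step between
# agent-decided stages. Names here are illustrative, not Plexe's API.

def confirm_plan(step_name: str, plan: list[str], ask=input) -> list[str]:
    """Show the agent's plan for the next step and let the user
    approve it or request changes before the step is executed."""
    print(f"Planned {step_name}: {', '.join(plan)}")
    answer = ask("Approve? (yes / comma-separated replacement): ").strip()
    if answer.lower() in ("", "y", "yes"):
        return plan  # user is happy, proceed with the agent's plan
    # user typed a replacement list: override the agent's choice
    return [item.strip() for item in answer.split(",")]

# Example: the agent proposes features, the user swaps one out
features = confirm_plan(
    "features", ["age", "income", "zip_code"],
    ask=lambda _: "age, income, tenure",  # simulated user reply
)
```

The `ask` parameter is injected only so the interaction can be simulated non-interactively; a real implementation would block on actual user input (or a UI callback) at each checkpoint.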
I also wanted to ask how you plan to scale to more advanced (case-specific) models. I see this as a quick and easy way to get simpler models working, especially for less ML-experienced people, but I'm curious what would change for more complicated models or demanding users?
impresburger | 10 months ago
Regarding more complicated models and demanding users, I think we'd need:
1. More visibility into the training runs: log more metrics to MLflow, visualise the state of the multi-agent system so the user knows "who is doing what", etc.
2. Give the user more control over the process, both before the build starts and during it. Let the user override decisions made by the agents. This will require the mechanism I mentioned for letting the user and the agents send each other messages during the build process.
3. Run model experiments in parallel. Currently the whole thing is single-threaded, but with better parallelism (and potentially launching the training jobs on a separate Ray cluster, which we've started working on) you could throw more compute at the problem.
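The parallel-experiments idea could look roughly like the sketch below. Plexe mentions moving training jobs to a Ray cluster; this stand-in uses only the standard library's `concurrent.futures` to show the same shape, and `run_experiment` is a placeholder for a real training job.

```python
# Sketch of running model experiments in parallel instead of in a
# single thread. Stand-in for a real distributed setup (e.g. Ray);
# the configs and scoring here are purely illustrative.
from concurrent.futures import ThreadPoolExecutor

def run_experiment(config: dict) -> dict:
    """Placeholder training job: returns a fake score per config."""
    score = 1.0 / (1 + config["depth"])  # stand-in for a real metric
    return {"config": config, "score": score}

def run_all(configs: list[dict]) -> dict:
    """Launch all experiments concurrently and keep the best result."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_experiment, configs))
    return max(results, key=lambda r: r["score"])

best = run_all([{"depth": d} for d in (1, 2, 4, 8)])
print(best)
```

In a real setup each `run_experiment` call would be a remote training task (e.g. a Ray remote function) rather than a local thread, but the fan-out/reduce structure stays the same.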
I'm sure there are many more things that would help here, but these are the first that come to mind.
What are your thoughts? Anything in particular that you think a demanding user would want/need?
Stiopa | 9 months ago