
100M Token Context Windows

94 points| gklitt | 1 year ago |magic.dev

22 comments


shazami|1 year ago

FYI: I wouldn't interview here. I got rejected after a 30-minute behavioral screen, having spent 8 hours on an unpaid take-home.

cedws|1 year ago

That sucks, sorry to hear that. You should reject take-homes like this in the future. At the very least, don’t invest 8 hours before they’ve even interviewed you. The time investment should be symmetrical between the candidate and the employer.

thedevilslawyer|1 year ago

Are you saying you cannot be rejected in any subsequent interview if you do an 8-hour unpaid take-home?

dinobones|1 year ago

Long context windows are, IMO, “AGI enough.”

A 100M-token context window means it can probably store everything you’ve ever told it for years.

Couple this with multimodal capabilities, like a robot encoding vision and audio into tokens, and you get autonomous assistants that learn your house/habits/chores really quickly.

segmondy|1 year ago

An infinite context window is not AGI enough; memory is not a substitute for planning and reasoning. Imagine you have infinite memory but can't plan or reason. You could memorize every chess game you have ever played and still be crushed whenever a new move or variation is introduced, since you won't know what to do next. So very long context windows aren't enough: we need stronger planning and reasoning, and the ability for AI to build a world model of whatever universe it exists and operates in.

dogma1138|1 year ago

Has anyone measured the performance of very large context windows like this vs a good RAG that you also constantly update and curate?

At least with other very large context windows like for example Claude offers a RAG is still very much preferable as it avoids confusion and collisions with information in the context that isn’t correct or relevant.

Sure, you can also prune the context window, and for many existing models you need to do that (I often use an LLM to summarize a context to keep it going), but doing it with a RAG still seems much easier. This especially holds true if you use good knowledge-management techniques to structure your RAG so that your retrievals are optimized.

P.S. On a side note, how confident are we that these very-large-context-window models are not just a RAG in disguise? The models that boast very large windows are, at least for now, all locked behind API-only access.
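The summarize-to-prune loop described above (keep the recent turns verbatim, compress everything older) can be sketched roughly like this; `summarize` is a placeholder for a real LLM call, and all the names and thresholds are made up for illustration:

```python
def summarize(messages):
    """Placeholder: in practice, ask an LLM to compress old turns into one message."""
    text = " ".join(m["content"] for m in messages)
    return {"role": "system", "content": f"Summary of earlier turns: {text[:200]}"}

def prune_context(history, max_turns=8, keep_recent=4):
    """When history grows past max_turns, replace the oldest turns with a summary."""
    if len(history) <= max_turns:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent

history = [{"role": "user", "content": f"turn {i}"} for i in range(12)]
pruned = prune_context(history)
print(len(pruned))  # 5: one summary message plus the 4 most recent turns
```

A RAG sidesteps this loop entirely: instead of deciding what to compress, you retrieve only what's relevant per query.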

jokethrowaway|1 year ago

Context window size is not the limiting factor; the problem is how well the model can use that information.

Even GPT and Claude make glaring mistakes with short prompts.

smusamashah|1 year ago

It should be benchmarked against something like RULER[1]

1: https://github.com/hsiehjackson/RULER (RULER: What’s the Real Context Size of Your Long-Context Language Models)

ipsum2|1 year ago

> To incorporate this, we ask the model to complete a chain of hashes instead (as recently proposed by RULER):

They did mention it, but didn't provide concrete benchmarks.
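For reference, the chain-of-hashes idea works by hiding a chain of "hash A points to hash B" facts inside filler text and asking the model to follow the chain end to end, which defeats simple retrieval shortcuts. A rough sketch of such a test generator (not RULER's actual harness):

```python
import hashlib
import random
import string

def rand_hash():
    """A short random hex token to stand in for a 'hash'."""
    return hashlib.sha1(str(random.random()).encode()).hexdigest()[:8]

def build_chain(depth=4, filler_lines=50):
    """Scatter depth-1 chained facts (h1 -> h2 -> ... -> h_depth) through filler."""
    hashes = [rand_hash() for _ in range(depth)]
    facts = [f"The hash after {a} is {b}." for a, b in zip(hashes, hashes[1:])]
    lines = [
        "filler line %d: %s" % (i, "".join(random.choices(string.ascii_lowercase, k=40)))
        for i in range(filler_lines)
    ]
    for fact in facts:
        lines.insert(random.randrange(len(lines)), fact)
    prompt = ("\n".join(lines)
              + f"\n\nStarting from {hashes[0]}, follow the chain and report the final hash.")
    return prompt, hashes[0], hashes[-1]

prompt, start, answer = build_chain()
# Score a model by checking whether `answer` appears in its completion.
```

Scaling `filler_lines` toward the full context length is what turns this into a long-context benchmark.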

fsndz|1 year ago

Context windows are becoming larger and larger, and I anticipate more research focusing on this trend. Could this signal the eventual demise of RAG? Only time will tell. I recently experimented with RAG and the limitations are often surprising (https://www.lycee.ai/blog/rag-fastapi-postgresql-pgvector). I wonder if we will see some of the same limitations for long-context LLMs. In-context learning is probably a form of semantic/lexical-cue-based arithmetic.

Sakos|1 year ago

I was wondering how they could afford 8,000 H100s, but I guess I accidentally skipped over this part:

> We’ve raised a total of $465M, including a recent investment of $320 million from new investors Eric Schmidt, Jane Street, Sequoia, Atlassian, among others, and existing investors Nat Friedman & Daniel Gross, Elad Gil, and CapitalG.

Yeah, I guess that'd do it. Who are these people and how'd they convince them to invest that much?

IHLayman|1 year ago

Assume around $3/hr per H100 (pretty generous pricing for GCP): that is $2,250/month per GPU, or for their fleet of 8,000 about $18MM/month, around $216MM/year in compute costs alone, not counting SSDs, bucket storage, or egress. With their pre-round funding of 465 - 320 = $145MM, they couldn't have operated that cluster for longer than 8ish months without their funds running dry, unless they got massive discounts somewhere.

Something doesn’t add up here.
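Running the commenter's own assumptions ($3/hr per H100, ~750 billable hours/month, all rough guesses rather than Magic's actual costs) through the arithmetic, the pre-round runway comes out around 8 months:

```python
# All inputs are the commenter's assumptions, not actual pricing.
rate_per_hour = 3.0
hours_per_month = 750          # yields the $2,250/month-per-GPU figure
gpus = 8000

per_gpu_month = rate_per_hour * hours_per_month   # $2,250
fleet_month = per_gpu_month * gpus                # $18M/month
fleet_year = fleet_month * 12                     # $216M/year

remaining = 465e6 - 320e6                         # $145M before the latest round
runway_months = remaining / fleet_month           # ~8 months
print(per_gpu_month, fleet_month, round(runway_months, 1))
```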

0cf8612b2e1e|1 year ago

For those names (access to $billions), I'm curious how much due diligence they do anymore. Do they just make a “chump change” investment in every hot trend? One phony AI startup's pitch deck will look identical to (if not better than) one from a startup with a real edge.

anonzzzies|1 year ago

What is the state of the art on context for open models? Magic won't be open, I guess, after getting ~$500M in VC money.

samber|1 year ago

Based on Mamba?

htrp|1 year ago

Does anyone have a detailed tech breakdown of these guys? I'm not quite sure how their LTM architecture works.

why_only_15|1 year ago

They're not saying.