eqmvii|1 month ago
If you could only give it texts and info and concepts up to Year X, well before Discovery Y, could we then see if it could prompt its way to that discovery?
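A minimal sketch of the setup being proposed; the Document shape and field names are made up for illustration:

    from dataclasses import dataclass

    @dataclass
    class Document:
        text: str
        year: int  # publication year; assumed known and trustworthy

    def filter_corpus(corpus: list[Document], cutoff_year: int) -> list[Document]:
        # Keep only documents published strictly before the cutoff.
        return [doc for doc in corpus if doc.year < cutoff_year]

    # e.g. train only on pre-1905 texts, then probe the model for
    # special relativity (published 1905) and see if it gets there.
    pre_1905 = filter_corpus([Document("...", 1887)], cutoff_year=1905)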
ben_w|1 month ago
You'd have to be specific about what you mean by AGI: all three letters mean a different thing to different people, and sometimes the whole means something not present in the letters.
> If you could only give it texts and info and concepts up to Year X, well before Discovery Y, could we then see if it could prompt its way to that discovery?
To a limited degree.
Some developments can come from combining existing ideas and seeing what they imply.
Other things, like everything to do with relativity and quantum mechanics, would have required experiments. I don't think any of the relevant experiments had been done prior to this cut-off date, but I'm not absolutely sure of that.
You might be able to get such an LLM to develop all the maths and geometry for general relativity, and yet find the AI still tells you that the perihelion shift of Mercury is a sign of the planet Vulcan rather than of a curved spacetime: https://en.wikipedia.org/wiki/Vulcan_(hypothetical_planet)
grimgrin|1 month ago
https://www.robinsloan.com/winter-garden/agi-is-here/
opponent4|1 month ago
Well, they obviously can't. AGI is not science, it's religion. It has all the trappings of religion: prophets, sacred texts, an origin myth, an end-of-days myth and, most importantly, a means to escape death. Science? The only measure of "general intelligence" would be comparison against the only one we know of, the human one, and we have absolutely no means by which to describe it. We do not know where to start. This is why, when you scratch the surface of any AGI definition, you find only circular definitions.
And no, the "brain is a computer" is not a scientific description, it's a metaphor.
water-data-dude|1 month ago
Ways data might leak into the model that come to mind: misfiled or mislabeled documents, footnotes, annotations, and document metadata.
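A rough sketch of how one might scan for those leaks; the cutoff year, anachronism list, and metadata key here are all hypothetical:

    import re

    CUTOFF_YEAR = 1905
    # Illustrative only: terms that should not exist in a pre-1905 corpus.
    ANACHRONISMS = ["general relativity", "spacetime curvature"]

    def flag_possible_leaks(text: str, metadata: dict) -> list[str]:
        # Return human-readable reasons a document might leak
        # post-cutoff information into the training set.
        reasons = []
        # Years in the body, footnotes, or annotations past the cutoff.
        for year in re.findall(r"\b(1[89]\d\d|20\d\d)\b", text):
            if int(year) >= CUTOFF_YEAR:
                reasons.append(f"mentions year {year}")
        # Metadata from a modern edition or scan can betray the date.
        if metadata.get("edition_year", 0) >= CUTOFF_YEAR:
            reasons.append("edition_year in metadata is post-cutoff")
        lowered = text.lower()
        for term in ANACHRONISMS:
            if term in lowered:
                reasons.append(f"contains anachronistic term '{term}'")
        return reasons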
Trufa|1 month ago
As a thought experiment I find it thrilling.
Rebuff5007|1 month ago
The fact that tech leaders espouse the brilliance of LLMs and don't use this specific test method is infuriating to me. It is deeply unfortunate that there is so little transparency or standardization in the datasets available for training and fine-tuning.
Advertising this practice would make for more interesting and informative benchmarks. OEM models that are always "breaking" the benchmarks are doing so with improved datasets as well as improved methods. Without holding the datasets fixed, progress on benchmarks is very suspect IMO.
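Even publishing a hash of the training set next to the scores would help; a minimal sketch, assuming file-based datasets:

    import hashlib

    def dataset_fingerprint(paths: list[str]) -> str:
        # Hash the contents of every file in a fixed order, so two
        # benchmark runs can prove they trained on byte-identical data.
        h = hashlib.sha256()
        for path in sorted(paths):
            with open(path, "rb") as f:
                h.update(f.read())
        return h.hexdigest()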
feisty0630|1 month ago
LLMs have neither intelligence nor problem-solving ability (and I won't be relaxing the definition of either so that some AI bro can pretend a glorified chatbot is sentient).
You would, at best, be demonstrating that sharing knowledge across multiple disciplines and nations (which is a relatively new concept, at least at the scale of something like the internet) leads to novel ideas.