top | item 39494957


User3456335 | 2 years ago

Tbf, half the linguistics discipline thought that grammar was somehow hardcoded into our brains, which is clearly ridiculous if you look at how LLMs work, so you're not the only one who had misconceptions.

Perhaps you can turn your idea around slightly into designing a language that strikes a balance between formality and universality, rather than between computers and humans. Because even though computers now speak our language, they don't use it in a logical way at all (arguably because we humans don't).

And while mathematics is very formal, it has a lot of trouble expressing ideas from branches that aren't as formal. Things like fuzzy logic have been created to bridge that gap, but they are still very much on the formal side.

Perhaps you could even derive an academic language for a specific field, standardizing between synonymous constructions. You could even use LLMs to accelerate the process. Maybe LLMs are a good thing that makes your work easier!


breck | 2 years ago

> You could even use LLMs to accelerate the process. Maybe LLMs are a good thing that makes your work easier!

Oh I 100% agree. LLMs are amazing. Plenty of neural agents in my brain are on board. I use them every day to work on problems in a way not possible before.

I think what I was trying to express is that a contrarian idea might require developing a large number of your own original solver brain circuits that are very dumb, always running, trying to brute force a path for your idea to work.

Later you can develop new circuits that recognize there's now a better approach, but those solver circuits that you grew are still in your brain, occasionally still running (like sometimes when I wake up in the morning), because that's what you trained them to do.

In other words, there's a risk to taking on a contrarian idea in that you have to build up lots of brain circuits that will stick around for life, even if your idea turns out to be wrong. I'm sure people have written about this more eloquently. I need to search more.

User3456335 | 2 years ago

Ahh yeah, I was trying to help you repurpose these circuits given the new information. But perhaps that's not possible.

It sounds very similar to what happens with love. In my experience, at least, when you love someone you build up these circuits that care about the other person and you cannot break them down, it seems. You can ignore them but then there's this part of your brain you're ignoring.

So perhaps you could say you were/are literally in love with the idea.

andrewflnr | 2 years ago

> ...clearly ridiculous if you look at how LLMs work

This is well off topic now, but this doesn't follow at all. LLMs aren't brains and don't even resemble them that closely. LLMs demonstrate that it's possible to learn grammar from scratch, not that humans actually do. I for one think it's pretty plausible that humans have a little bit of neural wetware-acceleration for syntax. In much the same way, it's possible to implement AES with just an ALU and memory operations, but your CPU probably has special hardware anyway.
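To make the AES analogy concrete: the field arithmetic at the heart of AES really does reduce to shifts, compares, and XORs, which is why a plain ALU suffices even though modern CPUs ship AES-NI hardware anyway. A minimal sketch (just an illustration of the point, not anyone's production code):

```python
# Multiplication in AES's GF(2^8) field, using only ALU-style
# operations: shift, compare, XOR. This is the primitive underlying
# the MixColumns step of AES (FIPS 197).

def xtime(b: int) -> int:
    """Multiply a byte by {02} in the AES field."""
    b <<= 1
    if b & 0x100:        # overflowed past 8 bits:
        b ^= 0x11B       # reduce by the AES polynomial x^8+x^4+x^3+x+1
    return b & 0xFF

def gf_mul(a: int, b: int) -> int:
    """General GF(2^8) multiply via shift-and-add."""
    result = 0
    while b:
        if b & 1:
            result ^= a  # "add" is XOR in this field
        a = xtime(a)     # step a through {02}, {04}, {08}, ...
        b >>= 1
    return result

# Worked example from the AES spec (FIPS 197, Sec. 4.2.1):
print(hex(gf_mul(0x57, 0x13)))  # -> 0xfe
```

Software implementations do exactly this (often via lookup tables for speed), while AES-NI collapses whole rounds into single instructions: same computation, dedicated silicon, which is the commenter's point about syntax.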