(no title)
akst | 1 month ago
However, I imagine linguists have a more precise definition than most of us, so instead of speculating, I decided to read the paper.
Something they explain early on is the concept of multiword chunks (an idiom is one example), which tend to communicate meaning without any meaningful grammatical structure. They say this:
> “… multiword chunks challenge the traditional separation between lexicon and grammar associated with generativist accounts … However, some types of multiword chunks may likewise challenge the constructionist account.”
I’m an amateur language nerd with a piecemeal understanding of linguistics, but I’m no linguist, so I don’t know what half of this means. Still, it really sounds like they have a very specific definition here, one that neither of us is talking about, and one that possibly hasn’t been well communicated in the article.
That said, I’m out of my depth here, and I have a feeling most people replying to this article probably are too, if they’re going off the title and the article that linked to the paper. But I would be interested to hear the opinion of a linguist, or of someone more familiar with this field and its experimental methods.
----------
[1] With the hypothesis testing typically done in science, you can’t really accept an alternative hypothesis, only reject a null one given your evidence. So you get words like “may” or “might”, or “evidence supporting x, y, z”, and noncommittal titles like this one. In the social sciences, and non-natural sciences generally, I feel this is even more the case, given the difficulty of designing definitive experiments without crossing some ethical boundary. In natural science you can put two elements together, control the variables, and see what happens; in the social sciences that’s really hard.
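To illustrate the reject-only logic of that footnote (this is my own sketch, not anything from the paper, and the data is made up): a permutation test can produce a small p-value that lets you reject the null hypothesis of “no difference between groups”, but a large p-value only means you failed to reject it, never that the null is true.

```python
# Minimal permutation test, stdlib only. The sample data is invented
# purely for illustration.
import random

random.seed(0)

group_a = [2.9, 3.1, 3.4, 3.0, 3.2]  # hypothetical measurements
group_b = [3.3, 3.6, 3.5, 3.8, 3.4]

def mean(xs):
    return sum(xs) / len(xs)

# Observed difference in group means.
observed = abs(mean(group_a) - mean(group_b))

# Under the null hypothesis, group labels are arbitrary, so we shuffle
# the pooled data and count how often a random split shows a difference
# at least as large as the observed one.
pooled = group_a + group_b
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:len(group_a)], pooled[len(group_a):]
    if abs(mean(a) - mean(b)) >= observed:
        count += 1

p_value = count / trials
# Small p: reject the null. Large p: "failed to reject", which is NOT
# the same as accepting the null (or the alternative).
print(f"p = {p_value:.3f}")
```

The asymmetry is the point: nothing in this procedure ever confirms a hypothesis, it only measures how surprising the data would be if the null were true, which is why paper titles stay noncommittal.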
foldr | 1 month ago
This is just silly (the paper, not your comment). Do these folks really think they're the first people to think of associating meanings with multi-word units? Every conceivable idea about what the primes of linguistic meaning might be has been explored in the existing literature. You might be able to find new evidence supporting one of these ideas over another, but you are not going to come up with a fundamentally new idea in that domain.
As another commenter has pointed out, many of the sequences of words they identify correspond rather obviously to chunks of structure with gaps in certain argument positions. No-one would be surprised to find that 'trees with gaps to be filled in' are the sort of thing that might be involved in online processing of language.
On top of that, the authors seem to think that any evidence for the importance of linear sequencing is somehow evidence against the existence of hierarchical structure. But rather obviously, sentences have both a linear sequence of words and a hierarchical structure. No-one has ever suggested that only the latter is relevant to how a sentence is processed. Any linguist could give you examples of grammatical processes governed primarily by linear sequence rather than structure (e.g. various forms of contraction and cliticization).
akst | 1 month ago
But this is also academia: they want evidence behind claims even if those claims feel intuitive. In the social sciences you'll have models and theories that are largely true in a lot of cases but fail to explain variance from the model. The constructionist stuff sounds like one of those larger models, and they are pointing out where it falls short, not to entirely invalidate it but to show the model has limitations.
I have a feeling the authors are well aware they aren't the first people to consider this, but they did the legwork to provide some empirical evidence for the claim, which is something you want when challenging the orthodoxy of a field. It's entirely possible they're working on a larger piece of work and are being asked to first demonstrate this fact that the larger work rests on. But I'm largely speculating there.
> On top of that, the authors seem to think that any evidence for the importance of linear sequencing is somehow evidence against the existence of hierarchical structure
The way I see it, if you can demonstrate comprehension in the absence of this structure, you can make the case that the structure is optional and that comprehension therefore may not rely on it. That's a different claim from saying the structure helps in no way whatsoever, which I don't think their evidence necessarily challenges (based on my read).
My view is that when a language depends a lot on complex grammar, what's happening is that it's trying to resolve ambiguity, and languages can address this problem in a number of ways. Languages like Russian (and many non-English Indo-European languages) handle more of this ambiguity through inflection; in tonal languages, tone creates a greater possible combination of sounds, which could provide other ways of resolving ambiguity. That's my guess at least, and I accept I have no idea what I'm talking about here.