
goldemerald | 1 year ago

While I love XAI and am always happy to see more work in this area, I wonder if other people use the same heuristics as I do when judging a random arXiv link. This paper has one author, was not written in LaTeX, and has no comment referencing a peer-reviewed venue. Do other people in this field look at these same signals and pre-judge the paper negatively?

I did attempt to check my bias and skim the paper; it does seem well written and takes a decent shot at understanding LLMs. However, I am not a fan of black-box explanations, so I didn't read much (I really like sparse autoencoders). Has anyone else read the paper? How is the quality?


mnky9800n|1 year ago

I think that we should not accept peer review as some kind of gold standard anymore for several reasons. These are my opinions based on my experience as a scientist for the last 11 years.

- it's unpaid work, and you are often asked to do too much of it, so you may not give your best effort

- editors want high-profile papers and minimal review times, so glossy journals like Nature or Science often reject anything that would require real effort to review

- the peers doing a review are often anything but. I have seen self-professed machine learning “experts” who did not know the difference between regression and classification, yet proudly signed their names to their review. I’ve seen reviewers ask you to write prompts that are mean and cruel to an LLM to see if it would classify test data the same way (text data from geologists writing about rocks). As an editor, I have had to explain to an adult, tenured professor that she cannot write in her review that the authors were “stupid” and “should never be allowed to publish again”.

chongli|1 year ago

A further issue is peer review quid pro quo corruption. The reviewer loves your paper but requests one small change: cite some of his papers and he’ll approve your paper.

I don’t know how prevalent this sort of corruption is (I haven’t read any statistical investigations) but I have heard of researchers complaining about it. In all likelihood it’s extremely prevalent in less reputable journals but for all we know it could be happening at the big ones.

The whole issue of citations functioning like a currency recalls Goodhart’s Law [1]:

“When a measure becomes a target, it ceases to be a good measure.”

[1] https://en.wikipedia.org/wiki/Goodhart's_law

3abiton|1 year ago

Scientific peer review is another facet of civilization whose current design does not allow it to scale well. More and more people are being involved in the process, but the quality keeps going down.

cauliflower2718|1 year ago

It looks like it's written in LaTeX to me. Standard formatting varies across departments, and the author is in the business school at CMU.

In some fields, single author papers are more common. Also, outside of ML conference culture, the journal publication process can be pretty slow.

Based on the above (which is separate from an actual evaluation of the paper), there are no immediate red flags.

Source: I am a PhD student and read papers across stats/CS/OR.

ersiees|1 year ago

Another clue: there is no way to download the LaTeX source, which you can do when someone has uploaded the LaTeX to arXiv.

woolion|1 year ago

The LaTeX feel comes in good part from the respect for typographical standards that is encoded as default behaviour. In this document, so much of the spacing is just flat-out wrong, first-paragraph indents are off, etc. If it's indeed LaTeX (it kind of looks like it), someone worked hard to make it look bad.

The weirdest thing is that copy-paste doesn't work; if I copy the "3.1" of the corresponding equation, I get " . "

refulgentis|1 year ago

> I wonder if other people use the same heuristics as me when judging a random arxiv link.

My prior after the header was the same as yours. The hard and interesting part is in the work past the initial reaction.

i.e. if I go with my first-order, least-effort reaction, your comment leaves the reader with a brief, shocked laugh at you seemingly doing performance art: a seemingly bland assessment and an overly broad question...only to conclude with "Has anyone else read the paper? Do you like it?"

But that's not what you meant. You're genuinely curious whether that initial assessment, based on pattern matching, is a long-tail, inappropriate reaction to have. And you didn't mean "did anyone else read it"; you meant "Humbly, I'm admitting I only skimmed it, but I wasn't blown away, for reasons X, Y, and Z. What do you all think? :)"

The paper is superb and one of the best I recall reading in recent memory.

It's a much whiter box than Sparse Autoencoders. Handwaving about what a bag of floats might do in general is much less interesting or helpful than being able to statistically quantify the behavior of the systems we're building.

The author is a PhD candidate at the Carnegie Mellon School of Business, and I was quite taken with their ability to hop across fields to get a rather simple and important way to systematically and statistically review the systems we're building.

apstroll|1 year ago

This paper is doing exactly that, though: handwaving with a couple of floats. The paper is just a collection of observations about what their implementation of Shapley value analysis gives for a few variations of a prompt.

johndough|1 year ago

Two more heuristics:

1. The figures are not vectorized (text in figures cannot be selected). All it takes is replacing "png" in `plt.savefig("figure.png")` with "pdf", so this is a very easy fix. Yet the author did not bother, which shows that he either did not care or did not know.

2. The equations lack punctuation.

Of course you can still write insightful papers with low quality figures and unusual punctuation. This is just a heuristic after all.
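For readers unfamiliar with the fix in point 1, here is a minimal sketch (the figure contents are made up; only the two `savefig` calls matter):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Toy figure standing in for a paper figure
fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4], label="example curve")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()

# Raster output: text in the figure becomes pixels and cannot be selected
fig.savefig("figure.png", dpi=300)

# Vector output: text stays selectable and scales cleanly at any zoom level
fig.savefig("figure.pdf")
```

Same plotting code, one changed file extension; matplotlib picks the backend (raster vs. vector) from the output format.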

chongli|1 year ago

I didn’t even read the paper, I just read the abstract. I was really impressed by the idea of using Shapley values to investigate how each token in a prompt affects the output, including order-based effects.

Even if the paper itself is rubbish I think this approach to studying LLMs at least warrants a second look by another team of researchers.
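As a hedged illustration of the idea (not the paper's actual method), here is a minimal exact Shapley computation over a tiny "prompt". The scoring function `toy_value` is a made-up stand-in for querying an LLM, and note that this classic formulation treats the prompt as a set, so it ignores the order-based effects the abstract mentions:

```python
from itertools import combinations
from math import factorial

def shapley_values(tokens, value):
    """Exact Shapley value of each (unique) token for a payoff function.

    `value` maps a tuple of tokens (a coalition) to a number, e.g. some
    score of a model's output when only those tokens are in the prompt.
    Exact computation is exponential in len(tokens), so this only works
    for tiny examples.
    """
    n = len(tokens)
    phi = {t: 0.0 for t in tokens}
    for t in tokens:
        others = [x for x in tokens if x != t]
        for k in range(n):
            for coalition in combinations(others, k):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = value(coalition + (t,)) - value(coalition)
                phi[t] += weight * marginal
    return phi

# Hypothetical scoring function: each token contributes 1, and "rocks"
# contributes an extra 2, so the attributions are easy to check by hand.
def toy_value(coalition):
    return len(coalition) + (2.0 if "rocks" in coalition else 0.0)

print(shapley_values(("explain", "these", "rocks"), toy_value))
# For this additive toy game: explain=1.0, these=1.0, rocks=3.0
```

By construction the values sum to the score of the full prompt (the efficiency property), which is what makes Shapley attributions attractive for this kind of analysis.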