wholehog | 1 year ago
The Circuit Training repo was just walking through an example. It is common for an open-source repo to describe simple examples for testing or validating your setup --- that does not mean this is how you should get optimal results in general. The confusion may stem from their statement that, in this example, they produced results comparable with the pre-trained results in the paper. That is clearly not a general repudiation of pre-training.
If Cheng et al. genuinely felt this was ambiguous, they should have reached out to the corresponding authors. If they ran into some part of the repo they felt they had to "reverse-engineer", they should have asked about that, too.
dogleg77 | 1 year ago
wholehog | 1 year ago
"[8] Prior to publication of Cheng et al., our last correspondence with any of its authors was in August of 2022, when we reached out to share our new contact information.
[9] In contrast, prior to publishing in Nature, we corresponded extensively with Andrew Kahng, senior author of Cheng et al. and of the prior state of the art (RePlAce), to ensure that we were using the appropriate settings for RePlAce."