A new-ish field of "mechanistic interpretability" is trying to poke at weights and activations and find human-interpretable ideas within them. It's been making a lot of progress lately, and some folks are trying to apply its ideas to AlphaFold 2, in the hope of learning what the model has "discovered" about biology and molecular interactions.

Perhaps we're in an early stage of Ted Chiang's story "The Evolution of Human Science", where AIs have largely taken over scientific research and a field of "meta-science" has developed in which humans translate AI research into more human-interpretable artifacts.