spacetimeuser5 | 1 year ago
"Stochasticity" sounds as though some theoretical modelling has been performed to infer this. But does it imply that some tiny % of ligand molecules - endogenous or exogenous - would just by chance get "an empty run", never binding to their receptors (even though structurally they are perfectly good ligands with high affinity), and be removed via waste-removal systems? Is there any experimental evidence for this, e.g. a study using radiolabelled high-affinity ligand molecules to see what % of them gets "an empty run"?
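For what it's worth, the "empty run" fraction being asked about can be sketched as a toy competition between binding and clearance. This is not a model of any real system - the per-step probabilities below are made-up illustrative numbers, not measured rates:

```python
import random

def empty_run_fraction(p_bind, p_clear, n_ligands=100_000, seed=0):
    """Toy model: each free ligand, per time step, binds a receptor
    with probability p_bind or is cleared unbound (an 'empty run')
    with probability p_clear; otherwise it keeps diffusing."""
    rng = random.Random(seed)
    cleared = 0
    for _ in range(n_ligands):
        while True:
            r = rng.random()
            if r < p_bind:
                break            # bound a receptor
            elif r < p_bind + p_clear:
                cleared += 1     # removed by waste systems, never bound
                break
    return cleared / n_ligands

# Analytically the empty-run fraction is p_clear / (p_bind + p_clear);
# with these made-up rates that's 0.01 / 0.11, i.e. roughly 9%.
print(empty_run_fraction(p_bind=0.10, p_clear=0.01))
```

So whether the empty-run fraction is a "tiny %" depends entirely on how the clearance rate compares with the binding rate - which is exactly what a radiolabelling study would need to pin down.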
The mean free path seems sort of sensible in the extracellular space, though the variables affecting it (the large numbers of receptors and binding sites, the very small spaces, and the temperature) may still not be enough. But wouldn't the mean free path be near zero inside cells, where every nanometre should be occupied by some other biochemical pathway/reaction or bioelectric activity?
>>Neither you nor I will see biology as a mature science.
I personally wouldn't care much about proving anything to anybody in some absolute sense; first of all I want to prove it instrumentally and make stuff work, for myself at least. I think any biology student with a decent understanding should have a mini lab for personalized medicine (e.g. Sinclair mentioned that his recent research on using 6 chemical compounds for OSK epigenetic reprogramming, rather than bulky viral vectors, could be done by any biology student).
Balgair | 1 year ago
The mean free path is pretty much 0 all over, so to speak. I was just trying to tie it back into more EE concepts for you. The idea is that things are just randomly moving about, on a 'free' mean free path, until they aren't, and that stoppage costs energy. At body temperature, it doesn't take much to knock a binding ligand out of a cleft. So the stiffer the bind, the harder it is to dissociate - and the harder to get it to unbind at the end. Nature kind of figures this all out on her own; the optimal energies are found via evolution. It's all a 'good enough' system.
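To put rough numbers on "stiffer bind, harder to dissociate": a back-of-the-envelope Arrhenius-style estimate of bound lifetime works. The 1 GHz attempt rate below is an assumed illustrative prefactor, not a measured one:

```python
import math

def mean_residence_time(dG_kT, attempt_rate=1e9):
    """Arrhenius-style estimate: k_off = attempt_rate * exp(-dG/kT),
    so the mean bound lifetime is 1/k_off. dG_kT is the binding
    free energy in units of kT; attempt_rate (Hz) is an assumed
    prefactor, chosen only for illustration."""
    k_off = attempt_rate * math.exp(-dG_kT)
    return 1.0 / k_off

# A few kT of extra binding energy changes the bound lifetime by
# orders of magnitude: thermal kicks easily pop weak binders out
# of the cleft, while stiff binds are hard to get to unbind.
for dG_kT in (5, 10, 15, 20):
    print(f"dG = {dG_kT:2d} kT -> mean bound time ~ {mean_residence_time(dG_kT):.1e} s")
```

The point is just the exponential: each extra kT of binding energy multiplies the residence time by e, so "good enough" binding energies sit in a fairly narrow window between falling off instantly and never letting go.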
So, the trick with bio is that it's a lot like how Clausewitz thinks of war: war is simple, it's just that all the simple stuff is really hard. In that sense, bio is conceptually easy to do; it's just really hard to implement anything. Feynman talked a bit about this in one of his lectures: getting a rat to randomly go into a room and then discover that there is cheese in it takes a tremendous amount of prep and careful cleaning and the like. Rats have really, really good noses. It's so easy to fool yourself in bio, because the systems are just so complicated. And, for me, that's been true up and down the size scale, from single cells to whole animals. The systems are so complex that you really only get to ask simple questions and then hope you controlled the experiment correctly.
spacetimeuser5 | 1 year ago
Evolution also "tries" to save energy wherever possible, so spending energy on synthesizing endogenous ligands that will eventually be discarded seems a bit redundant. There is also a theorem in evolutionary game theory that the probability that natural selection will allow an organism to see reality as it is (= the truth) is exactly zero, since "good enough" suffices. I was arguing about that with Gemini, and it agreed with me. My point is that evolution is just a tool (like ChatGPT) with its own instrumentally limited pool of empirical data (80% of which was also obtained from macroscopic-enough observations rather than reverse engineering or experimentation) to build upon.
I actually want to apply one EE concept which has some experimental basis. The reason I am digging into this is that I am searching for possible explanations of a couple of dozen experimental studies in bioelectrics/magnetics that I found (though I won't discuss them in depth on a public forum).
authorfly | 1 year ago