A colleague of mine was doing research on the vision system of a spitting fish, which hunts by spitting water at insects sitting on leaves above the water surface. What they found is that the "firing solution" is formed in the retina and passed to the brain to initiate the spit.
I'd be interested to see if the LGN of the turtle performs functions distinct from those of mammals. The LGN (lateral geniculate nucleus) is a super interesting component of the visual system - e.g. iirc people with dyslexia have morphologically distinct LGNs.
Pitts is probably known to HN readership for his early work on neural networks which put him in dialogue with people like Claude Shannon. It's fascinating to see how much that early world of 1940s/50s era computer researchers overlapped with people literally cutting into animal brains (not to mention experimenting on their own via psychedelics and the like).
My experience with turtles is confined to reef diving.
Turtles are unusual in that they are extremely tolerant of hypoxia. It may be that they have adapted to a relatively shallow network that doesn't parse much, but is pretty good at extracting the most meaningful features (oxygen/energy efficiency favored over information efficiency).
I wonder if this pattern persists for other reptiles.
I think it's unrelated. You could say dolphins and whales are "tolerant of hypoxia" too, yet from their behavior and the layout of their brains it is evident that evolution has not traded "information efficiency" for "tolerance to hypoxia" in their case.
hey, i don't know much about turtology/tortology, nor do i know much about the eye, other than that i need glasses because my eyes are a weird shape. could somebody ELI5 please, for us plebs?
Disclaimer: this is all _very_ rough and approximate - my only qualifications are that I hung out with neuroscientists in grad school.
You can largely think of mammalian eyeballs as camera sensors, which see the world and pass lightly pre-processed pixel data through cables (your optic nerves) into the back part of your brain where your visual cortex "starts". That pixel-level data flows through a kind of pipeline of filters which detect increasingly complex features the further forward they are in the brain.
The first few layers detect fairly understandable things - like little line segments, and corners that turn left or right, etc. The downstream layers take those detections as inputs to detect more complex features, e.g. helping to detect larger closed shapes and determine which pixels are inside vs. outside those shapes. This data then feeds into higher-level object detection, etc.
What's really cool is that the first few layers of this are more or less laid out like an image in RAM. A star pattern of shapes in the real world would cause a star pattern of neurons in your head to start firing. This sensible layout has really helped neuroscientists grasp what the brain is doing, and has IMO been one of the big reasons we've been able to take such inspiration from neuroscience into machine vision.
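As a rough analogy (and only an analogy - this code is made up for illustration, not taken from the neuroscience literature), the early "oriented edge detector" layers can be sketched as a hand-written convolution, and the retinotopic "image in RAM" property shows up in how the response map preserves the spatial layout of the input:

```python
import numpy as np

# A tiny grayscale "retina" image: a bright vertical bar on a dark background.
image = np.zeros((5, 5))
image[:, 2] = 1.0

# A hand-rolled vertical-edge kernel, loosely analogous to an
# orientation-selective cell in early visual cortex.
kernel = np.array([
    [-1.0, 0.0, 1.0],
    [-1.0, 0.0, 1.0],
    [-1.0, 0.0, 1.0],
])

def convolve2d(img, k):
    """Valid-mode 2D convolution (really cross-correlation, as in CNNs)."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

response = convolve2d(image, kernel)
print(response)
```

The response map is positive on one side of the bar and negative on the other, and the strong responses sit exactly where the bar's edges sit in the input - a crude version of how a feature in the visual field maps onto a matching patch of firing neurons.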
Apparently though, the turtle brain doesn't work like this at all. As far as I can tell, the turtle visual processing system is just a big weird soup of neurons with no familiar structure to it. It works somehow, and I'm excited to follow the progress of decoding what it's actually doing.
By the way, if any of this interests you I recommend grabbing a copy of David Marr's book "Vision". It's probably pretty outdated at this point, but it's wonderfully written and I think gets the basic points across very well: https://mitpress.mit.edu/books/vision
I'm also a layperson but my take was that the turtle visual cortex doesn't carry individual 'pixel' signals (and low-level features) like ours does. Instead it immediately turns the pixel signals into a high-level holistic flow of perceptions.
It is fascinating. But I always get uncomfortable reading stuff like this... I mean, they are placing sensors in a turtle’s brain, physically immobilizing it, aiming its head at a screen for a while, and you generally don’t release something to the wild after that. The ethical thing is considered to be killing it.
I get that there are a lot of aspects to modern life that sterilize the gruesomeness of reality. Still makes me uncomfortable when the illusion is peeled back and I catch a glimpse.
helge9210 | 5 years ago
tibbydudeza | 5 years ago
Since they hunt in a school, they need to predict where the prey will fall due to competition.
The fish will initiate a swim maneuver after observing the first 75 ms of the ballistic path of the prey as it falls.
https://www.cs.bgu.ac.il/~ben-shahar/Publications/2018-Ben_T...
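A minimal sketch of the prediction problem described above, assuming drag-free projectile motion; the sample positions, time step, and function name are all made up for illustration, not taken from the linked paper:

```python
# Estimate where falling prey will land from a brief observation window,
# assuming drag-free projectile motion. All numbers are illustrative.
G = 9.81  # gravitational acceleration, m/s^2

def predict_landing_x(x0, y0, vx, vy):
    """Solve y0 + vy*t - 0.5*G*t^2 = 0 for the positive root t,
    then return the horizontal position at that landing time."""
    disc = vy * vy + 2.0 * G * y0
    t_land = (vy + disc ** 0.5) / G
    return x0 + vx * t_land

# Two position samples 75 ms apart give a crude finite-difference
# velocity estimate, mimicking the fish's short observation window.
dt = 0.075
p_early = (0.000, 0.3000)   # (x, y) in metres at the start of the fall
p_late = (0.015, 0.2724)    # observed 75 ms later

vx = (p_late[0] - p_early[0]) / dt
vy = (p_late[1] - p_early[1]) / dt

print(round(predict_landing_x(p_late[0], p_late[1], vx, vy), 3))
```

The two-sample velocity estimate is deliberately crude (it averages over the window rather than measuring the instantaneous fall speed), which is part of what makes a real interception from only 75 ms of observation so impressive.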
shatnersbassoon | 5 years ago
techbio | 5 years ago
benbreen | 5 years ago
gumby | 5 years ago
matthewdgreen | 5 years ago
killjoywashere | 5 years ago
nullcat | 5 years ago
zksmk | 5 years ago
nomoreusernames | 5 years ago
rcv | 5 years ago
pshc | 5 years ago
tomrod | 5 years ago
FPGAhacker | 5 years ago