Well, if we're nitpicking here, it is not 86,400 s/day (24 hours * 3600 s/hour) and 7.27x10^-5 radians/s, but 86,164.091 s and 7.29x10^-5 radians/s.
24 hours is the time it takes the Sun to return to the same spot in the sky, because the Earth has to rotate for another 3m 56s to make up the angle gained by revolving around the Sun in the same direction as its rotation. This applies to the other planets that also rotate and revolve in the same direction - Mercury, Earth, Mars, Jupiter, Saturn, and Neptune. A sidereal day, 23h 56m 4.091s, is how long it takes distant stars to return to the same spot in the sky.
Damn, I knew that is why I botched my 6-stop exposure at my daughter's graduation! She can't blame me now! Thank you HN!
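A quick sanity check of those numbers in Python, using only the figures quoted above (a gyroscope senses rotation relative to the stars, so the sidereal rate is the one that matters for stabilization):

```python
import math

solar_day_s = 24 * 3600        # 86,400 s: the Sun returns to the same spot
sidereal_day_s = 86_164.091    # 23h 56m 4.091s: stars return to the same spot

omega_solar = 2 * math.pi / solar_day_s        # ~7.27e-5 rad/s
omega_sidereal = 2 * math.pi / sidereal_day_s  # ~7.29e-5 rad/s

# The two day lengths differ by ~236 s, i.e. the 3m 56s mentioned above.
print(f"difference per day: {solar_day_s - sidereal_day_s:.0f} s")
```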
You don't need GPS to figure out the correction for this. Inertial navigation systems in aircraft (which use very stabilised platforms with a lot of math involved) worked before GPS was available.
It helps to have a rough indication of the current latitude on startup, but you can also figure it out from the gyro outputs. Just takes longer.
With modern sensors (solid state laser gyroscopes) it has all become a lot smaller so if you really want to you can do this in a camera. It's just probably going to be too expensive for what it brings, because 6+ stops of stabilisation is a lot already.
Aerospace-grade laser gyroscopes are incredibly expensive (and bulky), and even then they still have massive drift after several hours. If you don't have GPS to relocalize precisely at least every day, there is no way you can know the location of the camera on Earth for more than a day, even with state-of-the-art aerospace hardware.
"Worked" makes it seem like you throw a switch and it just gives you position data. Those units take anywhere from 6 to 10 minutes to align, and if you move the platform, it will error out and you must restart the alignment. Current systems take their initial fix from GPS, but with the early systems the operator had to know the position and key it into the unit manually.
"Worked" with extreme care operated by a qualified professional.
I mean, surely if you are doing something that requires this level of precision, you could just ask the user to input their current known location? I doubt the compensation would differ meaningfully even if the user misdialed by ten or twenty meters (or even if the camera was actually moving around).
> The second solution is much more plausible, but still very difficult. The user would have to be pointing the camera at the subject for long enough such that the drift in their aim at the subject is smaller than the drift from the rotation of the earth. This is also implausible. What is concerning though, is that this second method is one that could work very well to cancel out Earth’s rotation on the CIPA specified stabilization test apparatus.
So, basically dieselgate but for image stabilization
It seems the camera could use optical flow to get a baseline reading and calibrate the inertial frame offsets. They don't need to point accurately for a long time?
Or maybe that is the method they assume for the second solution and they calculated that it's infeasible.
The image with the 2 Earths... that only works if the camera is not also on the ground, but it is? How is the rotation of the object and the camera not identical? Why would it rotate ‘upwards’?
Also, if the issue is relative motion or rotation between camera and object, wouldn’t two sensors, one on the camera and one on the subject be able to solve this, since we can see if their rotations/movement match up or not?
Imagine the camera were floating just above the surface of the Earth, and also that it had perfect image stabilization. This image stabilization would keep the camera always oriented in the same direction. Same direction relative to what? To the rest of the universe. So if it was pointing right at a star, it would continue pointing directly at that star as it went around and around the Earth. From our perspective on the surface, the camera would appear to be flipping over itself as it kept pointing at that star.
Unfortunately, this would be pretty bad for taking a picture of something that was right in front of the camera (relative to the surface of the Earth). You'd be in front of the camera, ready for your picture, and the camera would appear to start rotating as it kept that distant star in view.
So with a perfect image stabilizer, this is what the camera is actually trying to do, even when standing on the Earth with a tripod. It actually senses the rotation of the Earth, and tries to cancel it out, just like it would cancel out your hands shaking. But while it's good to cancel out your hands shaking (because that's a motion that's independent of the subject of the photo), it's not good to cancel out the rotation of the Earth (because the subject of the photo is actually moving with you).
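To put a rough number on what that over-correction costs: under a small-angle approximation (and worst-case pointing; the actual residual depends on latitude and direction), the drift a "perfect" inertial stabilizer would introduce for an Earth-fixed subject is:

```python
OMEGA_SIDEREAL = 7.2921e-5  # rad/s, Earth's rotation rate

def drift_on_sensor_um(focal_length_mm, exposure_s):
    """Image drift if stabilization perfectly holds an inertial frame
    while the subject rotates with the Earth (small-angle approximation,
    worst-case pointing direction)."""
    angle_rad = OMEGA_SIDEREAL * exposure_s
    return focal_length_mm * 1_000 * angle_rad  # mm -> micrometres

# e.g. a 4 s exposure at 100 mm drifts roughly 29 um across the sensor,
# which is several pixels at typical pixel pitches of a few microns.
blur_um = drift_on_sensor_um(100, 4)
```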
We all want to keep missiles out of the hands of bad people.
Parts to make really good cameras could be taken out and used in missiles, to tell them where to go.
So we now have laws to keep those really good parts out of cameras, for safety. Cameras still work fine, but you need a tripod to get good pictures when it's dark out.
You can back-calculate orientations with high-pass filtered gyro data, use those to rotate the unfiltered gyro data into the current reference frame, then low-pass the unfiltered but rotation-corrected gyro data to get the Earth's rotation axis in the current reference frame. From that, one can estimate the expected rotation that should be ignored.
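A toy numpy sketch of that idea under idealized assumptions (white, zero-mean "shake" noise standing in for hand motion and no gyro bias - real hand movement is far larger and not white): dead-reckon orientation from the gyro itself, rotate each sample back into the starting frame, and average:

```python
import numpy as np

OMEGA_E = 7.2921e-5  # rad/s

def rodrigues(rvec):
    """Rotation matrix for a rotation vector (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

rng = np.random.default_rng(0)
dt, n = 0.01, 60_000                          # ten minutes of 100 Hz gyro data
omega_world = np.array([0.0, 0.0, OMEGA_E])   # Earth's axis in the start frame

R = np.eye(3)          # dead-reckoned orientation relative to the start frame
total = np.zeros(3)
for _ in range(n):
    shake = rng.normal(0.0, 1e-3, 3)   # idealized zero-mean shake, rad/s
    gyro = R.T @ omega_world + shake   # what the sensor reads (body frame)
    total += R @ gyro                  # rotate each sample back to start frame
    R = R @ rodrigues(gyro * dt)       # propagate orientation from the gyro

omega_est = total / n   # low-pass in a fixed frame -> Earth's rotation vector
```

The averaging only works because every sample has been rotated into a common frame first; averaging raw body-frame rates while the camera tumbles would smear the Earth-rate vector away.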
Solution (2) as written seems to imply that the camera can only use the gyroscope signal while the camera is pointed at the subject, but I cannot see why that is a strong limitation.
In theory, you can take the last N seconds of data from the gyroscope (I assume it is running while the camera is active) to get the overall drift, even if it is tumbling around for a while before being pointed at the subject... assuming the tumbling has enough periods that are correlated with the Earth's rotation (e.g. someone carrying it, not pointing it at an aircraft or something moving EW for the whole window in a way that is anticorrelated with the rotation).
That would only work in the case that the camera is fixed on a tripod and has a long period of stable / rigid pointing before the exposure during which to collect this data. This is sometimes the situation in which image stabilization is used. (But if you can be that stable for that long on a tripod, you may not actually need image stabilization.)
By far the more common case for image stabilization is one in which the photographer is hand-holding the camera and may not frame the subject until the moment before the exposure begins. The camera movement will likely be several orders of magnitude (~4 to 7) larger than the drift that you want to measure. A low pass filter will tell you nothing at all.
At a certain point we can just start using guide stars [0].
> The first isn’t a good solution for many reasons. Don’t have GPS signal? Shooting next to a magnet? Your system won’t work.
These seem trivial to work around. Just store the last known position and use that. It's rare that you'll be without a GPS signal or beside a magnet, and you certainly won't be traveling long distances in those conditions. And since when do magnets block GPS signals?
It’s not that a magnet blocks GPS signals, but it does affect the compass in the context of using 6 of the 9 degrees of freedom in the first proposed solution: “Use the camera’s GPS, accelerometer, and compass to calculate exactly where it is pointed and its latitude. ” (This solution should also do sensor fusion with the gyroscope, not just accelerometer and compass for orientation from a 9DoF system.)
Version 2 sounds to me like the probable reason cameras like the OM-1 Mark II can go over 8 stops. Yes, it is probably not a simple task to measure the Earth's drift with the gyroscopes, but there is one thing that might help: the frequency of that drift is exactly known - it is the speed of the Earth's rotation. So it should be possible to tune a very narrow filter to that frequency and only analyze the gyroscope signal for that frequency. With that one could at least partially compensate for the drift.
("Based on CIPA standards; when using the telephoto end of the NIKKOR Z 24-120mm f/4 S" - for clarity, that lens does not have optical VR in the lens itself, so this is all based on in-body stabilization.)
The issue is rotation of the sky about the line of sight axis. Whether the exposures are short or long, over time the sky will rotate more than what an in-camera system can compensate for (the amount of rotation depends on location/time/direction). Over these timescales a rotator that can perform larger movements is needed. This can be provided by an equatorial mount or an internal rotator.
I believe that fancy astrophotography tripods already do that rotation for you, right?
I think that for astrophotography, the shutter times are so long that you have to build it into the tripod, instead of relying on the tiny amount of stabilization that can be done in-camera.
Although maybe it would be helpful to cancel out some motor noise of vibrations from the tripod. But probably the existing image stabilization already does this.
This is analogous to astro-photography problems with keeping stars as points rather than as blurred lines in long exposures. If you think about it, if a long exposure at night has a static landscape but moving stars, the IBIS equivalent would have static stars and a moving landscape :)
You should be able to calculate it out by telling the user to press a button and, after this, not rotate the camera away.
Right?
Might just not be practical at all.
On the other hand, shouldn't the earth rotate fast enough to figure this out in a short timeframe while the photographer starts looking through the finder?
Yes, basically Method (2) with a stable measurement window. Just put the camera down on a stable surface and click a button. Let the system wait some ms to allow the click disturbance to pass, then integrate the signal over some fixed time to establish the rotation, then pick up and continue...
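A minimal sketch of that measurement window with synthetic stationary gyro data (the noise level, sample rate, and settle time here are arbitrary assumptions). At rest, the mean body-frame rate is sensor bias plus Earth's rotation projected onto the camera axes, and that vector could be subtracted as long as the camera isn't re-oriented afterwards:

```python
import numpy as np

def stationary_rate_estimate(gyro, dt, settle_s=0.2):
    """Average body-frame gyro rates over a stationary window,
    skipping the first settle_s seconds of button-click disturbance."""
    skip = int(settle_s / dt)
    return gyro[skip:].mean(axis=0)

# Synthetic 10 s capture at 200 Hz: Earth's rate on one axis plus noise.
rng = np.random.default_rng(1)
dt = 0.005
true_rate = np.array([0.0, 0.0, 7.2921e-5])          # rad/s
samples = true_rate + rng.normal(0.0, 2e-4, (2000, 3))
est = stationary_rate_estimate(samples, dt)           # ~[0, 0, 7.3e-5]
```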
I am probably missing something huge, but if the goal is a stable image, why use gyros? Use the image itself to apply the correction factor to the final integration, sort of the same way videos are stabilized.
You can do this two ways: you can take a bunch of images, and align and stack them. Or you can take one motion-blurred image and infer the convolution kernel somehow. The former is undesirable because each frame you take has a fixed amount of "read noise" from the sensor. So you'd change your sensor noise for the worse.
The second way is undesirable because it's really hard. There is a lot of research into this and some of the results are good but some are not.
I still don't quite follow the explanation. The duck and I are on the surface of the same body and are rotating together, maintaining a constant distance... why does Earth rotation need to be corrected for?
Ignore the camera. Instead you have a planet (a circle in flatland), a gyroscope (an arrow that always points in the same direction on the page in flatland), and Mr Square.
  --> [.]
       |
    /----\
    |    |
    \----/
Start off at noon, with Mr Square and the arrow at the top of the planet, the gyroscope to the left of Mr Square pointing at him. Now progress time by 6 hours, by rotating the planet clockwise by 90 degrees. Mr Square and the gyroscope will move with the surface of the planet, resulting in them being on the right side of the circle on the page (the gyroscope above Mr Square on the page). Mr Square's feet will be on the surface of the planet, meaning his rotation matched the planet. However, the gyroscope always points in the same direction on the page. It's now pointing at the sky.
    /----\
    |    |  -->
    \----/--[.]
In conclusion: both Mr Square and the gyroscope move with the surface of the planet - in exactly the same way. However, Mr Square will always be standing (along with everything else on the planet), while the gyroscope always points in the same direction on the page (irrespective of the time of day). A camera using the gyroscope would have to account for that.
We wouldn't have the same issue on a (non-rotating) space station. That's why planetary rotation is blamed.
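The same picture in a few lines of numpy: rotate the planet-fixed "up" vector by 90 degrees clockwise while the gyro direction stays constant on the page:

```python
import numpy as np

def rot(theta):
    """2D rotation matrix (positive theta = counter-clockwise)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

up_noon = np.array([0.0, 1.0])    # Mr Square's "up" at the top of the planet
gyro_dir = np.array([1.0, 0.0])   # gyroscope arrow, fixed on the page

# Six hours later the planet has turned 90 degrees clockwise:
up_6h = rot(-np.pi / 2) @ up_noon   # "up" now points right on the page

# The gyro arrow is unchanged, so relative to Mr Square it went from
# lying along the ground to pointing straight at the sky.
angle_deg = np.degrees(np.arccos(gyro_dir @ up_6h))  # now parallel to "up"
```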
It's about actual gyroscopes (motion sensors), not optical stabilisation. Gyroscopes in cameras are now so good they can pick up Earth's rotation. Perfect for stabilising an image of stars, not so good for stabilising an image of a duck translating over those stars. For that you would need optical stabilisation. In-body stabilisation is inertial, not optical.
There was an escape system in the Soyuz rocket that fired if the rocket tilted too much. It was based on gyroscopes.
Once, a launch was aborted just before liftoff. The rocket stayed on the pad and the cosmonauts were sitting in the spacecraft for some time. Suddenly the abort system fired and pulled the capsule from the rocket. They landed safely on parachutes.
It was discovered that earth had rotated and the gyroscope had detected the tilt of the rocket, so it fired the escape system.
> Your camera, which is using its IBIS system to attempt to keep everything as still as possible, may not realize that you are rotating with your subject and will instead try to zero out any rotation of the camera, including that of the Earth
The problem is that the stabilization system tries to compensate for the rotation of Earth (because it can't tell the difference between the rotation of Earth, which shouldn't be compensated for, and the movement of the holder, which should be).
So it would work if you were taking a photo of a subject not rotating together with the Earth. Like the stars.
Let's do a small thought experiment: assume you have fixed your camera and the duck on a surface. Then, while taking the photo, you rotate the surface. The motion sensor in the camera tries to cancel out this motion, which is suitable for taking a photo of something that's not fixed on the surface - which means it does not work well for the duck that's moving with the camera.
The opposite: Earth rotation is measured by the camera and can't be easily distinguished from camera rotation relative to earth. So image stabilization will also correct for earth rotation, which is undesirable.
You should be able to exceed 6.3 stops if you are pointing north/south rather than east/west, right? Maybe they are just measuring it pointing north/south.
Bullshit. It's ITAR, they don't want parts floating around in the world that can make a dead nuts accurate INS - inertial navigation system, as this enables weapons we don't want in the wild.
You can stabilize out everything and account for the rotation by simply watching the vector of gravity over time.
However, it's not the aperture range that matters. Theoretically, if Earth were not rotating, then 10 stops would still be useful for long-exposure photography. In other words, the stop differences in stabilization are more useful when you think of them in terms of shutter speed, NOT aperture.
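In those terms, each stop doubles the usable handheld shutter time, so for example (the 1/100 s baseline is an arbitrary assumption for illustration):

```python
# Each stop of stabilization doubles the usable handheld shutter time.
stops = 6.3
factor = 2 ** stops              # ~79x longer exposure
handheld_limit_s = 1 / 100       # assumed unstabilized handheld limit
print(f"{factor:.0f}x longer: about {handheld_limit_s * factor:.2f} s handheld")
```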
trhway|1 year ago
How about driving a tank for 6 stops before taking the shot, with its stabilized gun trained on the target? Now the tank gunner has the excuse too.
throw0101c|1 year ago
Perhaps not, but a lot of cameras already have it for geotagging purposes (EXIF), so why not use it:
* https://en.wikipedia.org/wiki/List_of_cameras_which_provide_...
* https://www.digitalcameraworld.com/buying-guides/best-camera...
Rinzler89|1 year ago
Inertial measurement units for aircraft and submarines cost as much as a house in California. Good luck putting those in a phone.
felixhandte|1 year ago
[0] https://en.wikipedia.org/wiki/Guide_star
sib|1 year ago
https://www.nikonusa.com/p/z-f/1761/overview
kybernetyk|1 year ago
Photo hobbyists are snobs :)
gwill|1 year ago
https://explore.omsystem.com/us/en/om-1-mark-ii
contravariant|1 year ago
Which is normally not a problem, but relative to something on the surface of the Earth the stars do move. So I guess you should ask people to stand directly in front of Polaris if at all possible.