There are a number of applications in development that will rely on in-space processing, such as forest fire prediction and business intelligence from space (estimating crop yields, oil reservoir levels, city planning, and so on). Most of them use ML algorithms to extract valuable insights.
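As a rough illustration of what that on-board processing could look like (everything here is hypothetical: the classifier, tile size, and threshold are stand-ins, not any real mission's pipeline), the idea is to run inference on image tiles in orbit and downlink only the detections rather than the raw frames:

```python
# Hypothetical sketch: on-board ML inference that flags tiles of interest
# so only detections (not raw imagery) need to be downlinked.

def detect_fires(tiles, classifier, threshold=0.9):
    """Return (tile_id, score) pairs for tiles the classifier flags."""
    detections = []
    for tile_id, tile in enumerate(tiles):
        score = classifier(tile)          # e.g. a small on-board CNN
        if score >= threshold:
            detections.append((tile_id, score))
    return detections

# Toy stand-in for a trained model: flags "hot" tiles (mean pixel > 200).
def toy_classifier(tile):
    return 1.0 if sum(tile) / len(tile) > 200 else 0.0

tiles = [[10] * 64, [250] * 64, [30] * 64]   # three fake 64-pixel tiles
print(detect_fires(tiles, toy_classifier))   # only tile 1 is flagged
```

The payoff, if it works, is that a few (tile_id, score) tuples are orders of magnitude smaller than the imagery they summarize.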
I remain skeptical as to the broad applicability of this approach. Some amount of preprocessing on the space segment is valuable for certain mission types, but I don't know how generalizable it is.
By doing all your processing on the space segment you have eliminated the possibility for analysts to take the level zero / level one products and reprocess them manually, either for quality assurance purposes or to develop enhanced or entirely new capabilities with the data.
The other main problem I have with satellite-as-a-service type approaches is that they require building generic spacecraft hardware by necessity, which means you'll never be optimized for a particular mission type. When you build a mission from scratch, you get to carefully specify sensor parameters to achieve your remote sensing objective. Not so much when you're trying to build a generic bird that does everything. For most applications that benefit from generic data, what's the advantage over, say, downloading data from Copernicus (which already has pole-to-pole coverage at moderate resolution and revisit) or tasking a DigitalGlobe satellite to do the work?
I can think of some edge cases where realtime calls might be valuable (eg: dynamic re-tasking based on realtime image analysis, especially if you have a wide-view forward squinted sensor and a higher resolution nadir sensor), but I really don't see the broad applicability.
It's true that communications is definitely a bottleneck in high-resolution, wide-coverage missions, but there are intermediate approaches worth trying before going all the way to doing the entirety of the processing, including decision-making, on the space segment. We have a lot of room to grow in the comms space, such as moving EO missions to Ka-band (and above) and free-space laser links for missions without stringent data latency requirements. Sure, it's a crowded world if you're doing X-band from an isoflux antenna, but there are other architectural options to improve throughput. We can also look at doing selective preprocessing on the space segment (ie: up to Level 1 products), which may allow for better data compression.
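To sketch the selective-preprocessing idea: radiometrically calibrate Level 0 digital numbers into Level 1 radiances on orbit, then compress before downlink. The gain/offset coefficients and the test frame below are made up, and zlib stands in for whatever compressor a real mission would actually fly:

```python
# Toy sketch of "preprocess to Level 1, then compress" on the space segment.
# GAIN and OFFSET are illustrative, not coefficients from any real sensor.
import zlib

GAIN, OFFSET = 0.04, 1.5   # hypothetical calibration coefficients

def level0_to_level1(dns):
    """Radiometric calibration: radiance = gain * DN + offset."""
    return [GAIN * dn + OFFSET for dn in dns]

raw = bytes(range(256)) * 64          # fake 16 KiB Level 0 frame
radiance = level0_to_level1(raw)      # Level 1 product
print(f"first radiances: {radiance[:2]}")

compressed = zlib.compress(raw, 9)    # lossless compression before downlink
print(f"raw: {len(raw)} B, compressed: {len(compressed)} B")
```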
I'm not saying there isn't room for space segment processing and dynamic tasking - I saw a really interesting mission proposal a while back that used it extensively. But I don't see the business case for it on generic satellites outside of particularly niche cases.
Is bandwidth from outer space so expensive that you save money by moving compute to the edge and only pushing results down to Earth instead of the entire data set?
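For a sense of scale, a back-of-envelope sketch of that tradeoff (all numbers below are invented assumptions, not figures from any real mission):

```python
# Back-of-envelope: assumed raw sensor output vs. assumed downlink capacity.
# Every constant here is an illustrative guess.

raw_gbits_per_day = 2_000          # assumed raw sensor output: 2 Tbit/day
downlink_mbps = 300                # assumed X-band downlink rate, Mbit/s
passes_per_day = 8                 # assumed ground-station contacts per day
pass_seconds = 8 * 60              # assumed usable contact time per pass

capacity_gbits = downlink_mbps * passes_per_day * pass_seconds / 1_000
backlog_gbits = raw_gbits_per_day - capacity_gbits

print(f"downlink capacity: {capacity_gbits:.0f} Gbit/day")   # 1152
print(f"unsent raw data:   {backlog_gbits:.0f} Gbit/day")    # 848
```

Under these assumptions nearly half the raw take never makes it down, which is the gap that either more comms capacity or on-board processing has to close.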
d_silin|5 years ago
Sanzig|5 years ago
unknown|5 years ago
[deleted]
lozaning|5 years ago