top | item 43739882

jpk | 10 months ago

The basis for my understanding is a convo with a Google engineer who was working on self-driving stuff around 10-15 years ago -- not sure exactly when, and things have probably changed since then.

At the time they used just a single roof-mounted lidar unit. I remember him saying the one they were using produced point cloud data on the order of Tbps, and they needed custom hardware to process it. So I guess the point cloud data isn't necessarily harder to process than video, but if the sensor's angular resolution and sample rate are high enough, it's just the volume of data that makes it challenging.
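A quick back-of-envelope calculation shows how the raw rate scales with resolution and sample rate. All the parameters below are illustrative assumptions, not figures from the comment:

```python
# Rough lidar point-cloud data-rate estimate (hypothetical parameters).
def lidar_data_rate_bps(channels, points_per_channel_per_rev,
                        revs_per_sec, bytes_per_point):
    """Raw bit rate of a spinning lidar, given per-channel resolution."""
    points_per_sec = channels * points_per_channel_per_rev * revs_per_sec
    return points_per_sec * bytes_per_point * 8  # bits per second

# Example: a hypothetical 64-channel unit, 2000 points per channel per
# revolution, spinning at 10 Hz, 16 bytes per point (xyz + intensity + time).
rate = lidar_data_rate_bps(channels=64,
                           points_per_channel_per_rev=2000,
                           revs_per_sec=10,
                           bytes_per_point=16)
print(f"{rate / 1e6:.0f} Mbit/s")
```

Under those assumed numbers the raw rate is in the hundreds of Mbit/s; reaching Tbps would take orders of magnitude more angular resolution or sample rate, which is consistent with the comment's point that sheer volume, not per-point complexity, is the bottleneck.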

xbmcuser | 10 months ago

Maybe that was true at the time. 10-15 years later, we have graphics cards doing actual ray tracing, so lidar computing is way less complex now. Anyway, the $200 is for the whole system, not just the sensors, so that would include the signal processing.

fc417fc802 | 10 months ago

Makes sense. Maybe doing self-driving well just requires ridiculously high bandwidth regardless of the data source. Relatedly, the human visual system consumes a surprisingly large share of resources, from metabolic energy to brain real estate.

sitkack | 10 months ago

The whole point of lidar is to massively increase the amount of ranged data you have to work with.