How the LIDAR tech GM just bought probably works (arstechnica.com)
109 points by deepnotderp on Oct 24, 2017 | 71 comments



Interesting. Frequency-modulated continuous-wave LIDAR is easy to do as a one-point device. Many such devices have been built. But they're usually short range, such as the discontinued SwissRanger, and don't reject ambient light as effectively as pulse systems.

Being both eye-safe and sunlight-tolerant is hard. Eye-safe is easier if you can increase the diameter of the outgoing beam. Eye safety is measured based on beam energy through a 1/4" hole (an eye pupil), and if the outgoing beam is made wider (say an inch) the energy per unit area drops. But the optics become bigger. Flash LIDAR units emit a spreading beam, and if you can keep people from getting close to the emitter and staring into it, it's not a big problem. (It's a distance measuring device, so if it detects something at range < 1 foot, it must cut the power way down.)
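
Back-of-the-envelope, here's how the beam-widening math works out (a sketch with assumed numbers, not figures from any safety standard):

    # Fraction of a collimated beam's power that can enter a pupil-sized
    # aperture. Illustrative only -- real limits come from standards like
    # IEC 60825, which also weight wavelength and exposure time.
    pupil_d = 0.25 * 25.4e-3          # 1/4" pupil, in metres
    beam_power = 0.1                  # W, assumed laser output

    def power_into_pupil(beam_d):
        if beam_d <= pupil_d:
            return beam_power                          # whole beam fits in the eye
        return beam_power * (pupil_d / beam_d) ** 2    # area ratio for a wider beam

    for beam_d in (2e-3, 6.35e-3, 25.4e-3):            # 2 mm, 1/4", 1"
        mw = power_into_pupil(beam_d) * 1e3
        print(f"beam {beam_d * 1e3:5.1f} mm -> {mw:6.2f} mW into the pupil")

With a 1" beam, only (0.25/1)^2 = ~6% of the power can enter the eye, so the same average power is roughly 16x safer than a pupil-sized beam.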

Sunlight tolerance is achieved with a few tricks. Narrow-band interference filters cut out everything but the color of the beam being used. A pulse LIDAR can outshine the sun for a nanosecond. Continuous-wave systems could in theory operate below the noise threshold, but the detector must not saturate.
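
A rough sense of scale for the "outshine the sun" trick (all numbers are loose assumptions for illustration, not specs of any real unit):

    # Compare in-band sunlight scattered from a target patch against a laser
    # pulse hitting the same patch. Figures are order-of-magnitude guesses.
    solar_irradiance = 1.0     # W/m^2/nm near 900 nm at ground level (rough)
    filter_bw = 1.0            # nm passband of the interference filter
    patch_area = 0.01          # m^2 of target seen by one detector pixel
    albedo = 0.5               # diffuse target reflectivity

    background_w = solar_irradiance * filter_bw * patch_area * albedo
    pulse_peak_w = 1000.0      # W peak, a plausible pulsed-laser figure

    print(f"in-band sunlight off the patch: {background_w * 1e3:.0f} mW")
    print(f"laser pulse on the same patch:  {pulse_peak_w:.0f} W")
    # For the ~1 ns the pulse lasts, the return is dominated by the laser.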

Anyway, there are lots of technologies that can work. Continental bought Advanced Scientific Concepts' technology, which is known to work fine; it just cost too much when each unit was built by PhDs in Santa Barbara. Continental is a huge auto parts maker; making a million of something cheaply is what they do.


The Mesa Imaging SwissRanger is an amplitude modulated system, not frequency modulated. It's also not really LIDAR, as it uses LED illumination, but AMCW LIDAR is more or less the same principle. A better comparison might be laser tape measures - these often use phase modulated LIDAR rather than direct ToF. You can also buy phase shift scanning systems from people like Leica Geosystems.

In a ToF camera (at least some of them - see lock-in pixels), each pixel is sampled four times per cycle to detect the phase offset from the outgoing illumination. The SwissRanger was one of the first time-of-flight cameras. The reason it suffers from short range is phase ambiguity - the lasers are modulated at around 30 MHz, which gives a wavelength of 10 m or so. The ambiguity distance is half this (5 m). LIDAR systems historically got round this by using multiple modulation frequencies for different distance scales.
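
The four-samples-per-cycle scheme is simple enough to sketch (this is the textbook "four-bucket" demodulation, not Mesa's exact pipeline):

    import math

    C = 3e8
    F_MOD = 30e6                       # ~30 MHz modulation, as above
    AMB = C / (2 * F_MOD)              # ambiguity distance: 5 m

    def range_from_buckets(a0, a1, a2, a3):
        """Recover phase from samples taken at 0/90/180/270 degrees."""
        phase = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
        return phase / (2 * math.pi) * AMB

    # Simulate a return from 3.2 m; a target at 8.2 m gives identical buckets.
    phi = 2 * math.pi * 3.2 / AMB
    buckets = [1 + 0.5 * math.cos(phi - k * math.pi / 2) for k in range(4)]
    print(range_from_buckets(*buckets))    # -> 3.2

The aliasing is visible directly: any distance d and d + 5 m produce the same four samples, which is exactly why multiple modulation frequencies are used.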

This tech is now everywhere thanks to Microsoft buying Canesta.


Leica Geosystems used to sell phase-based scanners. These units were actually rebadged Zoller+Frohlich scanners. But the partnership ended a few years ago when Leica introduced the Pxx series. Today the company only makes time-of-flight scanners.

source: I work at Leica Geosystems.


You can get some pretty impressive dynamic range out of the new silicon photomultiplier detectors that companies like SensL, Hamamatsu, and Ketek have been putting out.

Despite the name, they aren't related to old-school photomultipliers - they're basically large arrays of very tiny Geiger-mode avalanche photodiodes on a single chip. This solves the low dynamic range issues from traditional large-area avalanche photodiodes. Traditional APDs have a long recovery time, whereas an array of small APDs has both a shorter recovery time per cell as well as a greater overall dynamic range due to the ability of multiple cells to be struck at once by incident photons.
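
The graceful-saturation behaviour falls out of a standard idealized model (a textbook approximation, not any vendor's datasheet):

    import math

    def fired_cells(n_photons, pde=0.3, m_cells=10_000):
        """Expected microcells fired: each cell triggers at most once per window."""
        return m_cells * (1 - math.exp(-n_photons * pde / m_cells))

    for n in (10, 1_000, 100_000, 10_000_000):
        print(f"{n:>10} photons -> {fired_cells(n):7.0f} cells fired")
    # Output is ~linear while n*PDE << M, then compresses smoothly instead of
    # blinding the detector the way a single large APD does.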

SensL actually has a bunch of videos of their products in use for a LIDAR application: http://sensl.com/applications/lidar1/


Advanced Scientific Concepts built some actual photomultiplier LIDAR devices for DoD. Photomultipliers have very low latency - picoseconds. One photon in, one electron out. So they built an InGaAs sensor as the target of a photomultiplier tube. Sensitivity is very good, but the technology is expensive.


There's a promoted comment on the original post that addresses the question of interference between lidars from multiple cars on the same street (short version: coded pulses so each unit can recognise its own reflections). But your comment makes me wonder about another issue: what about eye safety in a street crowded with autonomous cars? Each car's lidar may be eye safe on its own, but how safe is a busy street that's being continually scanned by dozens of cars at once?


I think more graceful failure modes would be better as well. One thing that AI systems don't seem to do a lot of nowadays is quantify their own uncertainty. Voice assistants always reply with the same degree of sureness (despite having an internal confidence score), leading to some silly situations that wouldn't occur with a human.

It's not that better and better solutions are bad, but there are probably a lot more gains in mitigating failure cases. I think our brains may even have dedicated systems for potential-failure detection, leading to timidness. Failure is currently defined mostly as the inability to get the right answer; it is negatively defined. I wonder if there's any merit in positively defining failure.

Another way to think of it is to ask whether timidness/shyness, typically "negative" traits (especially in American culture), are actually features rather than bugs.


Another way for a CW lidar to work is to use a diffractive-optics trick of some kind, a smart coding scheme, and a lot of computing power to do a lot of Fourier transforms.

Continuous ToF calculation from a specially coded continuous signal is already used in Chinese military rangefinders (a coded continuous signal is hard to spot, unlike repetitive pulses).


> Sunlight tolerance is achieved with a few tricks. Narrow-band interference filters cut out everything but the color of the beam being used.

What about a beam collimator at the detector? It would reject a lot of noise but would cost some power.


Continental bought a 3D flash LIDAR company. It seems promising for detecting objects at 200 m and at much lower cost compared to Velodyne LIDARs.


Isn't flash lidar range-limited unless it's at a very high power output?


The peak power output is high, but only for a nanosecond or less.
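
That's the whole trick of pulsed operation: the duty cycle keeps average power tiny (illustrative numbers):

    peak_w, pulse_s, rep_hz = 1000.0, 1e-9, 100_000    # assumed figures
    avg_w = peak_w * pulse_s * rep_hz                  # duty cycle = 1e-4
    print(f"{peak_w:.0f} W peak -> {avg_w * 1e3:.0f} mW average")   # 100 mW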


Wondering why no one has thought of using an acousto-optic modulator? https://en.wikipedia.org/wiki/Acousto-optic_modulator

It's solid state, has a very fast response time, super cheap to manufacture, and can steer a beam with high precision. And they are very efficient, achieving a >90% first-order diffraction efficiency. The AOM material can be solid and thermally insulating (such as glass), so you can get very high power.

In the paper cited, they produced a device that had an effective power of 4 mW with active cooling. How are they going to scale this up, taking into consideration that first-order diffraction depends on the angle of incidence of the incoming beam? The steering mechanism is basically a tunable diffraction grating, which means the active area is going to be tiny.

Most high-power diffraction gratings work by expanding the surface area of the grating; it would still be very expensive to create a large tunable waveguide/grating (think how much it costs to make a CPU die), making the whole cost savings of solid state a moot point. You could make an array of the devices, but you're going to need a ton of them to get the necessary power and a reasonable deflection-angle range.


From the Wikipedia page:

> Consequently, the deflection is typically limited to tens of milliradians.

This is the big drawback with AOMs. They're very precise, but the angle through which they steer beams is small. You would need hundreds of scanners to scan a full circle under the best conditions.
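
The coverage math is unforgiving (assuming a generous 40 mrad steering range):

    import math

    steer_range = 0.04                              # rad, "tens of milliradians"
    units = math.ceil(2 * math.pi / steer_range)    # full 360-degree coverage
    print(units)                                    # ~158 scanners
    # Even before overlap, mounting, and drive electronics, you're looking at
    # well over a hundred units.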

AOMs are interesting though in that they provide a way to modulate the frequency of the light they deflect. This is how I've seen them used before: as a very precise way to introduce small frequency shifts into a laser beam for physics research.


I have a genuine question for all the LIDAR experts here: all the optical phased array research I've seen, such as the MIT, UCB, and DARPA "SWEEPER" work, only has a range of something like 2 meters, and 10 meters after lots of work. All the research papers I've seen also mention that accurate, high-power phase shifters are a challenge, as is getting the optical phased array to a high enough output power. On the other hand, Quanergy seems to claim that their standard optical phased array on silicon is getting them 200 meter range and really good specs. Do they have something that academia doesn't? Also, Strobe's advisor's thesis says that their LIDAR only has a range of ~2 meters; isn't that totally unworkable for SDC LIDAR?


The Quanergy system looks like it's doing conventional pulsed time of flight (maybe FMCW, but I don't know). They're operating in a frequency band that lets them put out a lot of power without risking eye safety, which means long range. Pulsed LIDAR is basically limited only by signal to noise, so provided you get some photons back and you know they're 'yours', you can measure long distances.
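
A crude link budget shows why it's all signal-to-noise (every parameter below is an assumption for illustration):

    import math

    def returned_photons(range_m, peak_w=100.0, pulse_s=5e-9,
                         reflectivity=0.2, aperture_m2=5e-4, efficiency=0.5):
        """Photons back from a Lambertian target filling the beam (1/R^2)."""
        photons_out = peak_w * pulse_s / 2.2e-19        # ~905 nm photon energy
        frac = reflectivity * aperture_m2 / (math.pi * range_m ** 2)
        return photons_out * frac * efficiency

    for r in (10, 50, 200):
        print(f"{r:4d} m -> {returned_photons(r):12.0f} photons per pulse")
    # Hundreds of photons still come back at 200 m -- plenty, if you can tell
    # they're yours and the detector isn't swamped by background.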


The DARPA SWEEPER was also pulsed lidar, and it only had a range of 2 meters. And the problem really isn't the regulations for power output at the wavelength, but rather that the optical phased array itself can't put out the necessary power, regulations notwithstanding.


I assume LIDAR is needed since the AI isn't advanced enough to do depth perception? If I can drive with two eyeballs, then I'd think a circular camera array would be plenty. Maybe this is what Tesla plans on using (in combination with RADAR).


That's what Tesla used until they plowed into cars partially blocking a lane. Four times for which there's video. I've posted the links previously.


While LIDAR and stereo cameras (as well as RGBD depth cameras like Kinect) both effectively do the same thing, LIDAR typically has a much longer effective range while keeping high resolution.

To give you an example, this[1] commercially available sensor head can give great resolution stereo depth at around 10m but the included LIDAR unit is good out to around 30m.

LIDAR also has the advantage of operating on a different portion of the EM spectrum. It can sometimes be better behaved in situations where there might be interference on the visible-light wavelengths but not on the LIDAR wavelengths, which are typically IR (e.g. extremely bright sunny days).

I think that there's a place for both in vehicles as a sort of redundancy. While they both mostly do the same things, they do them in different ways, and if LIDAR can be made cost effective then having both is a huge gain.

[1]: https://carnegierobotics.com/multisense-sl/


That sensor you linked looks to have its two cameras about 10 cm apart. A car is easily 1.5 m wide, giving a very cheap 15x increase in range for far objects and beating the LIDAR quote by a ton (and of course you can have multiple camera pairs for different distance ranges).
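
The scaling is easy to check: stereo depth error grows as Z^2 / (f * B), so a longer baseline B buys range linearly (illustrative camera parameters below):

    def depth_error(z_m, baseline_m, focal_px=1400.0, disp_err_px=0.25):
        """1-sigma depth error for a stereo pair, standard approximation."""
        return z_m ** 2 * disp_err_px / (focal_px * baseline_m)

    for b in (0.10, 1.5):                       # 10 cm head vs. car-width mount
        for z in (10, 30, 100):
            print(f"B={b:4.2f} m, Z={z:3d} m -> +/-{depth_error(z, b):5.2f} m")
    # The 15x baseline cuts the error 15x -- but only for points both cameras
    # see, with good calibration, texture, and no occlusion.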

I'm with the grandparent: I genuinely don't understand the obsession with LIDAR in this space. It's complicated and fiddly, and seems to be competing with an "obvious" solution involving $3 camera parts.


More cameras still don't fix the interaction with ambient light, and LIDAR units like the ones produced by Velodyne or Ibeo have ranges out to 200 m.[1]

Again, I think they both have their uses. If LIDAR can be made not so complicated and fiddly, I think it brings a lot to the table.

[1]: https://autonomoustuff.com/product/ibeo-lux-standard/


Sure, it could. But again its competition is cheap camera hardware that can be had for a few bucks and that outperforms the human eyeballs that we know are safe enough to drive cars on real roads.

I don't doubt that LIDAR can work with some development. I'm just shocked that it seems to be the default position in the industry and want someone to explain this to me in a way that makes sense.


This is also LIDAR.


Don’t forget you can also drive reasonably well with one eye closed. The stereo stuff isn’t buying you much for driving.


Humans tend to move their heads, which is why one-eyed depth perception still works.


Humans can also drive RC cars, airplanes, quadcopters, etc using a single unmovable camera.


Also video games


We don't want something that merely drives as well as a person. We want something that does better. To that end, if it is easier to build sensors that don't have to train so hard on just 2d images, why wouldn't we?


something that merely drives as well as a person

would be _awesome_ today, depending on which person we're comparing it to anyways... :)

And if in 15 years it could do much better, great.


If you're moving through easy to distinguish objects then optical flow is usually enough to provide depth and avoid hitting things. But not always. Your eye+retina are still a better camera and processing system than anything we can produce artificially at tasks like this. I'd bet a machine could do as well in 5 years or so but not now.


Forget about cars; this tech is applicable well beyond them. They could potentially scan colors by changing the frequency, discern materials, and even use different resolutions, which could make scans much faster. I have no idea why nobody did this long ago.


Don't you always want a real world measurement of distance?

Even if AI gets it (mostly) right, getting a second input that verifies your path is clear seems like a good idea.


Not for 20% higher vehicle costs.


By analogy, why don't we make planes that are flappy winged ornithopters? It works for birds.


One of the advantages for solid-state LIDAR has to be an increase in reliability.

Nobody wants to buy a car which requires a new LIDAR to be installed after 100,000 miles for the sum of $7,000.

Hopefully once production starts and yield rates increase, the unit costs can shrink enough to become feasible for integration into lower-end products like cell phones or laptops. I think there are a lot of cool applications for LIDAR which have been blocked by the current expense.


>Nobody wants to buy a car which requires a new LIDAR to be installed after 100,000 miles for the sum of $7,000.

How much does a chauffeur cost for 100,000 miles?


It will probably lead to the advent of the "car as a service" sales model. Although the pushback from traditional dealers will be huge.


Traditional dealers push back against anything and everything, so, that can only be expected.


For cities yes. For suburbs and rural areas, not as much.


That brings up an interesting thought. Self-driving cars are going to have to come with automated tests that must be passed for the car to even start. You won't be able to make a choice about whether to drive around with a slightly broken car part like that. Some parts of the system will have to not only work, but work near perfectly or require service.


> You won't be able to make a choice about whether to drive around with a slightly broken car part like that.

As I mentioned in another post: cars already have these sensors in the form of short range (ultrasound?) sensors for parking. These are regularly covered with ice if you live in a winter climate, and in that situation the parking sensors will continuously beep when you have the car in reverse. A human driver just ignores that and backs out anyway, because you look with your eyes and see nothing, so you assume the warning is false and due to ice. The device isn't even broken, it's just temporarily not able to work until the ice has melted, which is a few hours away when the sun comes up.

An autonomous car could just say "I don't trust the readings from the sensors, so you are on your own, kid" and I'd drive. And that would be sort of fine (although if it happens several dozen mornings a year, it gets annoying).

But an autonomous taxi can't do that. This is why I think assistant drivers such as Tesla's Autopilot, which can just defer driving to a human, will be around for a very long time (decades) before we have fully autonomous cars that can drive without a backup driver.


People will not be buying for a long time. Use will come through ride sharing services.


That's irrelevant. 7 cents per mile in LIDAR costs is expensive, regardless of how the vehicle is owned and its use is charged.


7 cents per mile on LIDAR would be reasonable in a truly driver-less system. Uber costs like $2.8/mile and vehicle cost estimates are usually around $0.5/mile, leaving a healthy overhead for a driver-less system to live in.
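
The arithmetic, using the figures from this thread (all of them rough):

    lidar_per_mile = 7000 / 100_000            # $7k unit over 100k miles = $0.07
    revenue, vehicle = 2.8, 0.5                # $/mile, rough Uber-style figures
    margin = revenue - vehicle - lidar_per_mile
    print(f"LIDAR ${lidar_per_mile:.2f}/mi, margin left ${margin:.2f}/mi")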


And manufacturing/shipping/installation costs.


Out of complete curiosity, and knowing nothing about these technologies:

How do these systems avoid interference with other cars? Say ten identical cars, with the exact same model of LIDAR, are on the same street or crossing. Can I expect some noise to be received by the sensors?


Yep. Every sensor will receive the output from the other emitters. This problem is actually pretty similar to the problem of sharing the wireless spectrum - how do all the cell phones in one area work at the same time? There are a few standard approaches to solving it:

- Frequency Division: each detector uses a different frequency/wavelength. This approach is very simple to implement, but the usable range of wavelengths for LIDAR is fairly narrow. So we'd run out of choices pretty quickly.

- Time Division: each detector is allocated a different operating time slot, so that only one is active at a time. For cell phones, this is relatively easy to implement because they all connect to a central system that can coordinate the timing. But for cars, this would be more difficult since there's no central system that links the detectors on different cars together (although they could still communicate with each other through other means).

- Code Division: each detector's output is pulsed in a unique pattern so that the reflected signals will return that same pattern, then the processor can reject any patterns that it didn't send out. This approach is much more complex, but doesn't suffer from many of the drawbacks of the other solutions.

Code division is by far the most widely used approach here, but frequency and time division play a small role as well, since different manufacturers/detectors use different wavelengths and not all detectors are being operated at exactly the same time (the time needed to send and receive a single signal is extremely short).
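
A minimal sketch of the code-division idea (the CDMA principle applied to pulse trains; not any vendor's actual scheme):

    import numpy as np

    rng = np.random.default_rng(0)
    my_code = rng.integers(0, 2, 64) * 2 - 1     # +/-1 chips, unique per unit
    other = rng.integers(0, 2, 64) * 2 - 1       # another car's code

    received = np.zeros(256)
    received[40:40 + 64] += my_code              # my echo, delayed 40 samples
    received[90:90 + 64] += other                # interfering echo
    received += rng.normal(0, 0.3, 256)          # ambient noise

    corr = np.correlate(received, my_code, mode="valid")
    print(int(np.argmax(corr)))                  # -> 40; range = c * delay / 2
    # The other car's code correlates weakly, so its echo never forms a peak.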


thanks!


I genuinely don't understand the obsession with LIDARs in the autonomous vehicle community; a millimetre-wave radar is late-seventies tech, does the job many times better, for less money, and can be made by an electronics engineering undergrad from Radio Shack parts.

Commies had them in such abundance that they put millimetre-wave imagers (and that was in the seventies) on things as cheap as vision aids for tank drivers, field guns, single-shot ATGMs, and even small arms.


I don't see how you can say millimetre wave radar "does the job many times better".

State-of-the-art 79 GHz automotive radar: 0.2 m range uncertainty, 5 degree azimuth angular uncertainty, dozens of points per second.

Spinning lidar: <0.05 m range uncertainty, <0.1 degree azimuth angular uncertainty, a million points per second.

Radar definitely has its uses as it can see through weather conditions and tell velocity, but lidar is the vastly superior sensor otherwise.
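
To make those numbers concrete, here's what the angular uncertainty means laterally at 100 m (simple geometry):

    import math

    for name, deg in (("radar, 5 deg", 5.0), ("lidar, 0.1 deg", 0.1)):
        lateral = 100 * math.tan(math.radians(deg))
        print(f"{name}: +/-{lateral:.2f} m laterally at 100 m")
    # ~8.7 m vs ~0.17 m: with lane widths around 3.7 m, the radar can't tell
    # which lane a distant car is in; the lidar can.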


Looks surprisingly close to the specs of a Panasonic-made radar.

That is far from what is currently considered advanced for mm-wave radar in the 93-95 GHz range: 0.5 degrees.

Google "GPU Acceleration of SAR/ISAR Imaging Algorithms" and see how much you can get with with regular high bandwidth 10ghz radar and computing power in tenths of gigaflops.


SAR generally relies on the assumption that your target is not changing. I question how well it would work in a complicated environment like a road scene. In my opinion, the single-shot nature of LIDAR is more robust. But it's possible that SAR techniques have advanced since my work in the area 3 years ago.


What's the difference in e.g. surface ice handling between optical and radar?

I worry that these modern cars will basically require fighter-jet storage in order to work. I need something that works in a snowstorm when the car is covered in ice.

My "parking sensors" (the cheap little distance sensing thingys mounted around most new cars) are screaming every time there is a little surface ice over them. I work around this by simply ignoring it. But if a car would require these sensors to function, which would be the case if it was autonomous, I'd be starting my morning drive by somehow getting the ice off them.


Normally with radar you get very precise range and radial velocity measurements but very sloppy phi and theta.


Resolution?


The theoretical resolution is lower than that of lidar, but in practice a well-made mm-wave radar currently has superior resolution to commercially available lidars.


Can you point to any reasonably priced mm wave radar that has greater than lidar resolution? Genuine question.


I researched the topic further; I would say that commercial offerings for automotive use with <1 degree resolution are not yet there, but that can be achieved.

Generally, to increase resolution you can do the following: increase the antenna size, increase the number of antennas, or increase the amount of processing power thrown at it.

Right now, existing offerings can be improved in all 3 ways with ease.
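
For scale, the diffraction limit (beamwidth roughly lambda/D, a standard approximation) shows what aperture buys you in the automotive bands:

    import math

    C = 3e8
    for f_ghz in (79, 94):
        lam = C / (f_ghz * 1e9)
        d = lam / math.radians(1.0)          # aperture for ~1 degree beamwidth
        print(f"{f_ghz} GHz: lambda = {lam * 1e3:.1f} mm, "
              f"~{d * 100:.0f} cm aperture for 1 degree")
    # ~22 cm at 79 GHz. Feasible on a car, but every halving of beamwidth
    # doubles the aperture (or the element count of the array).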


I mean, sure, but there are difficulties with doing so, no? Even the military has only relatively recently started using AESA radars.


Yeah, some links to actual products would be nice.


Skimming those papers and reading about the silicon tunable lasers was a bit mind-blowing.

I studied optics a bunch in college but never managed to work it into my electrical engineering job. Might be lame, but I had no clue this kind of integration was possible yet; it's certainly quite exciting. Probably the coolest thing I've read about in quite some time.


OK, I have a question, but don't shit on me if you think it's stupid:

Wouldn't any of these smart cars be able to know the physical volumes of any number of cars, including themselves, as well as communicate with each other and share their velocity, direction, volume, and intentions, so as to have a hive mind of the moving bodies?

What if you also just had a beacon which reported this information to other cars in a short-range network where the driver is human, so the smart cars all know where the human-operated cars are?

Then, in addition, you collect all the point data from lidar, subtract the known data points ("a Prius with this shape, mass, and volume, moving at that speed"), and over time you have built out a complete point cloud of the static, non-variable topology?

Rather than spending $75,000 on one lidar periscope, spend $75,000 on developing a beacon such that all cars are telling each other where they are, their mass/volume, and their speed?


(Assuming I'm understanding you correctly,) if the issue were only other vehicles, this might be feasible. However, the world includes much more than just the other vehicles on the road, and the road and its environs aren't static.

I can imagine something similar to what you're proposing being a part of the overall system, but I suspect there will always be the need for something vehicle-based that "sees" the environment.


Yeah I meant in addition to, not instead of.

And I was saying that these beacons should be just a part of the car and super cheap. The beacon just says "hey, I'm a Prius and this is my speed, direction, and location."

But you'd have to make them completely anon - we already have too much activity tracking.

So the devices should cycle through random IDs while broadcasting the specs of the vehicle, changing the ID frequently, so a smart car would only know that it sees various cars, not that "Jim is just ahead on the left."
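
Something like this, say (a purely hypothetical message format; real V2V standards like DSRC/BSM handle privacy differently):

    import json, os, time

    class Beacon:
        def __init__(self, model, length_m, width_m):
            self.static = {"model": model, "len": length_m, "wid": width_m}
            self._rotate()

        def _rotate(self):
            self.eph_id = os.urandom(8).hex()      # fresh random ID...
            self.rotate_at = time.time() + 60      # ...every minute

        def broadcast(self, lat, lon, speed_mps, heading_deg):
            if time.time() >= self.rotate_at:
                self._rotate()                     # defeats long-term tracking
            return json.dumps({"id": self.eph_id, **self.static,
                               "lat": lat, "lon": lon,
                               "v": speed_mps, "hdg": heading_deg})

    b = Beacon("prius", 4.6, 1.8)
    print(b.broadcast(37.77, -122.42, 13.4, 90.0))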


Imagine you are potentially the first autonomous car maker to market. You need to build a car that assumes no other cars can talk to it. The idea you have is good, but is probably a solution that gets addressed much farther down the road.


I wonder who is going to buy Luminar[0].

[0]: http://www.businessinsider.com/peter-thiel-backed-austin-rus...


I see a number of beams going forward. I wonder, however, what this tech can do to avoid side collisions; a number of my friends have been involved in those, and they are particularly horrifying: a perp running a red light and ramming you from the side. Geometrically it can be an interesting problem to solve too.


So "no moving parts" by leveraging optical phased arrays and prisms.

I wonder how much their patents restrict others from going the same general route.

It does seem like the only obvious truly "no moving parts" solution. MEMS still has moving parts; they are just tiny.


How would this be better than / supplement 0 lux-capable machine vision? Rangefinding? Doppler shift relative velocity?

If the benefits aren't at least 3x better, it might not be marketable enough to justify the added complexities and costs.


In other words: Swept Source Fourier Domain OCT



