How they'll manage to deal with interference once the roads are full of these self-driving cars, and each and every one of them is shooting out laser pulses all over the place...
The same goes for radar-based systems...
A closer look at LIDAR sensors – a key component in autonomous vehicles – reveals the lucrative and competitive nature of the self-driving car industry. Essentially, if you want to enter this space, and take on the likes of Waymo and BMW and Ford, you'll need deep pockets – tens of thousands of dollars per test vehicle – and …
Bats solve the interference problem by recognizing their own sonar chirps. With radar etc. you modulate it and autocorrelate the returns.
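That modulate-and-correlate trick is easy to sketch (a toy illustration using numpy's correlate; real waveform design is far more involved, and the codes, amplitudes and delays here are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "vehicle" modulates its pulse with its own pseudo-random phase code.
own_code = rng.choice([-1.0, 1.0], size=128)
other_code = rng.choice([-1.0, 1.0], size=128)

# Simulated receive window: our own (weaker) echo at sample 300,
# a stronger pulse from another vehicle at sample 150, plus noise.
rx = np.zeros(1024)
rx[300:300 + 128] += 0.5 * own_code
rx[150:150 + 128] += 1.0 * other_code
rx += 0.1 * rng.standard_normal(1024)

# Matched filter: correlate the receive window against our own code.
# Our echo produces a sharp peak; the other code barely correlates.
corr = np.correlate(rx, own_code, mode="valid")
echo_delay = int(np.argmax(np.abs(corr)))
print(echo_delay)
```

The interferer's pulse is twice as strong as our echo, but because its code is uncorrelated with ours it contributes only low-level sidelobes to the matched-filter output.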
Alternatively you don't use Lidar. Intel splurged $15 billion on Mobileye, which relies on simple cameras. They return much less data, but you have to process it harder to infer distance and speed, and more importantly relative speed (which is comparatively easy). The precedent that it "should" work is that it is how us humans do it.
"The precedent that it 'should' work is that it is how us humans do it."
The way we humans (and bats) do it is by having massively parallel processing available. You're right to point out that it's not only distance but also speed that matters. Both of those give timing and that matters even more. You don't mind occupying the same piece of road that another car will occupy but you really don't want to occupy it at the same time.
"The way we humans (and bats) do it is by having massively parallel processing available."
Which, by the way, is very probably why it was *Intel* that splashed out on Mobileye - imagine the profits if every car made needs a smallish supercomputer rather than just something powerful enough to run the entertainment system and backup camera.
To amplify (ha ha) on this, bats avoid jamming each other with frequency hopping. Individual bats will use higher or lower sound frequencies to stay 'in the clear.' There are various fish that use alternating electrical currents to detect prey, and they do the same thing -- it's a strategy that's independently evolved at least twice.
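The jamming-avoidance response is simple enough to caricature in a few lines (a toy sketch; the channel numbers and the "nearest free band" rule are illustrative, not how bats or electric fish actually negotiate spectrum):

```python
def pick_clear_channel(preferred, occupied, n_channels=16):
    """Jamming avoidance, bat-style: hop to the nearest unoccupied band."""
    free = [ch for ch in range(n_channels) if ch not in occupied]
    if not free:
        raise RuntimeError("spectrum fully occupied")
    return min(free, key=lambda ch: abs(ch - preferred))

# A 'bat' preferring band 5 finds bands 4-6 occupied by neighbours
# and hops to the nearest clear one:
print(pick_clear_channel(5, {4, 5, 6}))
```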
How fast is that constant c? (The speed of light, and it's only travelling 100m or less.)
So you have your sensor inline with your emitter, and the probability that you will get interference from another vehicle... minimal.
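For anyone reaching for a calculator, the round-trip timing works out like this (back-of-envelope only):

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_time_ns(range_m):
    """Time for a pulse to reach a target and come back, in nanoseconds."""
    return 2.0 * range_m / C * 1e9

print(round_trip_time_ns(100.0))  # ~667 ns for a target 100 m away
print(round_trip_time_ns(1.0))    # ~6.7 ns per metre of range
```

So a full 100 m measurement is over in well under a microsecond, and centimetre-level range resolution needs timing on the order of tens of picoseconds.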
The funny thing.
The photo is of a HERE mapping car. Not self driving.
The Lidar units used by HERE and Google are very precise and can yield a really detailed image. More detail than needed for self-driving cars.
I also wonder how jammable / spoofable they are?
LIDAR's spiffy and all, but I'd really hope there are other sensors involved, and then all you have to do is decide which one is lying. In real time, with the risk of expensive consequences. Easy peasy.
(Old cartoons with two guys carrying huge sheets of plate glass also spring to mind)
"(Old cartoons with two guys carrying huge sheets of plate glass also spring to mind)"
I suppose that depends on whether LIDAR can penetrate glass or not. The answer seems to depend on the frequency. That could be quite confusing for the computer. It might (or might not) see the pane of glass, just as it might (or might not) see objects inside glass-fronted buildings. Hmmm.
Reflections add all sorts of fun. Things appear to be travelling at twice the speed and angle. I'm guessing that LiDAR tends to stop at the first return signal to reduce this, but that'll also make it more susceptible to other interference.
Dammit, this stuff is hard to get right. Still, good enough will be fine, eh?
"I also wonder how jammable / spoofable they are?"
Depends on how they encode their identity. If each system uses a static encoding pattern then I'd say it was pretty simple.
If they use a dynamic encoding that changes unpredictably over time (sort of like PFS for LIDAR) then it would be quite a bit harder. But, given the propensity for only securing against known threats, I would imagine even that encoding would be broken fairly quickly.
And I wonder about the safety of LIDAR for humans present on the road who happen to glance in the direction of this setup. Even the laser in a CD/DVD player had a warning label. What about the more powerful one that also happens to be outside the human visual range, so victims won't instinctively avoid the exposure?
This is a real concern, and one laser light show operators have had to address. The beam isn't much of a danger as long as it's moving rapidly, since the amount of heating applied to the retina in a brief sweep is minuscule. The problem is if the scanning fails, so that the beam is stationary: anyone looking directly at where it happens to stop could be injured. Rapidly detecting scan failures and shutting off the beam is the obvious solution, but doing it quickly enough for high-power lasers can be difficult at best. I don't know what the emitted power levels are for these systems.
You're right that the problem is worse for lasers outside the human vision range, because our natural instinct to avert our eyes from a bright light doesn't kick in.
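The "detect a stalled scan and shut off the beam" interlock amounts to a dwell-time watchdog. A minimal sketch, with invented names and an invented 1 ms threshold:

```python
import time

class ScanWatchdog:
    """Kill the beam if the scan mirror dwells too long in one spot.
    (Class name and the 1 ms default threshold are illustrative only.)"""

    def __init__(self, max_dwell_s=0.001):
        self.max_dwell_s = max_dwell_s
        self.last_angle = None
        self.last_change = time.monotonic()

    def update(self, angle):
        """Feed in the latest mirror angle; returns True if the beam may stay on."""
        now = time.monotonic()
        if angle != self.last_angle:
            self.last_angle = angle
            self.last_change = now
        return (now - self.last_change) <= self.max_dwell_s
```

In a real high-power system this check would live in hardware or firmware with a guaranteed response time, not in polled application code.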
I always thought you would need a combination of sensors: ultraviolet lasers for the front of the vehicle, infrared for the rear, cameras, radar, and a vehicle-to-vehicle communication system. For the V2V communication, I always thought the car could send out a very high-frequency, short-range signal carrying some basic data only, such as its speed and whether the brakes are on.
... but now I've read this line "... if you want to enter this space, and take on the likes of Waymo and BMW and Ford, you'll need deep pockets – tens of thousands of dollars per test vehicle" I'm not going to bother.
If I'd known it was going to cost me tens of thousands of dollars for a test vehicle I wouldn't have even considered this idea. I just don't have pockets that deep.
When stuff like tellies in the home were too expensive to buy, people got them on the never never instead.
Same applied at work, e.g. to logic analysers in the electronics lab: a posh one might be too expensive to be affordable to buy so it would be rented from LabHire or Livingston or whoever (especially if it was only needed for a few months on a specific sub-project).
There's obviously some magic in the LiDaR world that makes that not an option any more. Perhaps it's that the money a startup spends on hiring physical assets (e.g. equipment) doesn't go on the pre-IPO spin+PR budget. Or maybe there's some other reason.
Sounds more like the bespoke plastic optics and complex mixed-signal ASIC(s) that they're relying on to bring the cost down don't exist yet, so that little module is rammed with general purpose optics, analogue and FPGAs, at eyewatering BoM and manufacturing cost.
Once the design's stable and they know what they want to make, then the lead time for all of the above begins - and it's not short, and no amount of wishing will make it much shorter.
(Veteran of a sorta-similar project)
"When stuff like tellies in the home were too expensive to buy, people got them on the never never instead."
Your ability to pay off the TV loan didn't rely on your watching the TV, so it could be assessed on your earning history.
The ability to pay off an R&D cost relies on the outcome being successful enough to create future earnings. That means there's no history on which to rely. You won't be able to go to a hire purchase company for that. The people who'll be lending money on that scale in that sort of way are going to want your first-born... a slice of the company. It's called venture capitalism, and the pre-IPO spin and PR are also factors in the VCs being able to get their money back. They won't see R&D and PR as alternatives, they'll see them as complementary.
"Wayne Seto, a product line manager at Velodyne LIDAR told The Register his company was not comfortable disclosing its current price list"
That's quite all right, I'm not comfortable seeing those extortionate rip-off prices either. Something that should indeed be feasible to make for $200-300 should not ever cost ten to a hundred grand - I don't give a flying fuck whether tiny elves hand-load each photon into those lasers; they quite clearly expect you to pay for the college education of the kids of a "production team" larger than a Hollywood blockbuster's, plus several new yachts for the CEO, each time you buy one of these gadgets. Sorry boys, if all I ever do is take out the trash once a month and I reckon I "deserve" a ten grand monthly income, that does not mean "it costs" ten grand to take out some trash.

Not that it matters all that much in the end - this is very much an ugly crutch, the equivalent of having your car bristling with "whisker" touch sensors all around to figure out when you're about to hit something. The future belongs to pure "vision" systems that use nothing but simple cameras to see much the way we do - admittedly, I never said that would be the near future...
Yet again somebody here is suggesting that a device should be sold at its bill of materials, whilst wilfully ignoring development costs.
This isn't the only company making Lidar kit, yet its competitors aren't able to drastically undercut its current prices. That observation should cause a thinking person to pause and examine their assumptions before commenting.
However, the first one to reduce the BoM by 90% will be happy: able to shift volume, crippling the competitors rather than themselves. No matter how deep your pockets, it's nice to have a gap between your costs and your price, to pay off the development / fund more of the same / spend on beer.
I'm not sure that all of these LiDAR manufacturers will still be in the game in 5 years.
DB - have a look at short's comment above. The price only gets down that low once you can amortise custom silicon (processing) and fabrication tooling (lenses etc.) over hundreds of thousands of units. The wafer mask (setup cost) of a cutting-edge chip is now around US$10M. Otherwise you have to custom build it with lots of much larger, very expensive parts. For example, for processing think of a compute board similar to a professional (not gamer) GPU - one of these for every processing slice you need (say, each direction you want to sense obstacles).
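The amortisation arithmetic is brutal and easy to check (the US$10M mask figure is from the comment above; the $200 BoM target is the sort of number quoted elsewhere in the thread, and both are illustrative):

```python
def unit_cost(nre_usd, bom_usd, units):
    """Per-unit cost once fixed NRE (e.g. a mask set) is spread over the run."""
    return bom_usd + nre_usd / units

# $10M of NRE on top of a $200 BoM, at various production volumes:
for units in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{units:>9} units -> ${unit_cost(10e6, 200.0, units):,.0f} each")
```

At 100,000 units the NRE adds $100 per unit; at 1,000 units it adds $10,000, which is roughly where the current hand-built pricing sits.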
Clearly you haven't worked with LIDAR or processed their images.
I can tell you that the units used by Google and HERE are very expensive and very accurate. Far more accurate than you would need for a self driving car. I mean you can read the speed limit and the letters STOP on signs.
There's more than just the cost of the components included in the cost of the equipment. Certification, for one.
There's more to self-driving cars than just the issue of LIDAR and cameras. Your GPS also has to be more accurate, and the maps themselves have to be more accurate. This gives the Germans an advantage, and it's why their buying HERE makes sense.
Is it even really viable in the real world?
A) Easily jammed or spoofed?
B) What if every vehicle is using it, interference?
Also, GPS doesn't work well if there are tall buildings etc., and only Galileo might be accurate enough. But is the response time fast enough?
There should be more off road testing in simulated environments. It's just cost cutting and irresponsible to test this embryonic tech on public roads.
Considered, and put it on the 'Edge case, hard, solve later, got a product / demo to ship ASAP' pile.
Self-driving cars strike me as the hardest, most edge-case-ridden, timescale-pressured project I can think of. If I was younger, I'd be trying to land a job, I think. Sounds like fun.
Yeah, I'm not ready to assume they have worked out all the issues around interference if the whole road were full of LIDAR vehicles, let alone the issues around spoofing if you wanted to induce a self-driving car to think an object was suddenly blocking the road to make it stop (and then kidnap Liam Neeson's daughter, or whatever).
The first task is to get self driving working with individual cars, they can assume away problems of a road full of them like interference and security for now by saying "well, when the time comes for deployment surely the LIDAR manufacturers will have figured out a device using a dynamically modulated carrier that allows us to detect one car's LIDAR from every other car's and protect us perfectly against spoofing"
"... if you wanted to induce a self driving car to think an object was suddenly blocking the road to make it stop ..."
Just push a pram into the road, it would be easier than spoofing LIDAR returns. That would (should) work for human driven cars too.
Yes, there are many issues to be considered that have not been discussed (at least openly).
In light of recent security problems with Windows and WannaCry, I would be concerned about keeping the computing features online.
Having a loose computer security policy in your entertainment system is no big deal. But having mission critical functional features that can become faulty by network security penetration is totally unacceptable.
Further, what about lifetime support? Do you have to be concerned about having security updates disappear after 12 years?
Guess I'll stay with my '71 El Camino.
... if lidars really are so important. We humans seem to manage fine just with ordinary vision, and depth perception from owning a pair of eyes works fine most of the time. Given the progress in so-called AI, I would guess that computer-simulated depth perception from collated vision on a pair (or more) of cameras may only improve, and quickly. Yes, there are limitations, but I guess vision is not going to be the only source of information.
"We humans seem to manage fine just with ordinary vision, and depth perception from owning a pair of eyes works fine most of the time"
Not sure we do. Try a YouTube search for car crashes and see just how many were caused by lack of awareness.
Humans crash and injure/kill themselves all the time. It is accepted that they do that. It would not be acceptable for an AI car to crash and kill its occupants all the time. Look at the fallout from the Tesla system that killed the guy when he went into the side of a truck. If he had been in a normal car it wouldn't have made the news. With an AI system in charge it becomes global news for months.
Most crashes are due to drugs, alcohol or youthful aggression. Some are due to fatigue or texting at the wheel. Sane, sober, rested humans are pretty amazing and can cope with the unexpected, but self-driving cars cannot, as they are not real AI (that's marketing): they are huge databases, massive expensive arrays of sensors and basically "Expert System" software, with about zero ability to cope with scenarios the human programmer didn't envision. They also have massive security and privacy issues.
An on/off disable system coupled to an alcohol and drugs sensor. A sensor for "head droop" etc. due to exhaustion. Let's solve computer-aided driving first.
"...Sane, sober, rested humans are pretty amazing and can cope with the unexpected..."
Well, I would disagree, also about the statistics on the causes of crashes. Study after study has shown that humans behave in varying and unexpected ways when faced with unexpected situations. Some run, some freeze, some head towards danger, some run towards a known dead end.
When there is a major issue in a vehicle, many humans will deal with it incorrectly. They may brake hard and carry straight on, whereas a combination of braking and steering, followed by easing off the brakes, would have been better. A motorbiker will sometimes panic, brake, and head straight towards the danger object (a car on the other side of the road, for instance) rather than looking where they want to go and easing the bike over further. Car drivers will often stare at a road accident on the other side of a dual carriageway and reduce their speed to a crawl without realising it, causing someone else, also looking at the other side of the road, to crash into them.
There are so many situations where a human will not carry out the optimum action, and it is only when they experience those situations that they can learn from them (hence why new drivers are so much more likely to have an accident). A computer can be programmed with the optimum action for situations that can be simulated (at a virtual 100x speed) and can also have loaded into it the combined knowledge of every other car on the road that has encountered different situations. Google at I/O were talking about how their image-recognition rate has now surpassed human subjects (although I haven't seen the data or study for this).
I could never see how a self driving car could ever work 5 years ago, however I'm prepared to acknowledge that they could be a universal feature on the roads in my lifetime.
The optimum action for something simple, like a computer reading a JPG image, ought to be relatively simple and doesn't include unauthenticated remote code execution, i.e. security exploits. Any such basic security exploit (buffer overflows etc.) ought to have been fairly simple to prevent using readily available tools, skills, and personnel, but twenty years of shiny software development, together with unhelpful relationships with the "security services", seems to have led to it being too much effort to get simple stuff right.
Plenty of similar examples, before we get into things like the logistics of keeping a fleet of vehicles and their control systems up to date.
What's going to be different with self driving vehicles ?
In order to perceive distance you first have to perceive the objects in the visual field. That in turn involves edge detection. Then you have to correlate the relative positions of the objects as seen from the two eye points and the feedback from the muscles controlling the eyes. It's all massively parallel - some of the processing seems to be done in the retina itself. And none of it is conscious so I'm not sure that the I bit of AI applies.
"We humans seem to manage fine just with ordinary vision, and depth perception from owning a pair of eyes works fine most of the time"
That's because we have a massively parallel computer backing up the vision system that also has a lot of fairly hard-wired stuff supporting it.
Well - normal people do. I don't know if it applies to politicians :-)
Understanding depth by using one eye is a really difficult problem. However, using two eyes to measure depth nearby is relatively easy, and AI labs have been doing it for a long time.
When driving, humans do not use stereo vision; the distances are too great. But a computer can have two or more cameras spaced well apart. The hard part is recognising that an edge seen from the two cameras is actually from the same object. Then a simple bit of trig gives you the distance.
I am surprised this approach is not used. But I suspect that the car manufacturers are more auto engineers hacking AI rather than AI researchers.
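The "simple bit of trig" is the standard pinhole-stereo relation Z = f·B/d. A sketch with made-up numbers, glossing over the correspondence problem (which, as noted above, is the genuinely hard part):

```python
def stereo_range_m(baseline_m, focal_px, disparity_px):
    """Pinhole stereo: range Z = f * B / d, with the focal length f and the
    disparity d both measured in pixels, and the baseline B in metres."""
    return focal_px * baseline_m / disparity_px

# Two cameras spaced 1 m apart on the car, ~1000 px focal length:
print(stereo_range_m(1.0, 1000.0, 20.0))  # a 20 px disparity puts the object at 50 m
print(stereo_range_m(1.0, 1000.0, 1.0))   # at 1 px of disparity the estimate is 1000 m
```

The second line is why the wide baseline matters: range error grows with distance, and a sub-pixel disparity error at long range swamps the estimate.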
The comments have links to cheaper modules. If you have that interesting mapping project there are options.
A 6-month wait for top-of-the-line gear... they must be well ahead of the competition.
There are many start-ups in this space. Some are well funded. If you don't have a specific approach, but have lots of money, you buy the best equipment available. That way you won't be limited by your sensors. Or something.
The real reason is that spending money for gear is seen as Doing Something. Spending lots of money for top-end gear shows that you are a Professional and that you are Doing Something Important.
A LIDAR system is useful, but it's not absolutely required. Beyond the sensors, it gives you a quick way to provide a 3-D model to your system. All pre-built and ready to go. Otherwise you would have to develop your own, which takes time and expertise.
The alternative is a combination of vision, radar, and ultrasonic systems. A vision system can use multiple cameras ("binocular" distance estimation), focus, other optical tricks, object recognition and motion tracking to build a 3-D world model. It's more work than just buying an off-the-shelf system, but you eventually need to do some of that work to move from "don't run into something in the parking lot" to handling moving vehicles.
Vision systems also have the practical advantage of being something we understand. Not just for the benefit of developers, but for convincing the public that self-driving vehicles are safe enough and convincing the jury that it wasn't the machine's fault.
The article also mentioned training up your vision system against LiDAR data - that makes some sense. If that's a planned route to ditching the LiDAR, then nobody really cares how expensive they are in low volume, as long as they work and can be bought. 6 months delay will concentrate the mind, though. Maybe people will design the systems while they wait, rather than just knocking out some code and hoping it compiles?
Biting the hand that feeds IT © 1998–2019