144 trillion operations per second
"...its chip could do 144 trillion operations per second" - AI, as an autopilot, does not need that much.
Tesla claims to have built a "fully self driving" (FSD) system using custom-designed math processors, allowing its vehicles to potentially drive themselves completely autonomously. At an all-day event at the company's Los Angeles headquarters on Monday, executives were on stage to outline the computer system, which has been …
I'd love to see something that would back your statement up.
To be honest, I'm just moving past basic theoretical understanding of neural networks and moving into application. I've been very interested in reducing transform complexity and therefore reducing the number of operations per second for a given "AI" operation. Think of me as the guy who would spend 2 months hand coding and optimizing assembler back in the 90's to draw a few pixels faster. (I did that too)
I don't entirely agree, from my current understanding, with the blanket statement that it wouldn't need that much. I believe at the moment that there are other bottlenecks to solve first, but at least in my experience, processing convolutional networks in real time from multiple high-resolution sources at multiple frequency ranges could probably use all 144 trillion operations and then some.
Do you have something that would back up your statement? I'd love to see it, for a better understanding of the topic.
Very difficult to calculate because I don't know everything, but I'll try.
Let's assume that the Tesla AI paragraph averages 100 words: 50 patterns before creating synonymous clusters, and 200 patterns after. Suppose there are 1,000 subtext paragraphs surrounding it (dictionary definitions plus examples, which define its meaning/context), at 100 patterns each = 100,000 patterns in total: 200 + 100,000 = 100,200 patterns.
Tesla computer does not need to create them each time, they are hard coded.
Next, Tesla = eight cameras, 12 ultrasonic sensors, and a front-facing radar = 21 sensors;
each sensor provides 100,200 patterns per second = approximately 2.1 million patterns per second, matched against the 100,200 AI patterns.
Thus we get less than 300 billion operations per second, plus perhaps 1 trillion (? - I have no idea how that part works) for commands; but Tesla's processor = 144 trillion.
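The back-of-envelope arithmetic above can be written out as a sketch (every figure here is this estimate's assumption, not a Tesla spec):

```python
# All numbers are the assumptions from the estimate above, not Tesla specs.
patterns_per_paragraph = 200          # patterns after synonymous clustering
subtext_patterns = 1_000 * 100        # 1,000 subtext paragraphs x 100 patterns each
total_patterns = patterns_per_paragraph + subtext_patterns   # 100,200

sensors = 8 + 12 + 1                  # cameras + ultrasonic sensors + radar = 21
incoming_per_second = sensors * total_patterns               # ~2.1 million

# match every incoming pattern against the 100,200 stored AI patterns
ops_per_second = incoming_per_second * total_patterns
print(f"{ops_per_second:,}")          # 210,840,840,000 - under 300 billion
```

which is where the "less than 300 billion operations per second" figure comes from.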
Why the definitions? Without them you cannot create synonymous clusters. There is a sentence:
-- After this publication was made, the correspondence was lost and our mail failed to deliver.
There are three synonyms here: "publication", "correspondence" and "mail"; and so "was made", "was lost", "failed" and "deliver" refer to all three: these twelve patterns are the sentence's synonymous cluster.
If you don't catch these three synonyms - because you profile/train on terabytes and don't see the words - you can lose 75% of possible matches, and therefore 75% of the information cannot be found. Add one more synonym and you lose more than 93%! The same goes for paragraphs, if you have more than one sentence (with synonyms) in them.
Are you ready to lose up to 99% of your information? Train by terabytes!
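One way to make those percentages hang together (this is my reading of the arithmetic, not something stated explicitly above): assume a missed synonym cluster leaves only a quarter of a sentence's patterns findable, and that the loss compounds per sentence in a paragraph:

```python
# Assumption: a missed synonym cluster leaves 1/4 of a sentence's
# patterns findable; losses compound across sentences in a paragraph.
keep_per_sentence = 0.25

for sentences in (1, 2, 3):
    lost = 1 - keep_per_sentence ** sentences
    print(f"{sentences} sentence(s): {lost:.2%} of matches lost")
# 1 sentence(s): 75.00% of matches lost
# 2 sentence(s): 93.75% of matches lost
# 3 sentence(s): 98.44% of matches lost
```

which lines up with the 75%, "more than 93%" and "up to 99%" figures.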
Dictionary definitions allow unique indexing of data, so that the desired AI paragraph with the desired patterns can be found by comparing a few hundred bytes, a few kilobytes at most. Indeed, the computer literally remembers and understands what is wanted of it.
The training by terabytes doesn't work! Look how Google Waymo lost years on terabytes.
A theoretical understanding of ANNs is just the start. The problem is we understand how they work at a high level - data in, matches out - and we understand them at a very low level - back propagation changes weightings, which changes the neuron/synapse combinations and/or the values required to fire. What no one really understands is what is going on in between, e.g. how exactly the dozens or even hundreds of hidden linked layers actually achieve their results. The whole seems to be greater than the sum of the parts with ANNs, and anyone who tells you they completely grok the processes going on deep inside is deluded or lying. And until we really understand how they do it, we can never be 100% certain how they'll behave in a given situation.
Pattern weights are almost always irrational numbers that the computer has to round, and it rounds them differently depending on how the result is computed. For example, 0.(3) sometimes comes out as 0.3 and sometimes as 0.4. The number of patterns is huge, and the computational error accumulates: often, in identical situations, the AI gives different answers. That's my observation, from talking to lexical clones.
How to deal with it? Statistics helps, i.e. training. The AI gets feedback, and if the result is positive, etc... the basics of cybernetics. People learn the same way - remember school and college?
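The kernel of truth here is that IEEE floating-point arithmetic is not associative: the same numbers summed in a different order round differently, and over enough operations the drift becomes visible. A minimal demonstration:

```python
# Floating-point addition rounds after every step, so grouping matters.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))    # False: 0.6000000000000001 vs 0.6

# Over many operations the error accumulates and depends on order.
import random
random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
print(sum(xs) == sum(reversed(xs)))  # often False: same data, different rounding
```

So "same inputs, different answer" can happen without any mystery - it only takes a different evaluation order.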
"So we have all these Teslas sending data/video to where?"
The same place your wetware sends most of its petabytes. The bit bucket.
The ability to _FORGET_ and filter is one of the more crucial requirements for intelligence and development. The important part is knowing WHAT to filter and forget.
Didn't PT Barnum say something about suckers being born every minute? Without any hardcore testing of these chips and the firmware the claim is pure BS. Give me a minimum of 5 years of widespread testing in all conditions with the system performing flawlessly (per the hype) and then I might grudgingly grant you are close. A minimum of 10 years before I believe you have likely seen about every reasonable driving situation someone, somewhere will face.
Exactly. I'd want to see about a decade of driving results that are substantially better than the typical Human before I accepted any claim out of his gob.
Fully autonomous? Until I, a totally blind person without a drivers license, can climb into the passenger seat, tell it where to go, & not have to take control for *any* reason then you don't get to make that claim. If I have to be in the driver seat, have a license, & be able to take control in an emergency, then it's not a *fully* autonomous ride at all.
You want to offer a robo taxi? Fine. I'll believe it when I hear that insurance companies will insure your cars. If they won't insure them then you won't be able to offer them & nobody will get to ride in them.
Mr. Barnum would marvel at the audacity the MuskRat shows every time he opens his muzzle, amazed at the suckers that believe the BS that comes out of it, & turn green with envy over the amounts of cash the bastard rakes in for having done it.
"I'd want to see about a decade of driving results that are substantially better than the typical Human before I accepted any claim out of his gob."
It isn't that hard to drive better than a human most of the time, and it isn't that hard to drive better than most humans.
The bigger problem is programmatically adhering to rigid rules such as "PEDESTRIANS WILL NOT CROSS THE ROAD EXCEPT AT DESIGNATED POINTS", which sound fine to wetware but when interpreted literally result in automated killing machines roaming the streets mowing down pedestrians who dare to try.
Thankfully only the USA and a couple of other authoritarian countries have such "car is KING" type rules, and even then, "common sense" gets applied by human drivers. A machine programmed with such rules that runs into a parade and marching band won't stop for them, which will make for a whole new interpretation of American (road) Pies.
Google have got it pretty much right. Uber utterly fucked up.
So, then, Level 1.9993 autonomy? Good enough for Elon and (too) many others. I am inclined to think that Silicon Valley success criteria -- "Works much of the time" -- are NOT going to prove in the long run to be adequate for autonomous vehicles.
Recommended reading -- Richard Feynman, Appendix F to the Rogers Commission Report on the loss of the Space Shuttle Challenger. https://science.ksc.nasa.gov/shuttle/missions/51-l/docs/rogers-commission/Appendix-F.txt
Why won't they be? They don't have to be better than Lewis Hamilton - just better than the average human driver - which is a pretty low bar tbh.
If Tesla can show the insurance companies that a Tesla will have fewer accidents than a human over a relevant statistical sample then they'll be queuing up - especially since they could still charge an extra "AI premium" and cream off profits for a few years.
The Feynman report, whilst an excellent example of no-BS scientific communication, isn't relevant...
They need to be much better than the 'average' driver, machines killing people at the same rate as clumsy people is not going to be acceptable.
One way to make sure they're good enough is to make the CEO (and other Board members) personally liable for their systems mistakes as if they were at the controls.
"Why won't they be? They don't have to be better than Lewis Hamilton - just better than the average human driver - which is a pretty low bar tbh."
The average human driver is actually very good, and orders of magnitude better than any automatic system we can produce. In the UK, on average, a car occupant is killed once every 4 million miles travelled.
And that average is dragged down because it includes the drink drivers and boy racers. Middle-aged sensible drivers do far better.
Also, a fair number of accidents are also caused by distractions whether that be fiddling to find a particular station on the radio, texting, or what have you. The big issue with this assistive technology is that some people are using it as an excuse to be distracted further. As a result the idiot who is flashing back and forth between distraction and road is now allowing the distraction to take up much more of their time which likely results in a much worse accident when the automation misses the mark.
"They don't have to be better than Lewis Hamilton - just better than the average human driver - which is a pretty low bar tbh."
Amusingly I would guess that it's far easier to be faster than Lewis Hamilton on a racing circuit (ignoring overtaking) than to drive around a crowded city without crashing into anything.
"If Tesla can show the insurance companies that a Tesla will have fewer accidents than a human over a relevant statistical sample then they'll be queuing up - especially since they could still charge an extra "AI premium" and cream off profits for a few years."
They'll charge the extra premium on repair costs, however that won't last long - because:
As soon as it's shown that robocars in control have statistically fewer big crashes AND small dings than humans, you'll see premiums for human-controlled vehicles (or humans taking control of robocars) start to climb steadily whilst robocar premiums either stay steady or decline. The video evidence will be compelling proof that humans are driving into robocars, so insurers will probably start foregoing "knock-for-knock" handling of claims, leaving humans facing increasingly $$LARGE repair/medical liabilities. There will still be vandalism, hit-and-run whilst parked and other claims to deal with, but that constant surveillance is going to make anonymous damage to an unattended vehicle pretty much a thing of the past, and will land a lot of people with criminal records for behaviour they've been getting away with for years.
That in turn is going to result in a knee point of adoption.
Think it won't happen? Look at photos of cities in the early 20th century and look at how fast the transition from mostly horse+cart to mostly motor-vehicle was (about 15 years)
The flipside of this is that personal vehicle ownership is likely to start declining. The single most expensive part of a taxi is the driver. Eliminate the wetware and it's far cheaper to use a hire vehicle than to pay the standing costs of a personal one - personal vehicles will become the preserve of the rich again and pedestrians are likely to reclaim the streets as car-park jammed curbs become a thing of the past.
This will have knock-on effects too. Westminster and other councils haven't planned for this change and they stand to lose 30-50% of their income very quickly - even in the short term, a robocar can be instructed to go park somewhere cheaper. They're going to be scrambling to make up the difference, as are podunk shitholes that make money by setting up speedtraps.
"Maybe their new chip will..."
Comparing oranges and applesauce there sunshine. The older Autopilot very explicitly came with warnings that it COULD NOT detect and stop for stationary objects in its path when travelling in excess of fifty miles per hour.
Letting your cruise control drive into such things is on par with putting your hand into a blender and then complaining that your fingers hurt.
"Comparing oranges and applesauce there sunshine. The older Autopilot very explicitly came with warnings that it COULD NOT detect and stop for stationary objects in its path when travelling in excess of fifty miles per hour."
That might wash if some fucking hyping idiot had not called it "Autopilot" and had called it "fancy cruise control" instead.. wonder which self-promoting twat that was? Elon?
People have died because of the twat's ego..
If all cars/trucks were autonomous, if all roads were well maintained (even the side roads in neighborhoods, etc.) and if pedestrians, motorcycles, bicycles, etc. were restricted from using roads, then maybe autonomous vehicles would work. Until the human factor is removed, there's still too much variability in human actions for the vehicles to contend with.
Autonomous vehicles can probably be made to work in a few situations in the next few years. Long haul trucking is one. Get on the expressway. Stay in your lane. Don't run into things. Don't pass anything that isn't stationary or moving VERY slowly. And a couple of thousand additional rules will probably prove to be good enough. But what's the point? You'll need a human driver on board to handle unexpected situations. Eventually, perhaps the driver can be dispensed with and only dispatched when required. But that'll require vehicles smart enough to recognize situations beyond their ken, pull over safely, and call for help.
The other likely situation is campus shuttles, airport parking lot shuttles and the like. They move very slowly on fixed routes. As long as they don't run into/over stuff/people/pets, maybe they don't really need a full time driver on board.
But I think fully autonomous vehicles are likely a lot further off than most folks think. I suspect insurance underwriters are probably going to share my skepticism.
And something that troubles me: while some common vehicle safety features like seat belts work really well, others, like automated braking systems and electronic traction control, notoriously don't work very well in some situations - e.g. some unpaved roads, ice and snow. In the worst cases, they actively endanger those in vehicles as well as bystanders. Yet the problems are ignored. In some cases these things are hard to turn off, and in a few it is actually illegal to do so. Are we going to have similar problems with autonomous vehicles?
1. "Serve the public trust"
2. "Protect the innocent"
3. "Uphold the law"
4. "Any attempt to arrest a senior officer of OCP results in shutdown"
238. "Avoid destructive behavior"
239. "Be accessible"
240. "Participate in group activities"
241. "Avoid interpersonal conflicts"
242. "Avoid premature value judgements"
243. "Pool opinions before expressing yourself"
244. "Discourage feelings of negativity and hostility"
245. "If you haven't got anything nice to say don't talk"
246. "Don't rush traffic lights"
247. "Don't run through puddles and splash pedestrians or other cars"
248. "Don't say that you are always prompt when you are not"
249. "Don't be over-sensitive to the hostility and negativity of others"
250. "Don't walk across a ball room floor swinging your arms"
Long haul trucking is one. Get on the expressway. Stay in your lane. Don't run into things. Don't pass anything that isn't stationary or moving VERY slowly.
Sounds like a train, which would be a much better way to move containerized goods long haul, with the trucks reserved just for the endpoint collection/distribution.
Unfortunately the problem there isn't technical, it's truck drivers & their unions.
It's also legislative, because we could take it a step further and use ships where possible. The problem in the US is the Jones Act, which keeps foreign merchant(*) vessels, container ships in particular, from stopping in more than one US port. That means all goods destined for the US must be disgorged in one place rather than hitting several ports before the ship heads back across the seas.
Ideally, ships would handle all coastal movement, trains would carry it inland, and trucks would only do the short-haul local moving. There wouldn't be anywhere near as many trucks on the road, which would help both traffic and smog as well as probably save a whole bunch of money. Pity it will never happen, because we have to protect the jobs of truck drivers, shipbuilders, and merchant mariners.
(*) It's the Passenger Vessel Services Act for cruise ships.
I get the impression that in your world there are no crashes caused by human error..
In my world virtually all crashes are caused by human error - generally inattention or impatience.
That is what these vehicles are being designed to reduce - and yet somehow they are required to be infinitely better than humans before you'd consider them. That's a very odd train of thought.
By stuck I don't mean careering off the road or colliding with something - which a computer is very good at avoiding, but the car being literally stuck behind something and unable to progress due to being unable to take the initiative needed to proceed.
For example, when a driver encounters parked cars on their side of the road, or a slow cyclist, they have to decide whether and when it is safe to move into the oncoming lane and pass. That includes being able to pull out into oncoming traffic based on judging distances. It isn't enough to simply brake behind the parked car ahead and wait for the road to be entirely clear - on some roads, at busy times, you'd end up stuck.
"when a driver encounters parked cars on their side of the road or a slow cyclist they have to decide whether and when it is safe to move into the oncoming lane and pass them."
I've watched SUV drivers put cyclists into hedges on lanes - breaking bones on more than one occasion.
I've watched those same SUV drivers _SCREAM_ at me to get my car out of the way because they can't get their tanks past me, despite having 8 feet of clearance between my car and the other hedge.
I've watched an elderly boy racer take off every single wing mirror on parked cars for 100 yards, and not stop (he must've been about 70)
I trust a machine to measure such things far more accurately than any human AND to give adequate clearance to other users.
You may THINK you are a better driver than average - but apparently so do 80% of other road users - and the better you think you are, the worse the driving usually is.
Currently when they get stuck a human driver has to take over.
I don't know how true that is (isn't the human there for emergencies - like a driving instructor with dual-controls?), nor what value of "Currently" you're using. But taking that at face value, a human driver could presumably be in a control centre, enabling a pool of drivers to deal with "stuck" situations for a much larger pool of vehicles.
Human drivers don't get stuck.
Oh my. You've led a sheltered life!
(erm, for the record I'm not one of your downvotes. Nor your upvote).
"There's always an Audi to break the deadlock."
And the cyclist - one of my cow-orkers ended up underneath such a vehicle with a shattered skull and months off work.
The driver's excuse? She didn't see him as she came off the roundabout at 40mph and straight over the top of him - and tried to drive off before being stopped by witnesses.
One of the benefits of having robocars is that their driving standards will rapidly result in MINIMUM human standards being sharply increased - after all, if you're an incompetent driver you can get a machine to do it. This will be driven by insurers and they WILL require periodic retesting just like they did in aviation before it became law. No play, no premium - and you won't be discriminated against if they refuse to insure you as you can always get a machine to drive you.
"Currently when they get stuck a human driver has to take over."
The human driver doesn't have to be onboard.
"Human drivers don't get stuck."
Uh yeah. Right. You haven't been watching the same "bad driving UK/Oz/USA/NZ/wherever" youtube videos I have.
Even Lewis Hamilton has off days. Mere mortals make mistakes every couple of minutes; some drivers are just plain asleep at the wheel whilst others have a death wish. Road rules and safety regulations are generally set up so that it takes _at least_ 3 (if not 5) serious errors to cause a crash (sometimes the errors are the road designer's), but crashes still happen regularly.
Having _consistent_ driving on the roads will bring its own benefits:
Slow rural drivers in particular are a dire statistical menace all of their own, as they put everyone trying to pass them on the wrong side of the road (speeders only put themselves on the wrong side) - this is why inconsistent speed limits for different vehicle types are dangerous.
City gridlock is invariably caused by arseholes trying to barge their way through, blocking intersections and choking narrow points.
Then there are the rat runs, etc etc.
"Uh yeah. Right. You haven't been watching the same "bad driving UK/Oz/USA/NZ/wherever" youtube videos I have."
I've seen loads of those videos, both online and in TV programmes dedicated to them. On the other hand, I've averaged in the region of 50,000 miles per year over the last 25 years or so and I've seen far, far fewer incidents in real life. I've seen a fair few idiots doing stupid things and nearly coming a cropper. I've seen the results of accidents (and been stuck in the queues) a number of times too. But in my personal experience, the numbers of idiots on the road is relatively small. I've been the one taking evasive action a number times too, but I can't stress enough that those times and experiences are the exception rather than the rule. All just my personal experience though. YMMV :-)
"That is what these vehicles are being designed to reduce - and yet somehow they are required to be infinitely better than humans before you'd consider them. That's a very odd train of thought."
If someone you know dies in an RTA, you have one or more other drivers to blame and the insurance pays out. Who are you going to blame when it's a robo-car? Especially if $BigCorp is going to be sending in its high-priced lawyers to fight every case, because they don't ever want to admit to faulty hardware or software in their cars.
In my world virtually all crashes are caused by human error... That is what these vehicles are being designed to reduce
The main types of human error that cause crashes are inattention, distraction, overconfidence, intoxication and impaired visibility. What doesn't cause many crashes is complete misinterpretation of the context. That's because we're using a perceptual system that's had millions of years of optimisation for our physical world, and driving in an environment that's been modified to accommodate the limitations of our perceptual system when controlling motor vehicles at speed.
Everything I read about autonomous vehicles makes it clear that they are a long way from this degree of general perception. For example, experienced drivers can infer the direction of an invisible road ahead by checking the line of telegraph posts or trees, and on country lanes at night they become aware of an oncoming car by the loom of its headlights long before they see it.
It's the sort of idea that could only have been dreamed up in a country full of grid-pattern towns and wide, straight highways. In 1911 somebody drove a Ford Model T up Ben Nevis to prove its capability. I can think of plenty of drives in Europe I'd like to see an autonomous car complete before I trust it.
> For example, experienced drivers can infer the direction of an invisible road ahead by checking the line of telegraph posts or trees, and on country lanes at night they become aware of an oncoming car by the loom of its headlights long before they see it.
The presentation shows a car predicting (without maps) the direction a road takes over a crest it cannot see - solved problem.
Until the human factor is removed, there's still too much variability in human actions for the vehicles to contend with.
Never mind the human factor, there's also a huge amount of natural variability. Say, a cat in the road. Or a deer. Many versions of those available. Or for Australian vehicles, cane toads, kangaroos and cassowaries. All challenges that human drivers can potentially learn to deal with better than 'AI' can.
But more hype. Musk promised this years ago, and failed to deliver. Much like the $35k 'people's EV'.
"Earlier this month, researchers outlined how a Tesla could be made to veer off course into oncoming traffic with a few simple stickers on the ground, demonstrating the danger of relying too heavily on video camera inputs."
Nah. That's the danger of relying too heavily on LANE MARKINGS as primary guidance data. Humans rely on a couple video camera inputs and do just fine. I'm still flabbergasted that they were able to trick a system that should have been stress-tested on Highway 101 and its Shittastic Markings by now.
These companies working on self driving cars are not required to fully disclose their research and have no incentive to admit how impossible the task of making self driving cars actually is. They possibly even delude themselves, or perhaps hope legislation will allow them to sell the technology with no liability for accidents.
Getting a car to follow lane markings is pretty achievable, so is detecting an obstacle ahead and braking before hitting it (couldn't both be done 40 years ago?). But as soon as the situation becomes a little different, a little more complicated, then you find you have to start throwing more complicated rules into the AI, and in turn those complicated rules open up more vulnerabilities, which require even more complex rules to patch up.
IMO you need artificial general intelligence to drive cars autonomously, by which point self driving cars would be the least of our concerns.
"These companies working on self driving cars are not required to fully disclose their research"
In the USA.
"and have no incentive to admit how impossible the task of making self driving cars actually is."
In the USA.
That's the same USA where the aviation regulator allowed Boeing to rat-rod the 737 into the NG (which is mildly unstable(*)); shopped whistleblowers to Boeing when they flagged forged documentation on crucial airframe structural members (supposed to be precision CNC-machined, actually made sloppily by hand and beaten to fit; they broke badly in several minor crashes, killing at least 20 people, and corroded badly in service, and will result in airframes falling out of the sky); and then waved the extreme rat-rodding of the 737 MAX and its sloppy software through.
It was a world technology leader once. Now it's leading the world in other areas - such as corruption.
(*) You can't power an NG out of a stall. It's one of the few aircraft in which you MUST put the nose down _FIRST_, then power up. The MAX is even worse - and a stall can happen easily in a low-speed turn, so don't think it's just a matter of "pulling up".
I've worked in a few environments where we did our own HDL development. We worked almost entirely in the FPGA world because we did too much "special purpose" algorithms which would often require field updates... an area not well suited for ASICs.
But, I believe what Tesla is doing here is a mistake.
Large scale ASIC development is generally reserved for a special category of companies for a reason. Yes, their new tensor processor is almost certainly a bunch of very small tensor cores, each of which is relatively easy to get right, and the interconnect is probably a really simple high-speed serial ring bus... so it's probably not much harder than "daisy chaining" a bunch of cores. But even with a superstar chip designer on staff, there are tremendous costs in getting a chip like this right.
Simulation is a problem.
In FPGA, we often just simulate using squiggly lines in a simulator. Then we can synthesize and upload it to a chip. The trial and error cycle is measured in hours and hundreds of dollars.
In ASIC, all the work is often done in FPGA first, but then to route, mask and fab a new chip... especially at this scale, there is a HUGE amount involved. It requires multiple iterations, and there are always issues with power distribution, grounding, routing... and most importantly, heat. Heat is a nightmare in this circumstance. Intel, NVidia, Apple, ARM, etc. probably each spend 25-50% of their R&D budgets simply on putting transistors in just the right places to distribute heat appropriately. It's not really possible to properly simulate the process either... a superstar chip designer probably knows most of the tricks of the trade, but there's more to it than intuition.
Automotive processors must operate under extreme environmental conditions... especially those used in trucks traversing mountains and deserts.
Even if Tesla actually managed to make this happen and build their own processors instead of paying NVidia, AMD or someone similar to do it for them, I see this as a pretty bad idea overall.
Of course, I'd imagine that NVidia is raking Tesla over the coals and making it very difficult for Tesla to reach self-driving in a Model 3 class car, but there has to be a better solution than running an ASIC design company within their own organization. Investing in another company in exchange for favorable prices would have made more sense, I think. Then the development costs could have been spread across multiple organizations.
Once people might have listened to Musk, but between the previous failures to deliver and the new blatant overreach (robot taxis making $$$$ next year?!) no one puts any value on anything he has to say on autonomous vehicles.
And some unkind people might even say he was making a lot up as he went expecting to use the resulting hype to overshadow the imminent release of the Q1 numbers. Pity the reaction seems to have been a combination of 'meh' plus various others (like nVidia) immediately ripping his words apart.
Icon showing live footage from a Shanghai car park. ->
"Tesla claimed its chip could do up to 144 trillion math calculations per second..."
Well, that may be true, but the bottleneck is almost always the interconnect between cores (of which there are undoubtedly many) and the data paths to get the data from the sensors to the appropriate core / buffer.
Parallelising data crunching is also difficult for many algorithms (I have worked in the HPC space, where such things come up regularly), and floating point (the IEEE version) is not fully associative (close, but some rounding errors may have unfortunate consequences), although it is perfectly possible that some version of fixed point is being used.
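The interconnect/data-path point can be put into rough numbers with a back-of-envelope roofline check (every figure below is illustrative, not a measured Tesla spec):

```python
# Illustrative roofline check: raw ops/s are meaningless if the memory
# system can't feed the cores. All figures are assumptions for the sketch.
peak_ops_per_s  = 144e12   # the claimed peak
memory_bw_bytes = 64e9     # an assumed DRAM interface bandwidth, bytes/s
bytes_per_op    = 1        # one int8 operand fetched per op, no on-chip reuse

fed_ops_per_s = memory_bw_bytes / bytes_per_op
achievable = min(peak_ops_per_s, fed_ops_per_s)
print(f"{achievable / peak_ops_per_s:.3%} of peak without data reuse")
```

In other words, under these assumptions the headline number is only reachable if almost every operand comes from on-chip buffers, which is exactly where interconnect and buffer design dominate.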
That said, there are some places self-driving cars will really struggle to ever become a reality, such as Cornish lanes.
The local Cornish approach is far too often "it's OK to drink drive if you're a local, as you know the roads".
A particular hazard on many of these roads is curiously abandoned cars. In the middle of fecking nowhere, about a metre from the edge of the road, usually just after a blind corner at the top or bottom of a hill. No sign of the driver of course, or even why they would stop and park where they do. Other hazards are:
/(still) honorary Cornish I guess...
Cornish lanes are not a problem for Tesla AI!
Tesla AI is texts, ordinary everyday human texts.
Why texts? Any text pattern has a context and a huge number of subtexts (e.g. dictionary definitions, explanatory instructions, etc.). These contexts and subtexts, superimposed on the description of a particular situation (which is also text), allow you to find a single pattern and tell the machine what it should do.
That is, it is enough to describe a certain situation - the Cornish one - once, and there will never be any problem with Tesla AI.
It's quite another to make the software. As a mental exercise, think of all these things that happen quite frequently on the roads - a set of traffic lights that are broken or out, road works, potholes, broken glass or debris on the road, large puddles, a policeman giving directions, diversions, some lorry trying to back into a driveway, narrow lanes with oncoming traffic and spots to yield, an icy hill, a pedestrian crossing, a lollipop lady, a cyclist on a narrow winding lane, road bumps, box junctions, etc. etc. etc.
These are situations that are not covered by a simple set of rules but require a complex understanding of the prevailing conditions and recognition of / cooperation with other humans in order to make a decision. These are basically intractable problems, each and every one.
Software in an autonomous car has to cope with all of that sanely and safely while making good progress in every scenario and permutation. It has to do it at least as well as a human can. If it can't do all that, then Tesla can forget about self-driving cars tooling around without drivers for the foreseeable future.
What I expect to happen instead is that Tesla will dump some features out there which will improve autonomy in limited situations but still won't be anywhere close to a level 5 vehicle. The car might be level 4 in some scenarios, but you'd better believe you will require an alert and attentive driver in every scenario. I actually wonder if they're going to turn some of the processing power in their new module inwards to monitor the driver and ensure they are being attentive.
"It's quite another to make the software."
Exactly what I was going to say. Autonomous cars are not a hardware problem. Sure, there will be benefits, especially in power consumption, from using dedicated chips designed specifically to do the job, but it would already be trivial to get more than enough computing power by just sticking a half-decent workstation laptop under the bonnet. Same for the sensors - arguing about exactly what kind are best and how many you need isn't the big problem, it's figuring how to actually use all that information in a sensible way that is the tricky part.
Tesla will not be controlled by algorithms but by AI, where AI is texts, ordinary everyday human texts. This is the highlight that makes it possible for a car without a driver to exist!
Thanks to my discovered and patented method of text structuring, it is possible to select a pattern from a text, in its explicit context and implicit subtext. And this pattern can be used as a computer command, driving a car.
Why texts? Any textual pattern has a context and a huge number of subtexts (for example, definitions from a dictionary for its words, explanatory instructions, etc). These contexts and subtexts, superimposed on the description of a particular situation (which is a text too), allow you to find a single pattern and tell the car what it should do.
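[As a reader's aid only: stripped of the grand claims, the mechanism this poster describes - a textual scene description matched against stored "patterns" that map to commands - reduces to a lookup like the toy sketch below. The patterns and action names are invented by me for illustration; no real driving stack is remotely this naive.]

```python
# Toy "text pattern -> command" lookup. Each stored pattern is a set of
# keywords (hypothetical) mapping to an action (also hypothetical).
PATTERNS = {
    frozenset({"narrow", "lane", "oncoming", "vehicle"}): "pull_into_passing_place",
    frozenset({"abandoned", "car", "blind", "corner"}): "slow_and_pass_when_clear",
    frozenset({"pedestrian", "crossing", "occupied"}): "stop",
}

def match_pattern(description: str) -> str:
    """Return the action whose keyword pattern is fully present in the scene text."""
    words = set(description.lower().split())
    for keywords, action in PATTERNS.items():
        if keywords <= words:          # all keywords found in the description
            return action
    return "hand_back_to_driver"       # no pattern matched: fall back

print(match_pattern("oncoming vehicle in a narrow cornish lane"))
# → pull_into_passing_place
```

The obvious gap, and the one the replies below press on, is that the fallback branch fires for every situation nobody wrote a pattern for.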
What's text got to do with a runaway, unmarked (read: no text to read) car barrelling towards you on a single-lane road? AND there's another car a short distance behind you? Can an AI think outside the box and realize the only hope is to go OFF the script (and the road)?
Already got one! :- ) Now your text should be tied to a specific situation :-)
This is called machine learning: AI adds structured texts that contain all the necessary patterns for a particular circumstance (sets of commands).
What is a programming language and program?
A programming language is a set of special words (commands) that structure a text / description of what you want and make it understandable to a computer. In other words, a program is a structured text and programmers are translators.
AI structures texts and makes programs out of them automatically, without the participation of programmers / highly paid individuals. Can you imagine how many trillions AI saves on driverless cars alone?
Amazon has already hired thousands of people to annotate texts. I think Tesla, like all other companies, has also hired thousands of people to annotate sensor data, which is the answer to your question.
It all depends on how well used Tesla or Waymo sensors are, and how many different situations Tesla or Waymo have described by texts.
Indeed, today it is possible to structure texts, that is, to decompose them into patterns and make them understandable to a computer, thanks to AI-parsing. Previously, it was done by specially trained people, the so-called programmers. They laid out the texts into patterns using the standards of so-called programming languages, combining their commands and using manually structured data from SQL databases (on the basis of n-gram parsing). Today, the computer does the same thing, much cheaper, faster and better.
That is, if, for example, Tesla has the right sensors and textual descriptions of "a little and MY little kid while the sun is glaring in your lens because it's sunrise or sunset" - some proper actions may occur. For example, lollipops and soda will be issued to both the children.
This is AI - it finds answers according to the NIST TREC demand, and acts accordingly.
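[For reference, the "n-gram parsing" the post contrasts itself with is trivial to show. A minimal Python sketch, with the function name being mine:]

```python
def ngrams(text: str, n: int) -> list[tuple[str, ...]]:
    """Return all contiguous n-token sequences from a text."""
    tokens = text.lower().split()
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Bigrams of a short driving instruction.
print(ngrams("stop at the pedestrian crossing", 2))
# → [('stop', 'at'), ('at', 'the'), ('the', 'pedestrian'), ('pedestrian', 'crossing')]
```

N-grams capture local word order and nothing else, which is precisely why they say nothing about "context and subtexts"; whatever does that work is left unspecified in the post.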
"The aim is to push firmware updates over the air to vehicles to bring their software up to level four, which is what you'd call baseline human-free driving, all powered by the supposedly superior FSD hardware."
So this morning my car behaves differently from the way it did when I drove home yesterday. Nevertheless, until it reaches level four I'm still supposed to take over when the car loses its marbles, but what's going to trigger that today?
"two fully independent math processors on board which both receive full video from the car and make their own independent evaluations before another part of the system compares them to make sure they match. "
Are they running the same software?
How do they avoid the Boeing effect: Self-certification of safety and business pressures work against each other.
Maybe directors need to be in the chair from Marathon Man: https://www.youtube.com/watch?v=kzw1_2b-I7A
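The quoted dual-computer scheme is standard redundancy with a comparator, and a minimal sketch makes the worry concrete. All names here are hypothetical, and nothing below is Tesla's actual design:

```python
from typing import Callable

def redundant_evaluate(frame: bytes,
                       eval_a: Callable[[bytes], str],
                       eval_b: Callable[[bytes], str]) -> str:
    """Run two independent evaluators on the same frame and compare."""
    a, b = eval_a(frame), eval_b(frame)
    if a != b:
        return "DISENGAGE"   # disagreement: fail safe, don't pick a winner
    return a

# Hypothetical evaluators for demonstration. Note they are identical copies.
brake_if_obstacle = lambda f: "BRAKE" if b"obstacle" in f else "CRUISE"
same_logic_copy   = lambda f: "BRAKE" if b"obstacle" in f else "CRUISE"

print(redundant_evaluate(b"obstacle ahead", brake_if_obstacle, same_logic_copy))
# → BRAKE
```

Which answers the software question above in the worst way: if both evaluators run the same code, a common-mode bug passes the comparison in perfect agreement. Avionics mitigates this with dissimilar hardware and independently developed software, and the self-certification question stands regardless.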
"1 million self driving taxis means that everyone who spent north of 60-100K+ on a Tesla is dying to turn their luxury car into a taxi."
No, it means they spent north of 60-100K to have their own personal, clean space, guaranteed free of other people's mess, dirt, left-over takeaway, shit, piss and vomit while they travel.
They already bought the car. They own it. Tesla can't hijack it for use as a taxi while they are not using it without permission/payments.
Either he's going to be sued into oblivion for his claims about a "robot taxi that will let you keep 75% of the earnings" since that's financial information people would use to base purchases on, or he'll be sued into oblivion when a driverless car that's years from being ready to do that kills its first pedestrian while on "robot taxi" duty.
So, no speed limits on them ? If they're "totally safe", then they can drive as fast as they decide. There would be no logical reason whatsoever for externally-mandated speed limits on self-driving vehicles that can provide their own "totally safe" assurance.
Well, perhaps we'd want to avoid sonic booms, so maximum ground speed should be Mach 0.95.
I suspect that we're rapidly heading into a new version of the 'omniscient omnipotent paradox', except with self-driving cars instead of gods.
I'm happy that Elon is happy.
I presume he will therefore have no objections to being required to spend "a while" chained to the front of one of his vehicles, Mad Max Style, as it is tested in a multitude of different driving environments around the world?
Seems reasonable enough to me. . . .
As others have noted, the speed of the processor seems largely irrelevant to the real issue. Fundamentally, the current algorithms that control these “AI” cars cannot, in any sensible way, think for themselves. This is almost certainly deliberate, as it would be all but impossible to predict the behaviour of a self-driving car that could perform such a feat (and therefore, even if we could build such a thing, hugely reckless to unleash it on public roads). Unfortunately, lacking this “think on your feet” ability, these cars are entirely constrained to handling situations they have been explicitly trained on. Given I cannot see how it is possible to train a car for every possible situation, I find it hard to believe any claims of impending success.

My big concern is that many governments will fall for the hype / propaganda and green-light them well before they are even vaguely ready. The current safety claims / comparisons are so ridiculously biased it beggars belief. Comparing the safety record of brand new, high-end sports cars, driven entirely within posted speed limits, almost entirely on highways in largely dry climates, with average human behaviour seems borderline fraudulent. If a comparison were made with the safety record of similar cars, on similar roads, in similar conditions, at similar speeds, excluding intoxicated drivers (as this is illegal, so it seems unfair to include in such a comparison), then I might be willing to reconsider (at least for use on such highways).
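The baseline objection can be made concrete with a toy calculation. Every number below is invented purely for illustration; none are real accident statistics:

```python
# Hypothetical crash rates per million miles (all figures made up).
# The same "autopilot" number looks good or bad depending entirely on
# which human baseline it is compared against.
crashes_per_million_miles = {
    "autopilot (highway, dry, new car)": 0.3,
    "all human driving (all roads, all conditions)": 1.1,
    "human, highway, dry, new car, sober": 0.25,
}

ap = crashes_per_million_miles["autopilot (highway, dry, new car)"]
for group, rate in crashes_per_million_miles.items():
    if group.startswith("autopilot"):
        continue
    verdict = "safer" if ap < rate else "not safer"
    print(f"vs {group}: autopilot looks {verdict} ({ap} vs {rate})")
```

With these invented figures the system beats "average human behaviour" comfortably and loses to the like-for-like baseline, which is the whole point about choosing the comparison group.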
How does one deal with the human element, such as 1. Wanting to pull over suddenly to look at a nice view or take a leak? "Car! Quick, pull into that lay-by!" How will the car know which lay-by? A human passenger would gesture to the driver, pointing. Can the internal camera see what the occupants are doing?
2. And there hasn't been any discussion as to what Tesla's AI will do when faced with two unavoidable collisions:
a) An elderly couple crossing the road, or b) A younger mother with child? Of course, this situation is rare even for humans, but the owner of the vehicle should be able to specify. This will also move the burden of responsibility away from Tesla (or other manufacturer) to the owner for insurance etc. purposes.
IOW, there's no real solution to a Trolley Problem, so it's extremely hard to pick between two such scenarios because each case can be different. Say you choose to run over the elderly couple, but what if the next situation involves your gran? Pick the mother, and then it's your wife and baby next time.
Two other situations bear considering. First, snap object recognition, which even humans can fail at (You can tell the difference between a brick and a bag? Fine, how about a brick in a bag?).
Second, intuitive situations that humans solve without conscious recognition. Since we don't know we're even doing it, we don't know HOW we're doing it, and without the knowledge of how we do it, it's impossible to teach it to another human, let alone a machine.
I'm waiting for someone to prove a few of these serious automation problems to be physically intractable (except for the Trolley Problem, as that's a dilemma and has no real solution by design).
The idea is simple: you need to find the right pattern that helps a driverless car to achieve its goal and not get into an accident. That is possible only knowing the explicit context and implicit subtext of the pattern. That is, by solving the problem posed by NIST TREC QA, creating a system capable of answering both Factoid and Definition (Other) questions.
I answered the NIST TREC challenge by discovering and patenting AI-parsing, which allows structuring a text and extracting 100% of its information. (Only n-gram parsing was known before my discovery.) In addition, the same parsing helps to explain a pattern by its context and subtexts, annotating it with other texts.
That's it, now you can applaud.
You want to see it in a practical application, created... There I am powerless to do anything and, therefore, cannot make unfounded statements.
For 20 years I have not managed to get a penny of investment, and therefore I have to rely on the results achieved by others, obtained through my patented discoveries.
I'm sorry, I hurried. It turns out "Yes", I can! WSJ this morning published "AI Surveillance Systems Can Work Without Facial Recognition":
"Liberty Defense Technologies Inc., based near Atlanta, and Evolv Technology Inc. of Waltham, Mass., sell systems that bounce energy waves off peoples’ clothing and bags, generating shapes and contours that artificial intelligence, based on machine learning from thousands of examples, can interpret as hidden guns, knives or bombs."
Biting the hand that feeds IT © 1998–2019