Re: Just plain embarrassing
(It's Xerox, not Zerox)
Sod it. I do know that. It's late down here in Oz. Mea culpa.
How old are you? Clearly not old enough to understand what was actually going on when the Lisa came out.
The article trots out silliness like fanbois and built-in obsolescence with little to no reason, and clearly no idea.
Some research might have helped. Maybe find out how much stuff cost back then, and what the competitor machines were. And no, the IBM PC was not the competition. There were many many other companies, and a lot of machines about.
Try systems like the Zerox Star, ICL Perq, the LMI and Symbolics Lisp machines. You might also look at the embryonic days of Sun, and just what a Sun-1 was.
An article about the Lisa that just tries to make a case that this was Apple trying to rip off the faithful is something that could only be written by someone who was not even alive at the time. It so wildly misses the mark that it is embarrassing.
Where are the FTTC numbers and forecasts? I have FTTC sitting in the pit outside my house. I know the fibre is lit. Current service availability date is in about 6 weeks. There is a whole slab of the city with FTTC in build. I also know there are whole slabs with FTTN built, but not yet being made available. Some with nodes installed and cabled mid last year slated to become available in October. Somewhere in all of this NBNCo are lowballing.
It would be worth digging into the numbers more.
I would say that the NBN are in the mode of under-promising and getting ready to over-deliver right now. Looking at the current maps for in-build here in SA, there are huge swaths of FTTC in build that are slated to start becoming available in May - areas where the ducts and pits are installed right now, ready to go. Further massive amounts of FTTN are slated for October - many places where there are already node boxes installed, ducted and powered, ready in the streets.
I am suspicious that the NBN is managing expectations versus delivery in a way that will start to make them look really good in a politically useful time-frame. If a large fraction of the populace suddenly start getting FTTC with really good performance this year, much political capital will be made by all involved.
I was one who was rather peeved about the failed app. Especially as I live in a high risk bushfire area.
The app did use to be pretty reasonable, although I thought the GUI design was actually rather B grade (the icons used to denote the events made very poor use of space and were difficult to distinguish). I was rather surprised to discover that the whole system is little more than a data scraper and a server that pushes the scraped data out to the app. The alert data is scraped from various government sources and aggregated. That is it.
The web site for the company gives you the sinking feeling that it is a tiny one- or two-person startup. They probably pitched the app to the SA and Victorian state governments and asked $250,000 a year for it. If they had made it stick they would have been on to a good little earner. No doubt they hoped to get the other states in as well in time, getting to the point where they might be pulling in say $3 mill a year. And really, not for much. But, as noted above, the update to the app last year made it perform worse - they lost my user settings when they did it, and it seems they simply never had the server resources set up, or the required resiliency in the system, to justify the price they asked. Altogether a bit sad, but also a very common story.
Oh, they can do much better than that. They still have the audio, and they have the "telesnaps": every 10 seconds or so, a still image taken from the screen.
It is a bit of a balance between cost and artistic intent how far into the CGI you would go. The reproductions done so far - such as The Moonbase, The Power of the Daleks, The Invasion - seem to have taken a good line. It isn't unreasonable to imagine we will eventually get all the missing episodes back in a watchable form. I saw all of these when they were broadcast, and the reproductions are not too shabby really. (I was of course quite young, but those bits I remember are nicely captured even in the low cost animations used.) The most critical thing is the sound anyway. More of the story is told with the audio than we realise. That the sound is intact is the key.
Seriously, the vast majority of the comments clearly have no clue as to what the device is, or how it works. Everyone seems to be thinking of tech they read about decades ago. First up, look at the name of the technology. "Digital radio frequency memory". There is a clue here.
Radar works by the missile sending a pulse out and listening to the return. In modern systems that pulse shape can be very smart, and radars can see a lot about the target from the pulse return. Not just signal strength, but doppler shift and a host of second order clues. The point of a DRFM system is to spoof the missile's radar, knowing that this is the sort of thing it is going to be trying to do. So how? In principle it is easy. Receive the pulse, record it (the "memory" bit of the name), slam it into a bit of seriously fast custom silicon or FPGA, and work out what the return pulse needs to look like to make the missile think what you would like it to. It isn't necessarily a matter of making the can look like your plane directly. The can can spoof the doppler profile expected from your plane. Even if the missile changes its pulse profile, the DRFM system will continue to work, as it records and replays each pulse as needed. This is not your grandmother's radar jammer.
As to optical missile and guidance systems. Active laser systems are already available with the ability to blind incoming missiles. There will always be a mix of radio and optical threats, the presence of one does not mean you discard managing the other.
It is exactly an arms race. Missiles can have upgraded software to try to work past DRFM systems, and those same DRFM systems can have upgraded software to cope with that.
As described it was never going to be a product. That was just a mish mash of stuff with no clear use case and more importantly - no software to take advantage of it.
IMHO the big problem was that the processors had no sensible architectural support for making use of a world with huge addressable persistent memory. My personal crusade is for tagged memory. HP are one of the few companies that could conceivably create a commercially useful architecture with this. IBM being the obvious other. Tags for data type, whether a word is a pointer, access protection, and concurrency control, at a minimum, would make life vastly more interesting. You can tie concurrency control and pointer identification into your memory network control. Suddenly lots of optimisations are available at a hardware level, and you eliminate a whole raft of crud from software.
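As a sketch of the idea - field widths and names invented here, and of course real tagged architectures carry the tags in hardware alongside each word, not in a software struct:

```c
#include <stdint.h>

/* A tagged memory word: a few tag bits ride alongside every word,
   identifying pointers, data type, protection and concurrency state. */
enum word_kind { KIND_DATA = 0, KIND_POINTER = 1 };

typedef struct {
    uint64_t value;
    unsigned kind     : 1;   /* pointer or plain data   */
    unsigned type     : 4;   /* data type tag           */
    unsigned writable : 1;   /* access protection       */
    unsigned locked   : 1;   /* concurrency-control bit */
} tagged_word;

/* Hardware would trap on a bad store; in this sketch we just check tags. */
static int store_ok(const tagged_word *w)
{
    return w->writable && !w->locked;
}
```

The win argued for above is that checks like this happen on every access for free, in the memory system, rather than as software bookkeeping.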
None of this is exactly new. IBM's AS/400 went a long way down this path, and it was a commercial success. And there were many other small volume and research systems built. But the ubiquity of x86, Windows, and Linux ensures that the barrier to entry for a properly new paradigm is very high.
"What can you do?"
Hinted at in above comments. Simply tell the user exactly what is happening, and provide choice.
A. Actively listening and identifying what is happening.
B. Passively buffering the last five seconds of sound so that it can avoid missing the music.
C. Off. Not doing anything.
Easy. They could even add a config option "remember x seconds of sound" for B. And add the usual disclaimers "app does not retain any sound longer than the xxx seconds. Doing so may reduce battery life." Nobody would care and indeed many people would probably turn the buffer up to whatever the maximum is.
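Option B is nothing more exotic than a ring buffer. A minimal sketch, with invented sizes (16 kHz mono, 16-bit samples, a 5 second window):

```c
#include <stdint.h>
#include <stddef.h>

#define SAMPLE_RATE 16000
#define SECONDS     5
#define BUF_SAMPLES (SAMPLE_RATE * SECONDS)

/* Ring buffer holding only the last SECONDS of audio. Anything older is
   overwritten, so nothing beyond the disclosed window is ever retained. */
typedef struct {
    int16_t samples[BUF_SAMPLES];
    size_t  head;     /* next write position          */
    size_t  filled;   /* how many valid samples held  */
} ring_buffer;

static void rb_push(ring_buffer *rb, int16_t s)
{
    rb->samples[rb->head] = s;
    rb->head = (rb->head + 1) % BUF_SAMPLES;
    if (rb->filled < BUF_SAMPLES)
        rb->filled++;
}
```

A "remember x seconds" config option is just SECONDS made into a variable; the privacy disclaimer writes itself from the buffer size.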
If you read the report, they are taking such elements into account. Rather than trying to work out where the bits individually went, they use ensemble simulations that show where the bits can't reasonably have started. No debris found on the Oz coast eliminates a whole set of possible starting positions. And so on. It looks very much as if they need to start looking north of the area they searched - not by much, but some.
Overall I'm impressed (and a teeny bit proud) at very solid work done.
As noted above, this isn't noise cancellation. Noise cancellation in open spaces is astonishingly difficult; it only works in headphones because the distances involved and the acoustic environment are so small. (And why the heck does everyone think Bose invented it, anyway?) Once you have a reverberant field you are pretty much sunk.
You have been able to buy noise blanketing systems forever. Before they were electronic, a wire brush rotating inside a metal cylinder made for a very effective white noise generator.
All this offering is, is an expensive way of making the noise source an IoT device that looks cool. There is no real technical innovation, from the actual acoustics point of view, over something you could buy 50 years ago.
You will notice that they don't use the word "normal" in describing launches. They use the word "nominal". Nominal means you have nominated how it will fly. Normal is what usually happens. The goal of engineering is to make the two coincident. Sadly, in the early part of pretty much any rocket system's development, "normal" means catastrophic failure.
Really? You would trust a parachute that has never been used versus one that has proven it has no defects in manufacture? Never come across the term "infant mortality" in failure analysis? The most common time for something to fail is when it is used first.
Somehow there is this strange idea that rockets are intrinsically single use. Yet the problem with re-use isn't their design, it is that it is hard to get them back economically. Aircraft are not single use, nor is your car. Why should a rocket be?
Title says it all.
Why exactly the snide comments now?
Flight proven is precisely correct. Until the hardware has flown a mission you don't know if it has any residual flaws that only come to light when a full mission is flown. How many would be happy taking off in an airliner if told it had been delivered on a truck from the factory, and this journey would be the first time it had ever taken off?
This is going to be a self inflicted wound of mortal proportions.
The absolute last thing any purveyor of ads ever wants its customers to know is how much that customer's adverts did for them. Because the answer is: nowhere near as much as the money spent would justify. Ad effectiveness is the big secret. Letting that particular cat out of the bag could ruin Facebook and Google overnight.
This is probably a good thing.
More likely, the system will quietly be decommissioned when the first round of deeply embarrassing results start to appear.
Lordy, what a crock this article is. As noted above, Apple talk about first owners, not product lifetimes. And more importantly, this is information about environmental impact. Apple are being hard on themselves here. It would be in their interests to suggest longer times, and thus improve the environmental rating of their products. Had they suggested significantly longer times they could (and should) have been taken to task for trying to soft-pedal the impact of manufacturing their products.
As any owner knows, Apple stuff tends to be very well built, and outlasts kit from just about anyone else. Further, it keeps its value way longer than other kit. You can argue about why, but the reality is that it does. Viewed from the standpoint of environmental footprint, Apple's products do very well.
Zero chance the FBI or anyone else has the signing keys. That simply isn't the way a proper signing system works. Nobody has access to the keys. Not even anyone within Apple. The idea that there is a safe somewhere with the keys written on a piece of paper is naive. The keys will live in a set of dedicated secure key devices - devices that erase their contents if tampered with - and the signing operation will be a matter of submitting the code to be signed to the secure key system. It gives you the signature back. The key is untouched by human hand. Even if the Supreme Court ordered Apple to hand the key over, there is no technical mechanism to do so. All that can ever be done is to continue to sign things using the system. The key devices are maybe subject to sophisticated technical attack - so if they were all shipped to the NSA, maybe, just maybe, after a few years of effort the keys might be recovered. But the common ideas of bribery, disgruntled employees or espionage finding the keys are fanciful.
Indeed. There is a lot of ignorance about the technical details here.
There is only one critical secret piece of information that matters. The FBI could create the needed firmware themselves. There is nothing special about it. What they can't do is sign it. Only Apple has that ability. The FBI were demanding that Apple do the slog work of writing the special version of the OS (which was going to include code that ensured it only ran on that one phone). No matter who wrote the code, the only thing that really mattered was forcing Apple into signing it. Once signed, the code is immutable; hence if it contains code to target only one phone by its unique ID, it can't be modified to run on any other phone without being signed again.
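The "one phone only" lock is conceptually trivial - a sketch, with invented field names (the real mechanism is Apple's boot chain checking the device's ECID against the signed image):

```c
#include <stdint.h>

/* The target device's unique ID is baked into the image before signing,
   and the signature covers it. Changing the ID invalidates the signature,
   so without re-signing, the check cannot be patched out. */
typedef struct {
    uint64_t target_ecid;   /* unique ID of the one permitted phone */
    /* ... code, data, and the signature over all of it ... */
} firmware_image;

static int firmware_may_run(const firmware_image *fw, uint64_t device_ecid)
{
    return fw->target_ecid == device_ecid;
}
```

Which is the whole argument: the security of the scheme lives entirely in the signature, not in the cleverness of the check.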
All of the questions of exploits, hacking, who writes what, or the "weaponising of the code" come down to one thing. Apple must sign it. There were rumours that the FBI would consider escalating the case to demand Apple hand over the signing key. Now THAT would be bad. Apple fought the orders to write the special OS as once written it becomes too easy for new demands for that same code to be re-signed on demand for new cases.
Letting the signing key out of Apple would be a disaster waiting to happen. One would assume that it is kept in a set of appropriate secure key devices, and is actually not known to anyone.
The value in cooling the fuels is in the Tsiolkovsky rocket equation. The delta velocity of a rocket is proportional to the natural log of the ratio of total mass to dry mass. Anything at all that adds to the dry mass rips performance from the system mercilessly. Eventually every kilogram of dry mass added is a kilogram of payload not reaching orbit (or at least staging). Just making booster tanks bigger adds dry mass, so it does not yield the performance improvement one might wish for. Adding more fuel (and hence total mass) without any penalty in dry mass is a bigger win than one might expect. The higher effective fuel flow possible can translate to higher thrust - which helps with that additional total mass on liftoff, something that matters a lot when most of the thrust is simply vanishing into balancing gravity. Even a marginal change in thrust can have big knock-on effects. Overall the ultra-cold fuel is as close to a free lunch as one might wish for.
The trouble with C is that it is little more than a glorified assembler. The language lacks a significant number of structures that we take for granted in more modern languages, structures that make life easier when building more complex code.
But you can convince it to do things that are useful. Often with gruesome distorted bits of code that at first sight make little sense. The problem isn't that you do this, it is if these bits of code are not rigorously codified and documented, and wrapped up in a clear and maintainable manner. Macros can be one way of doing so (with a lot of care). But clear and enforced coding rules for their use is the key. There should never be a time when anyone looking at the code is in any doubt about what is being done and why. If the idiom is useful, I would expect it to be used everywhere as the standard mechanism for such condition handling. Personal idiosyncratic use of such features is the problem, and speaks of deep problems in the project.
The failure in the SSL code is not the use of the idiom, but rather that it is blandly written with no apparent consistency with the rest of the source. Just a one-time use of an idiom. That suggests almost zero care, and is deeply worrying as a pointer to the quality of the rest of the code.
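For what it's worth, codifying the idiom is not hard - a sketch of the sort of documented macro meant above (names invented; it assumes an `err` variable in scope, which would be part of the written coding rule). A duplicated macro line is far easier for a reviewer to spot than the infamous duplicated bare `goto fail;`:

```c
/* Standard condition-handling idiom: evaluate expr, and on any non-zero
   error jump to the function's single fail: label. Requires `int err`
   in scope. Use this everywhere; never write the goto by hand. */
#define REQUIRE_OK(expr)            \
    do {                            \
        if ((err = (expr)) != 0)    \
            goto fail;              \
    } while (0)

static int step_ok(void)  { return 0; }
static int step_bad(void) { return -1; }

static int handshake(void)
{
    int err = 0;
    REQUIRE_OK(step_ok());
    REQUIRE_OK(step_bad());   /* fails here; control jumps to fail */
    REQUIRE_OK(step_ok());    /* never reached */
fail:
    return err;
}
```

The do/while(0) wrapper is the standard trick that makes the macro behave as a single statement after an if.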
The problem with the latest Falcon 9 and its fuels is that they are now using much colder than normal fuels. Both the LOX and RP-1 are chilled down significantly colder than in other boosters. That allows the same tank structure to hold more fuel, and thus extend lift capability. Quite significantly, it turns out. The downside is that keeping the fuel cold is a much more critical and difficult task than it has been in the past. They have to load it much later in the countdown, and there is much less wiggle room with delays.
The other sad truth is that both the clinicians and police seem to be utterly naive. How do they think they were caught? In an age where the popular press is full of how your every move can be tracked with your phone, exactly what logs do they think might be kept about their accessing of confidential records? Logs that won't go away, and can be reviewed way after the event if there is the need.
For all the hand-wringing about MMH, one should remember that the venerable F-16 powers its APU with MMH. The ground crew need to refuel the planes if there has been an in-flight problem where the APU is needed. And accidents do happen when refuelling. Apparently MMH bleaches shirts and stings eyes and skin. The result is a week of daily hospital check-ups.
Imagining that simply changing the rules in HR will fix an endemic multi-decade problem is naivety at its highest.
CSIRO has been a mess for decades. Something of a sacred cow, it has seen many governments resist the clear need to make sweeping changes and wipe out the deadwood. Much of the deadwood is at the senior levels anyway. A depressingly large part of CSIRO regard their most important task as ensuring that their department and jobs are not threatened.
The right answer would be to shut the entire thing down and start from scratch. But given that that is impossible, significant structural change is needed.
The insane matrix-management capability-silo structure inflicted on them in one round of restructuring essentially crippled CSIRO's ability to do anything useful. Whether a new round can do any better is hard to say. History says no, but if there is seriously strong leadership, and someone who isn't managing by just parroting the latest idiotic fad, there might be some hope.
The entrenched interests at CSIRO will kick up, and you can be sure they will look for as much press as they can, and cast the incumbent government as the villain.
Who doesn't remember sketching such inventions in their exercise books during a really dull afternoon at school? Sure, once you reach puberty idle thoughts turn to more base topics. But as a seven year old such inventions were part of life. I mean, there was this fabulous documentary on TV all about such ideas, let me think, yes, it was called Thunderbirds.
One of the problems with the criticisms of the design of car systems is that they don't fit the mindset of the car engineers, and they place a model over the car that doesn't actually exist in computers either.
Last I saw your average PC was just about as open a trainwreck as the cars we are criticising. There are a huge number of separate processors, many interconnection buses, and zero security. A PC typically has a number of high speed buses (SATA for a start) talking to subsystems with their own embedded operating systems. Then there are the slow buses for trivial stuff (USB 1.1 devices) and faster USB for things like WiFi and Bluetooth. Every one of these device controllers has embedded processors, many with subvertable hardware, and known attack vectors.
I don't hear pious moaning that it should be trivial to add firewalls to all the buses inside a PC. Yet it is essentially the same problem. There are hacks that can pwn a hard disk drive (many of which run, say, three separate ARM processors and a full multitasking OS). Not to mention hacks that can subvert your Ethernet controller or WiFi controller to take over your PC. We all know not to plug an unknown USB device into a PC - but I bet that is a rule more honoured in the breach. It isn't trivial autorun exploits we have to defend against now.
Yes, car system security is a big deal. But don't pretend that somehow the mainstream computer industry has trod these tracks long ago and it is the car engineers that are dolts. Everything is built to a price, and when there isn't a clear driver for change, change doesn't happen.
The care taken in car systems where the issues are understood makes the mainstream computer industry look like a bunch of idiots blindly walking into walls. These are hard real-time systems, and they are tested and simulated to clock-edge and instruction-boundary precision. But as with so many stories of security in the history of computing, nobody even thought it was an issue. (Like the Morris worm, where the first messages that went out made the point along the lines of: "we all knew this was possible, we just didn't think anyone would be stupid enough to do it.")
If you have enough time and money - sure.
But we don't.
If we had some bacon we could have some bacon and eggs - if we had some eggs.
We don't have the time, the money, or the required number of skilled people. Nor will we ever. It is the nature of the world. So it simply isn't a solution by itself.
So you need better tools. Researching better tools is probably a good idea. Researching better tools seems to have provided some useful benefits in the past. It might be a good policy to stay on it. Rust may not be the answer, but C++ most assuredly isn't, and nor is demanding impossible funding and time.
A segv is when you are lucky, and the access went somewhere protected. Program crashes right there and you get some clue as to why. A buffer overrun is when you run into the next variable allocated in memory and no protection violation is caught. Your program doesn't crash, behaves badly (anything up to your system being pwned) and you never know.
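The silent case is easy to demonstrate - a contrived sketch where the overrun lands in the adjacent struct member instead of unmapped memory, so nothing faults and the program carries on with corrupted state (this is undefined behaviour, which is exactly the point):

```c
#include <string.h>

struct account {
    char buf[8];
    int  is_admin;   /* sits right after buf in this layout */
};

/* Write 12 bytes into an 8-byte field: no segv on a typical layout,
   is_admin is silently clobbered. Returns what is_admin ended up as. */
static int overrun_demo(void)
{
    struct account a = { "", 0 };
    memcpy(a.buf, "AAAAAAAAAAAA", 12);
    return a.is_admin;
}
```

That returned value is now attacker-controlled data in a privilege flag, and no crash ever told you.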
An indexing exception is what you expect from any sensible language. A segv tells you that the language or its implementation has a bug. Any language that allows you to generate a segv is insecure. It is all well and good to have the nice new abstractions. But why then does the language retain the insecure ones? If they are bad practice, remove them.
Last I saw C++ did not support garbage collection (they pulled it from the spec.) Dynamic memory management is not the same as automatic memory management. C++ still provides all the tools you need to utterly subvert the memory abstractions - especially when you start calling libraries that were not written with the same abstractions. You both can and must be able to find and manufacture naked pointers and use them. Once the language allows this you are dead.
Pointers are not the problem. It is any language that allows pointers to be manufactured or modified directly by user code that is the problem. Languages where you write your own loop control and indexing are part of this. That is where the buffer overflows, stack smashing, and a large number of other amusing problems come from.
Any language where type casting is allowed, or that supports pointer arithmetic, is intrinsically insecure. It matters not that C++ has some nicer abstractions available. You can still write utterly insecure code in it. Until the language prohibits insecure code, its security is only a matter of coding with a set of rules next to you. And you can do that in assembler. C++ remains syntactic sugar over the top of C for any questions of security. Unless the language stops you writing insecure code, it does not support security intrinsically.
If you want high performance numeric code you want a language that allows you to specify the mathematics of what you are doing properly, and then allows you to identify the parts where there are issues worth addressing (preferably with pragma like hints.) Modern Fortrans are actually pretty good at this. They still allow insecure code to be written, but it would not be a huge stretch to modify them to remove most of the old insecure legacy bits. Coding with forall and where clauses can capture the mathematics better, allow for better optimisation, and does not require indexes and pointers to be exposed.
"The launch is also taking place in California, rather than Florida. This means the rocket can't do the most efficient eastward route,"
No. It is launching a polar orbiting satellite. An eastward route is not more efficient for a polar orbit; in fact it is impossible to reach polar orbit insertion launching to the east. The whole point is that it MUST launch to the west in order to wipe off the entire rotational speed of the Earth. You need the orbital motion to go over the poles, not over the equator. It launches to the west from the west coast for the same reason eastward launches go from the east coast: to avoid flying over populated areas.
"Which begs the question why is SpaceX doing it?"
"begging the question" has nothing to do with asking a question in response to something. It means to assume the answer to the question. It is a contraction of "beggaring the question". "Begets" the question perhaps, but "begs", is just plain wrong.
The article pretty much fails to take account of energy.
Given an arbitrary lump of matter floating about in space - say some planetoid - there is only going to be a limited amount of recoverable useful energy available to it. That energy is what will limit the number of useful droids the exponential manufacturing can create. And you need to subtract the energy needed to mine that energy. Unless you have some form of matter converter to create energy directly from matter - the existence of which would be a game changer that would make droid armies the least of your plot worries.
Also, an asteroid travelling at speed is exactly as efficient as an energy beam. Both carry energy. It is all well and good to point out that a large rock travelling at relativistic speeds carries insane energy. But how did it get to those speeds? The energy it carries has to be imparted to it by something, and the energy must balance. If your rock carries 100 zillion joules of energy because it is moving at 0.9c, you need technology that can usefully direct at least 100 zillion joules into it. Why not just deliver that 100 zillion joules directly? It will have the same effect.
No, it is clear that the vulnerability was introduced into the source, it wasn't added as a hack to a binary image. The clue is in the password string. It is one of two things.
1. An intentionally coded backdoor with a password deliberately made to look like a legitimate printf format, so that simple strings analysis of the binary would not suggest it was anything special to any potential attacker.
1a. Actually is a legitimate printf format string that has been reused for an intentionally coded backdoor.
2. It is a legitimate printf format, and someone has tweaked the source code to make it work as a backdoor password by introducing a small but critical flaw in the program.
The difference is that option 1 should show up in a code review. Option 2 may be very hard to pick up. Languages like C and C++ contain a great many ways of burying such exploits in ways that take considerable care and expertise to notice, let alone figure out. Indeed both languages seem to encourage coding habits that make such things hard to detect.
It could be as simple as an extra * in the right place, or the difference between 1 and I in a carefully chosen spot.
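An entirely contrived sketch of the 1-versus-l trick - a reviewer skimming this can easily read the two flags as the same variable, and the function quietly accepts any password:

```c
#include <string.h>

/* Underhanded-C-style backdoor: okl (lowercase L) is a constant that
   looks almost identical to ok1 (digit one). The return statement reads
   like it returns the comparison result, but returns the constant. */
static int password_ok(const char *supplied, const char *stored)
{
    int okl = 1;   /* lowercase L: always "yes" */
    int ok1 = 0;   /* digit one: the real comparison result */
    if (strcmp(supplied, stored) == 0)
        ok1 = 1;
    return okl;    /* reads as ok1 at a glance */
}
```

In a sane codebase a compiler warning about the unused `ok1` would give the game away, which is exactly why such tricks target projects with sloppy, warning-ridden builds.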
Exactly - there is existing history for security breaches that are deliberately hard to pick in code reviews, and when very well done are plausibly deniable as a simple slip of the keyboard, and not actually done with malicious intent.
Now in its eighth year - http://www.underhanded-c.org/
The point was - the US isn't going to stop the Paris attacks. France might, but the US won't. Hillary allowing the FBI to decrypt US communications does not help stop ISIS wreaking havoc half way across the planet, despite the implication that it does. Indeed, they don't need to use encryption. Like I wrote, a note passed hand to hand will do. Or if they really are worried, a one-time pad, either for the note, or for an electronic communication.
It's pretty clear the encryption they are worried about is communications. Data on disks is a sideline in comparison. Next, although everyone talks ISIS, the reality is - and the FBI and the rest well understand - that they have just as many threats from homegrown Christian or just plain nutter terrorists as external ones. This is, and always will be, about the local population. It isn't about stopping the next Paris attack.
We already have the ironic spectacle of one part of the government inventing and popularising a secure and untraceable communication system to further its operations, and another spending great effort to subvert it again.
In the end, real terrorists resort to notes passed from hand to hand, and one time pads. No Manhattan project can solve a one-time-pad. Demands for weakened or backdoor'ed encryption are a solution to a problem that only uses existing encryption because of convenience. If it is not possible to use common encrypted channels operationally, terrorists simply move to other methods. Methods for which current meta-data analysis probably have less traction - making the job of the security agencies harder, rather than easier.
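The one-time pad really is that simple, which is the whole point - a sketch (key generation and distribution, the actual hard part, deliberately omitted):

```c
#include <stddef.h>

/* One-time pad: XOR each byte with a truly random key byte, key used
   once and as long as the message. Information-theoretically unbreakable,
   so no backdoored standard crypto, and no Manhattan project, touches it.
   The same call both encrypts and decrypts. */
static void otp_xor(const unsigned char *in, const unsigned char *key,
                    unsigned char *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ key[i];
}
```

All the operational difficulty is in generating genuinely random key material and getting it to the other end exactly once - which is precisely the tradecraft real agents and terrorists have always practised.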
Been watching too many movies. The problem with diamonds is that they are implicitly an illicit commodity. Outside of the controlled market of De Beers - where the prices are artificially high - diamonds are dirty from the outset. So you are already dealing with other criminals, or shady-to-black markets. This is not a sensible place to be if you are trying to be discreet and move money about to fund anti-state activities. What is needed is proper first-class fungible assets. You can buy anything with a fistful of Euros or Dollars. The first thing you need to do with diamonds is to try to convert them into Euros or Dollars. It is at this point you need a market that wishes to buy your diamonds - for which you can expect a highly discounted price, and you will be selling them to an untrusted, implicitly criminal, buyer. Assuming it isn't a sting run by the local security agencies. A sack of greasy Euros won't attract attention on the black market. Diamonds most certainly would.
"LOX/H2 or LOX/KEROSENE are pretty ok."
You are welcome to drink a cup of either.
The canonical book on liquid rocket fuels is Ignition! An Informal History of Liquid Rocket Propellants by John D. Clark.
If you want insane rocket fuel oxidisers - try Chlorine trifluoride. It will burn asbestos, sand and concrete. Glass ignites on contact. It isn't clear how you would die if you had some spilled on you, it would be a matter of which of a number of horrific and painful mechanisms got you first.
One of the three remaining, actually. All three are on display at various NASA facilities. They are a mix of flight and test-article parts. Apollo 18 and 19 were cancelled, but the hardware was all ready. Skylab used the first two stages of a Saturn V to fly, the Skylab itself being a refitted third stage. Add in the Dynamic Test Article (which, whilst not considered flightworthy, was built identically to the flying ones) and you get one at KSC, one at JSC, and one at the US Space & Rocket Center.
A visit to building 30 at the JSC in Houston is also an absolute must. You walk up the same stairs that Gene Kranz and his fellow mission controllers took each day. They have refitted one of the control rooms back with all the Apollo era gear. Little beats seeing that.
One of the most common studio headphones is the Sony MDR 7506. Whilst not the absolute best sounding, or the cheapest, they are pretty good, and a known good standard that is still made, robust, and has a service backup via Sony's pro distribution and service network.
But for artists to monitor sound as they perform there are many other phones commonly used. One of the key points about these is that they don't leak sound back into the recording.
I really hope you mean Johann Strauss and the other members of his family. Richard Strauss has no relationship to them, and did not compose sickly sweet waltzes and dance music for polite Viennese society. Richard Strauss wrote seriously good, powerful music. Sunrise from Also sprach Zarathustra is of course very well known from 2001. But Salome, Elektra, his Four Last Songs - just to pick a few high points. Richard Strauss was a giant.
"I can't believe that many people are going to be swayed into buying one just because it was in a movie... "
What people forget is that it wasn't all that long ago that Aston Martin were really hurting, selling not all that many cars at all, and those they did sell were heavily based upon Jag bits. The cars were not very good, and people with money went elsewhere. Ford poured silly money into turning the company around, but even then Aston had very little visibility. It doesn't matter that 99.99% of the movie goers will never buy an Aston. The remainder represent a very tidy fraction of the very few who do buy Astons. Enough that, for a very niche maker of very low production volume cars, it makes perfect sense to build on the Bond franchise like this. There is almost no equivalent to Bond.
If you want, the boat he sailed in Casino Royale is currently for sale. It will cost rather more than an Aston.