* Posts by BobC

20 posts • joined 23 Nov 2009

Sorry, we haven't ACLU what happened in sealed 'Facebook decryption' case, but let's find out

BobC

ACLU as "a clue"

Perhaps I'm the only one not to have previously seen the above spin on ACLU being pronounced as "a clue". I'm ashamed to admit my brain went briefly into lock-up as I repeatedly re-read the headline until I slowed down enough to interpret it correctly. Sigh.

In any event, this is more evidence for how much I rely on El Reg to keep my word-play wits sharp, and to fill in the gaps when needed.

30-up: You know what? Those really weren't the days

BobC

Gray Hair Everywhere...

So nice to see my fellow fossils reminiscing. Thought I'd toss in my own $0.02.

I wrote my first program while in high school in 1972. On a "retired in place" IBM 1404. In FORTRAN IV. Using punched cards (still have a couple of the decks). Lots of blinkenlights.

Joined the US Navy after high school. Learned to program the venerable CP-642B. In machine code. By pushing front panel buttons. To. Load. Each. Instruction. Until we wrote a program to access the paper tape reader. Then we were able to enter machine code on paper tape. The punchings from the paper tape fell into the "Bit Bucket". S'truth!

Got my first PC in 1979, an Apple ][ Plus. With the Language Card and UCSD Pascal. And a 300 baud Hayes modem. Soon upgraded the 5.25" floppies to massive 8" drives. I may have filled one side of an 8" floppy. Maybe.

Was so impressed with Pascal (compared to BASIC) I left the Navy in 1981 and went to UCSD. Hacked on 4.1BSD, primarily on the brand-new networking stack, particularly sockets. In minor ways I helped convince folks to skip 4.2BSD and go straight to 4.3BSD. I also got to work on quorum-based distributed filesystems. Spent lots of time on the Arpanet (before and during the Milnet/Internet split). Graduated in 1986.

Entered industry and immediately specialized in embedded real-time systems: Instruments, sensors, and control systems. Generally avoided systems that interacted with humans, focusing more on M2M. Devices for things like nuclear power plants, nuclear subs, aircraft, UAVs, ultra-high-speed digital video cameras, satellites, and much more. 8-bits at the start, multi-core 64-bits now. Boxes are smaller and don't get so hot any more. Haven't burned a finger in over a decade.

Wonderful toys, each and every one of them. Couldn't imagine having more fun. And I get paid to do it!

My only career goal has been to stay out of management. Mostly successful at that, but not always.

I'll turn 62 next month, and I have no plans to retire. I'll keep doing this until they pry the keyboard from my cold, dead hands.

Do not adjust your set, er, browser: This is our new page-one design

BobC

Some preferences for reading El Reg...

1. No stock photos, please. Only use photos if El Reg or the story source takes them. Consider providing an "opt-out" for all images on the front page (keeping them only in the full story, or shrinking them to icon-size).

2. No space wasted on white space: Use only the minimum needed to keep stories apart (one or two ems should do, perhaps with a thin line).

Your real "value added" isn't just what you originate or gather (many do that), but how you share it, including both spin and snark. I work in an area about as far removed from IT as one can get, yet I continue to read El Reg for the 10% of stories that matter to me directly, and the simple delight of reading the El Reg presentation of the other 90%.

In particular, I think El Reg should revive and expand its space program.

HPE supercomputer is still crunching numbers in space after 340 days

BobC

Using COTS instead of rad-hard devices.

Electronic components hardened to tolerate radiation exposure are unbelievably expensive. Even cheap "rad-hard" parts can easily cost 20x their commercial relatives. And 1000x is not at all uncommon!

There have been immense efforts to find ways to make COTS (commercial off-the-shelf) parts and equipment better tolerate the rigors of use in space. This has been going on ever since the start of the world's space programs, especially after the Van Allen radiation belts were discovered in 1958.

I was fortunate to have participated in one small project in 1999-2000 to design dirt-cheap avionics for a small set of tiny (1 kg) disposable short-lived satellites. I'll describe a few highlights of that process.

First, it is important to understand how radiation affects electronics. There are two basic kinds of damage radiation can cause: Bit-flips and punch-throughs (I'm intentionally not using the technical terms for such faults). First, bit-flips: If you have ECC (error-correcting) memory, which many conventional servers do, bit-flips there can be automatically detected and fixed. However, if a bit-flip occurs in bulk logic or CPU registers, a software fault can result. The "fix" here is to have at least 3 processors running identical code, and then perform multi-way comparison "voting" of the results. If a bit-flip is found or suspected, a system reset will generally clear it.
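
To make the voting idea concrete, here is a minimal sketch in C of a 2-of-3 majority compare. The names and the CRC-of-outputs comparison are my own illustration, not the actual flight code:

    #include <stdint.h>

    /* Output of one control-loop iteration, as computed by one of the
     * three processors; a CRC over the full output block stands in for
     * comparing every value individually.                              */
    typedef struct {
        uint32_t crc;
    } loop_result_t;

    /* Hypothetical hook that pulses the reset line of processor 'id'. */
    extern void reset_cpu(int id);

    /* 2-of-3 vote: returns the index of a processor that agrees with at
     * least one other, or -1 if all three disagree.  Any dissenter has
     * probably taken a bit-flip, so it gets reset to rejoin the group. */
    static int vote(const loop_result_t r[3])
    {
        if (r[0].crc == r[1].crc) {
            if (r[2].crc != r[0].crc)
                reset_cpu(2);
            return 0;
        }
        if (r[0].crc == r[2].crc) { reset_cpu(1); return 0; }
        if (r[1].crc == r[2].crc) { reset_cpu(0); return 1; }

        /* Three-way disagreement: reset the lot and start over. */
        for (int id = 0; id < 3; id++)
            reset_cpu(id);
        return -1;
    }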

Then there are the punch-throughs, where radiation creates an ionized path through multiple silicon layers that becomes a short-circuit between power and ground, quickly leading to overheating and the Release of the Sacred Black Smoke. The fix here is to add current monitoring to each major chip (especially the MCU) and also to collections of smaller chips. This circuitry is analog, which is inherently less sensitive to radiation than digital circuits. When an abnormal current spike is detected, the power supply will be temporarily turned off long enough to let the ionized area recombine (20ms-100ms), after which power may then be safely restored and the system restarted.
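
For flavour, the current-monitor side can be sketched like this (hypothetical hook names and thresholds, nothing from the real design):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical hooks into the analog current sensor and load switch. */
    extern uint32_t read_supply_current_ma(void);
    extern void     set_power_rail(bool on);
    extern void     delay_ms(uint32_t ms);

    #define LATCHUP_TRIP_MA 750u   /* comfortably above worst-case normal draw     */
    #define RECOMBINE_MS    100u   /* power-off time for the ionized path to clear */

    /* Runs on the supervisory (analog-monitored) side of the design: on a
     * current spike, drop the rail, wait for the silicon to recover, then
     * restore power and let the processor cold-boot.                      */
    void latchup_supervisor(void)
    {
        for (;;) {
            if (read_supply_current_ma() > LATCHUP_TRIP_MA) {
                set_power_rail(false);
                delay_ms(RECOMBINE_MS);
                set_power_rail(true);
            }
        }
    }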

Second, we must know the specific kinds of radiation we need to worry about. In LEO (Low Earth Orbit), where our satellites would be, our biggest concern was Cosmic Rays, particles racing at near light-speed with immense energies, easily capable of creating punch-throughs. (The Van Allen belts shield LEO from most other radiation.) We also need to worry about less energetic radiation, but the dose there is less than that of a dental X-ray.

With that information in hand, next came system design and part selection. Since part selection influences the design (and vice-versa), these phases occur in parallel. However, CPU selection came first, since so many other parts depend on the specific CPU being used.

Here is where a little bit of sleuthing saved us tons of time and money. We first built a list of all the rad-hard processors in production, then looked at their certification documents to learn on which semiconductor production lines they were being produced. We then looked to see what other components were also produced on those lines, and checked if any of them were commercial microprocessors.

We lucked out, and found one processor that not only had great odds of being what we called "rad-hard-ish" (soon shortened to "radish"), but also met all our other mission needs! We did a quick system circuit design, and found sources for most of our other chips that were also "radish". We had further luck when it turned out half of them were available from our processor vendor!

Then we got stupid-lucky when the vendor's eval board for that processor also included many of those parts. Amazingly good fortune. Never before or since have I worked on a project having so much luck.

Still, having parts we hoped were "radish" didn't mean they actually were. We had to do some real-world radiation testing. Cosmic Rays were the only show-stopper: Unfortunately, science has yet to find a way to create Cosmic Rays on Earth! Fortunately, heavy ions accelerated to extreme speeds can serve as stand-ins for Cosmic Rays. But they can't easily pass through the plastic chip package, so we had to remove the top (a process called "de-lidding") to expose the IC beneath.

Then we had to find a source of fast, heavy ions. Which Brookhaven National Laboratory on Long Island happens to possess in its Tandem Van de Graaff facility (https://en.wikipedia.org/wiki/Tandem_Van_de_Graaff). We were AGAIN fantastically lucky to arrange some "piggy-back" time on another company's experiments so we could put our de-lidded eval boards into the vacuum chamber for exposure to the beam. Unfortunately, this time was between 2 and 4 AM.

Whatever - we don't look gift horses in the mouth. Especially when we're having so much luck.

I wrote test software that exercised all parts of the board and exchanged its results with an identical eval board that was running outside the beam. Whenever the results differed (meaning a bit-flip error was detected), both processors would be reset (to keep them in sync). We also monitored the power used by each eval board, and briefly interrupted power when the current consumption differed by a specific margin.

The tests showed the processor wasn't very rad-hard. In fact, it was kind of marginal, despite being far better than any other COTS processor we were aware of. Statistically, in the worst case we could expect to see a Cosmic Ray hit no more than once per second over the duration of the mission. Our software application needed to complete its main loop AT LEAST once every second, and in the worst case took 600 ms to run. But a power trip took 100 ms, and a reboot took 500 ms. We were 200 ms short! Missing even a single processing loop iteration could cause the satellite to lose critical information, enough to jeopardize the mission.

All was not lost! I was able to use a bunch of embedded programming tricks to get the cold boot time down to less than 100 ms. The first and most important "trick" was to eliminate the operating system and program to the "bare metal": I wrote a nano-RTOS that provided only the few OS services the application needed.
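
A "nano-RTOS" in this sense is barely an OS at all; stripped to its essence it is something like the following (illustrative names only, not the flight code):

    #include <stdint.h>

    /* Hypothetical board-support hooks; on the real hardware each is just
     * a handful of register writes, which is why the cold boot can come
     * in under 100 ms.                                                   */
    extern void clocks_and_pins_init(void);
    extern void sensors_read(void);
    extern void control_laws_run(void);
    extern void telemetry_send(void);
    extern void kick_watchdog(void);
    extern void sleep_until_tick(void);   /* wait for the next timer interrupt */

    /* Entered straight from the reset vector: no OS, no filesystem, no
     * driver probing -- just initialize the hardware and start looping. */
    void main_bare_metal(void)
    {
        clocks_and_pins_init();

        for (;;) {
            sensors_read();
            control_laws_run();
            telemetry_send();
            kick_watchdog();
            sleep_until_tick();
        }
    }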

When the PCBs were made, the name "Radish" was prominently displayed in the top copper layer. We chose to keep the source of the name confidential, and invented an alternate history for it.

Then we found we had badly overshot our weight/volume budget (which ALWAYS happens during satellite design), and wouldn't have room for three instances of the processor board. A small hardware and software change allowed us to go with just a single processor board with only a very small risk increase for the mission. Yes, we got lucky yet again.

I forgot to mention the project had even more outrageous luck right from the start: We had a free ride to space! We were to be ejected from a Russian resupply vehicle after it left the space station.

Unfortunately, the space station involved was Mir (not ISS), and Mir was deactivated early when Russia was encouraged to shift all its focus to the ISS. The US frowned on "free" rides to the ISS, and was certainly not going to allow any uncertified micro-satellites developed by a tiny team on an infinitesimal budget (compared to "real" satellites) anywhere near the ISS.

We lost our ride just before we started building the prototype satellite, so not much hardware existed when our ride (and thus the project) was canceled. I still have a bunch of those eval boards in my closet: I can't seem to let them go!

It's been 18 years, and I wonder if it would be worth reviving the project and asking Elon Musk or Jeff Bezos for a ride...

Anyhow, I'm not at all surprised a massively parallel COTS computer would endure in LEO.

Mobileye's autonomous cars are heading to California. But they're not going to kill anyone. At least not on purpose

BobC

Avionics is not a good comparison.

A prior comment stated that autonomous vehicle software should meet the same standards as avionics. This opinion is wrong on at least two separate levels.

First, in the US, the FAA has two certification paths for avionics hardware and software:

1. Prove it is accurate and reliable (typically via formal methods), then test enough to validate that proof.

2. Test the hell out of it, at a level 5x to 10x that of the more formal path.

Small companies are pretty much forced to use the second path more than the first. Where I worked, we relied on the second path and aspired to the first. We had awesome lab and flight test regimens that the FAA frequently referred to as "best practices" for the second path.

Second, the risk of death due to an avionics failure (per incident) is massively higher than it is in cars, especially given the ever-increasing level of passenger safety measures present in modern vehicles. The fact that aviation death counts are so low is due more to the relatively tiny number of vehicles involved compared to cars (on the order of ~100K cars to each plane).

Autopilots are fundamentally simpler than autonomous driving: Fully functional autopilots have existed for well over half a century (the L-1011 was the first regular commercial aircraft to do an entire flight autonomously, including takeoff and landing). The primary reason for this achievement is the large distances between planes. Mid-air collisions are vanishingly rare outside of air shows.

The massively greater complexity of the driving environment (separate from the vehicle itself) forces the use of statistical methods (machine learning), rather than relying solely on formal, provable rules. If it isn't clear already, this means that autonomous driving systems will be forced to use the second path to certification: Exhaustive testing.

Most of that testing must occur in the real world, not in a simulator, because we simply don't yet know how to construct a good enough simulator. The simulator will always miss things that exist in the real world. One goal of ALL real-world self-driving tests MUST be to gather data for use by future simulators! Just because simulators are hard is no excuse to avoid building them. We just can't rely on them alone.

That said, all such on-road tests must be done with a highly trained technician behind the wheel. It is VERY tough to remain vigilant while monitoring an autonomous system. Been there, done that, got the T-shirt, hated every minute. In my case it was operating a military nuclear reactor. Boring as hell. Terribly unforgiving of mistakes. Yet it is done every minute of the day with extreme safety.

I'd focus on the test drivers more than the vehicles or their technology. Get that right, and we'll earn the trust needed to improve the technology under real world conditions.

Donald Trump jumps on anti-tech bandwagon, gets everything wrong

BobC

SOSDD

Same Old Shit, Different Day.

I'm a solid Centrist. I favor minimal taxes, but as high as needed to fully fund government commitments. I also favor parts of the Liberal Agenda, but only if the gains are solid AND we can afford to pay for them.

I consciously try my best to avoid the "Echo Chambers" of the Extreme Right and the Extreme Left. Both make far more errors than valid points.

But the PoTUS Tweet stream is beneath dignity on all levels. A true travesty, no matter which side of the aisle you are on, especially so for me in the middle.

The first and finest service Twitter could do for the USA would be to delete @realDonaldTrump.

'WHAT THE F*CK IS GOING ON?' Linus Torvalds explodes at Intel spinning Spectre fix as a security feature

BobC

Why we need faster MEMORY!

Our deep, many-layered memory hierarchies, together with the branch prediction and speculative execution on modern processors, exist simply because the CPU would otherwise sit idle over 50% of the time waiting for data to arrive. CPU cycles have become staggeringly cheap, primarily due to the deep and wide processor pipelines in current architectures.
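
You can see the imbalance for yourself with a crude pointer-chasing test (a sketch, not a rigorous benchmark): dependent loads that miss the cache versus the same number of register-only operations.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1u << 24)   /* 16M entries: much larger than any cache */

    int main(void)
    {
        size_t *next = malloc(N * sizeof *next);
        if (!next) return 1;

        /* Build a random permutation so every load depends on the one
         * before it and the prefetcher can't help: most steps miss.   */
        for (size_t i = 0; i < N; i++) next[i] = i;
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1);
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        clock_t t0 = clock();
        size_t p = 0;
        for (size_t i = 0; i < N; i++) p = next[p];   /* memory-bound   */
        clock_t t1 = clock();
        size_t q = 0;
        for (size_t i = 0; i < N; i++) q += i ^ p;    /* register-bound */
        clock_t t2 = clock();

        printf("pointer chase: %.2f s   register ALU: %.2f s   (%zu %zu)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, p, q);
        free(next);
        return 0;
    }

On a typical desktop the chase loop is dramatically slower per iteration even though it does less arithmetic; that gap is the idle time that caches, branch prediction and speculation exist to hide.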

The central problem is that transistors used for storage (cache and RAM) are far more "expensive" than transistors used for logic. A billion transistors can give you a whopping CPU, but not really all that much fast storage. This is why additional RAM architectures are needed, ones that use fewer transistors and take up less space while yielding CPU-level speeds.

If all of RAM could somehow be accessed at the speed of a register and at the cost of a spinning disk, then all CPUs would instantly become vastly simpler. That was the inspiration for RISC, when CISC processors failed to keep memory buses saturated at a time when transistors were still quite expensive; Cache made more sense than logic.

This is also a motivation for moving processors into the RAM itself, rather than hanging ever more memory onto ever more complex buses connected to many-core/many-thread CPUs. Why not put cores right in the middle of every DRAM chip? That one change would greatly reduce off-chip accesses, the major cause of speed loss. Let the DRAM be dual-ported, with one interface optimized for CPU access, and the other optimized for streaming to/from storage and other peripherals.

This is yet another problem illustrating just how much trouble we still have building things this complex.

It's time to "add simplicity".

Death to the North Bridge!

Another way to avoid eye contact: 4G on the Tube expected 'in 2019'

BobC

I'm Too Old: I Thought The Headline Meant 4G in My TV.

Sigh. The "boob tube" hasn't had tubes for ages.

Arm isn't saying IoT firmware sucks but it's writing a free secure BIOS for device makers

BobC

IoT Ain't There Without A VM.

I'm a real-time/embedded developer who was brought on-board to tackle late-breaking cybersec issues on a new military system that was due to be delivered ASAP.

We weren't allowed to trust our external routers and firewalls, so we had to configure local firewalls. No problem. We created fully stateful firewalls that understood our M2M protocols. Again, no big problem. We fuzzed our firewalls for months of machine time. No problems. We thought we had things locked down.
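
"Fully stateful" here means the firewall tracks each connection through the M2M protocol's legal message sequence and drops anything out of order, however well-formed. A toy version (invented message and state names, nothing from the real system):

    #include <stdbool.h>
    #include <stdint.h>

    /* Invented M2M message types, for illustration only. */
    enum msg_type { MSG_HELLO, MSG_SESSION_ACK, MSG_DATA, MSG_BYE };

    /* Per-peer connection state tracked by the firewall. */
    enum conn_state { ST_CLOSED, ST_HELLO_SEEN, ST_ESTABLISHED };

    struct conn {
        enum conn_state state;
    };

    /* Return true only if this message is legal in the connection's current
     * state; anything out of sequence is dropped, no matter how well-formed. */
    bool firewall_accept(struct conn *c, enum msg_type t, uint16_t len)
    {
        if (len > 512)                      /* the protocol's hard size limit */
            return false;

        switch (c->state) {
        case ST_CLOSED:
            if (t != MSG_HELLO) return false;
            c->state = ST_HELLO_SEEN;
            return true;
        case ST_HELLO_SEEN:
            if (t != MSG_SESSION_ACK) return false;
            c->state = ST_ESTABLISHED;
            return true;
        case ST_ESTABLISHED:
            if (t == MSG_DATA) return true;
            if (t == MSG_BYE)  { c->state = ST_CLOSED; return true; }
            return false;
        }
        return false;
    }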

Then I learned the local firewall couldn't share ANYTHING with the local application. Which, as it turns out, was the very requirement that had pushed for the external firewall boxes in the first place. Houston, we have a problem!

We had a low-power multi-core x86 CPU, so we added a thin hypervisor, partitioned the memory and hardware, put the firewall and app on separate cores, and begged permission to ship what we had. Permission was granted, but only after many tedious hours of phone conferences with the Powers that Be.

Turned out to be an elegant and powerful solution. One I think should be generalized to all IoT, in that the application must not be permitted any direct network access and must run in a VM. A second VM should contain the firewall and the network hardware. The hypervisor should be as dumb as possible.
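
As a rough illustration of that split (made-up structures, not any particular hypervisor's configuration format), the static partitioning amounts to something like:

    #include <stdbool.h>
    #include <stdint.h>

    /* One guest partition pinned to one core, with a fixed slice of RAM and
     * an explicit list of the hardware it may touch.                       */
    struct partition {
        const char *name;
        int         core;          /* dedicated CPU core                   */
        uintptr_t   ram_base;      /* start of its private memory window   */
        uint32_t    ram_bytes;
        bool        owns_nic;      /* direct access to the network device  */
        bool        owns_shm_link; /* access to the one shared ring buffer */
    };

    /* The application VM never sees the NIC; the firewall VM never sees the
     * application's peripherals.  They meet only at the shared ring.       */
    static const struct partition layout[] = {
        { "firewall", 0, 0x20000000, 64u  << 20, true,  true },
        { "app",      1, 0x24000000, 192u << 20, false, true },
    };

The point is that the application has no path to the network hardware at all: even if it is compromised, everything it sends still has to cross the firewall VM via the shared ring.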

So, when will ARM ship M-family processors with VM hardware & instruction support? Is TrustZone fully equivalent?

Secure microkernel in a KVM switch offers spy-grade app virtualization

BobC

After blitzing through the paper linked above, this looks mostly like a "virtual monitor" system combined with "smart" keyboard + mouse + clipboard sharing.

At its simplest, virtual monitors are commonly used in CCTV systems to map normally independent systems (each with its own monitor) onto a single monitor. This system goes a bit beyond that to 1) permit multiple desktops to overlap, and 2) extract individual windows from the desktops for display on the shared monitor (mainly to declutter the display by removing redundant desktop pixels and adding an identification overlay).

Nothing that users of X-windows systems haven't been used to for well over 30 years. And, like X-windows, the trick is sending the window meta-data along with the content (be that pixels or graphics primitives). This information is normally sent out-of-band, such as via a separate stream, but this new system has only KVM-like connections, and so instead must use embedded pixel data to encode and convey the metadata. (BTW: This data could be vulnerable to a MITM attack or Tempest-like snooping.)
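
To make the in-band idea concrete, here is a toy encoding of my own invention (not the scheme from the paper): pack a small window-metadata header into the low-order byte of the first few pixels of each frame.

    #include <stdint.h>
    #include <string.h>

    #define WIDTH 1920   /* pixels per scanline, 32 bits each */

    struct window_meta {
        uint16_t x, y, w, h;     /* where the window sits on its desktop */
        uint8_t  source_id;      /* which connected PC it came from      */
        uint8_t  classification; /* e.g. 0 = unclassified ... 3 = secret */
    };

    /* Hide one metadata byte in the low-order byte of each leading pixel.
     * A real system would add framing, a checksum and authentication --
     * precisely because, as noted above, in-band data invites tampering. */
    void encode_meta_scanline(uint32_t line[WIDTH], const struct window_meta *m)
    {
        uint8_t bytes[sizeof *m];
        memcpy(bytes, m, sizeof bytes);
        for (unsigned i = 0; i < sizeof bytes && i < WIDTH; i++)
            line[i] = (line[i] & 0xFFFFFF00u) | bytes[i];
    }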

Sharing a single keyboard, mouse and clipboard across multiple PCs has been done for many years in many ways, with Synergy currently being the best-known example. The Synergy protocol is straightforward, as is the data routing. On each PC, a thin shim is used to route the single physical KM to the appropriate PC's KM input layer.

So we are left with just a pair of protocols to process, plus some contextual rules governing the properties and restrictions of the overall functionality. Not really all that much, though the ability to handle KM inputs and do pixel-level video switching is needed; that's mainly simple hardware with simple drivers.

Taken as a whole, an OS isn't really needed at all. Not even a kernel: This could easily be run on bare hardware. But the need for the protocols to handle security restrictions demands that some of the code, specifically the clipboard code (and perhaps a security classification overlay), be executed in a trusted and protected manner. Not much of a kernel is required (the minimum needed to provide protection and separation of one shared agent and one agent for each connected system), so a small and proven minimalist kernel would seem to be just the ticket.

Place your bets: How long will 1TFLOPS HPE box last in space without proper rad hardening

BobC

It's really Cosmic, Ray.

On Earth we routinely simulate much of the space environment with one massively significant exception: Cosmic Rays, relativistic particles with extreme theory-breaking energies and unknown origin. We have some reasonable approximations that are a PITA to use at all, and impossible to use on whole systems, as they require de-lidding chips and exposing the naked silicon to heavy ion beams.

Cosmic Rays don't care about the Van Allen Belts or Earth's magnetic field. But, thankfully, they are filtered quite nicely by Earth's atmosphere, converting into cascades of other relativistic particles that include muons and pions. These particles themselves have vanishingly short lifetimes when observed in the lab, yet when coming from a Cosmic Ray cascade they manage to live long enough to reach the Earth's surface, all due to their startlingly high relativistic speeds.

Cosmic Rays are The Hulk of radiation, and since we have no clue how to make them on Earth, if you want to expose your equipment to Cosmic Rays, you need to send that equipment above the Earth's atmosphere.

And not far above it either! LEO does just fine.

Factories counter-punch Qualcomm in the gut as Apple eggs them on

BobC

Licensing as a Business

I'm in San Diego, home to QC, and have been closely watching the company for three decades.

For many reasons, QC consciously and explicitly chose to aggressively balance its fabless physical chip business against its patent portfolio. The mobile market was growing faster than QC could ever handle, and becoming a one-stop-shop for phone IP and premium chipsets would sustain QC's truly enormous R&D expenditures.

QC has created much of what makes today's mobile bleeding-edge possible. Not just the modem technology, but also all the other hardware and software needed to make it all work together as a seamless whole in the customer's hand. QC doesn't just sell parts and license patents, it provides entire solutions that help phone makers get the latest tech to market sooner.

It is important to note that QC pushes prices this hard mainly for its bleeding-edge tech. Their prior-generation and lower-end solutions are competitive enough on a cost basis to keep other players fighting on the margins. The main complaints against QC center on this practice: Charging through the ceiling for the latest tech in order to subsidize the other stuff.

Have you noticed the recent batch of mid-tier phones with QC chips? My own Moto G5 Plus is a splendid example of this, with its Snapdragon 625 providing 80% of the overall performance of a bleeding-edge phone for 1/3 the price. Why pay more?

I'll bet this is what REALLY pisses Apple off. Killing QC, and its entire approach to business, is the only way for Apple to maintain its insane margins while also moving into the mid-market.

This is a very intentional business practice on the part of QC, one that would instantly evaporate if their R&D ever fell behind. Every year they bet their entire future on their own continued ability to out-innovate the rest of the planet. And when they gain that edge, they cling to it using every available means.

From one perspective, QC's chip business is simply convenient packaging for their IP, more than it is a desire to sell silicon.

If Apple, or anyone else, was truly upset with QC's aggressive practices, they could, like any good Capitalist, choose to shop elsewhere. MediaTek and others make phone chipsets that are only slightly behind the bleeding-edge set by QC, and have less costly terms. But they are just parts, not complete solutions.

Did you know QC has more Android engineers than Google? QC does major R&D at all levels of phone tech. QC doesn't have more IOS engineers than Apple, but it has plenty of them too.

ALL first-tier phone makers repeatedly choose QC for their flagship phones. Because it's simply the best way to get the latest & greatest phone tech to market soonest.

Apple simply wants to pay less for it. And rather than compete with QC on an R&D basis, they instead attack it on a financial/legal basis.

Apple has made so much money shipping phones with QC chips that it seems insane for it to choose to do otherwise. So it clearly wants to "break" QC. And it needs to do so NOW, before QC IP gets established in 5G, and before the mid-tier hurts them any more.

This is not "really" about FRAND or similar issues: It's about killing QC before 5G. This is a fairly narrow window.

If you review history, this is not a new tactic. It seems QC attracts lawsuits whenever it prepares to roll out new IP. Right after the phone makers have a ton of cash from profits built on selling phones with QC chips.

Strange pattern, that.

QC has repeatedly proven itself to be the best at rapidly developing new phone tech and bringing it to market. Why isn't Apple, with its QUARTER TRILLION dollars in cash, funding R&D competition itself?

Why doesn't Apple simply buy QC and dictate the terms it wants?

Or, more to the point, why doesn't Apple simply pass on the latest QC tech and stick with older and less expensive solutions?

No, Apple has decided that the courts are the cheapest way for it to improve its bottom line.

It's that simple.

QC is not innocent in any of this: They are aggressive because they have done the R&D needed to stay in front of all competition, to set the bleeding edge. Their practices are illegal in many countries. But those countries don't create phone IP, and they don't make phone chips, so to QC they simply don't matter.

Look at what happened in India when QC chips were (briefly) banned: The entire industry revolted, and the ban quickly was replaced with a fine that was never paid. QC knows how to leverage its tech, its customers, and its entire market.

QC has encountered rough times before, and they've ALWAYS fought their way back by sustaining and leveraging their R&D prowess.

If I were QC, I'd make their next-gen chips available to everyone BUT Apple, and see what happens to Apple's suit.

Sure, QC will play hard in the courts. But they will also double-down on their R&D golden goose to get that next Golden Egg out there and into the hands of all the OTHER top-tier makers, probably right about when the iPhone 8 starts to ship. To make the iPhone 8 obsolete before it ships in volume.

Apple's tactics are strictly short-term: QC has the better long-term vision, if they can survive the short-term squeeze.

Linux 4.11 delayed for a week by NVMe glitches and 'oops fixes'

BobC

Re: It's a trap!

The whole reason I haven't done any Linux kernel contributions is the extreme abrasiveness of the process. Linus himself isn't at all bad, but his outbursts can match what the process itself feels like.

It's MUCH easier (and less traumatic) to make a bug report (with code, mainly for drivers) than to attempt a pull request.

Sure, I don't get my name on the commit. But I'd want that only if it were important for my CV. Which may happen someday, but not today. (For now, I can trace my bug report to the commit, which is good enough.)

Revealed: Blueprints to Google's AI FPU aka the Tensor Processing Unit

BobC

NVIDIA FTW?

Though I have great hopes for ASICs like the TPU, and for the many FPGA-based ANN accelerators, as well as for upcoming 1K+ core designs like Adapteva's, the bottom line is support for the common ANN packages and OpenCL.

In that regard, the GPU will reign supreme until one of the other hardware solutions achieves broad support. Only AMD and NVIDIA GPUs provide "serious" OpenCL support, and between the two, NVIDIA is preferred in the ANN world.

An important previously-mentioned caveat is that most of the ASIC and FPGA accelerators aren't much help during training. For now, you're still going to need CPU/GPU-based ANN support.

Speaking in Tech: Elon Musk and the AI apocalypse

BobC

Vegemite - FTW!

When I first visited Australia in '87 I was introduced to Vegemite by Aussies looking for a laugh. Much to my and everyone's surprise, it was love at first taste, triggering a deep craving I never knew I had. On my own I can go through a 220g jar in a week. It goes in everything!

Spotted: Bizarre SpaceX rocket-snatching machine that looks like it belongs on Robot Wars

BobC

Re: Let's look at what's there...

Geez, I saw the treads, but assumed they weren't mobile enough to permit the robot to get into position precisely enough. Clearly, they're what's needed to carry the load.

The marks on the barge deck indicate wheels are in use somewhere, but perhaps not on this robot, though there isn't anything else on the deck (during the photo, at least) that could make those marks.

Perhaps small (hidden) wheels to move and position the robot, with treads to move when loaded?

BobC

Let's look at what's there...

1. There are 4 pistons, which can only engage with the 4 landing legs on the Falcon 9 core.

2. There are exposed cables on the robot, so it's not for "hot" use: The core must at least be vented (perhaps a minute after touchdown).

3. The robot must be mobile. That may seem obvious from the umbilical, but its wheels (or treads) aren't visible. It likely moves very slowly.

Add it all up, and it seems the robot's purpose is to move freshly-landed cores.

But move them where? Why do this?

For the trip to port, it would seem best to have the core at the precise center of the barge to minimize combined pitch and roll motion. So the robot could be used to center-up a core after an off-center landing. But there have been no cores toppling over on the way home after an off-center landing, so while this use seems possible, it can't be the primary use of the robot.

As others have said, it makes sense to use a robot if another core is on its way to the barge. A robot is far cheaper than building (and managing, maintaining, operating) additional barges! Plus, landed cores are quite light: Shifting them to the end of the barge won't significantly affect its trim, and I suspect the barge has floodable compartments to manage trim with high accuracy.

The key complication is if the second core landing fails: Two cores could be lost instead of one. So, to me, the robot indicates SpaceX's very high confidence in nailing every single landing, no matter how crowded the barge may be with previously landed cores.

Even if the cores aren't from the same Falcon Heavy mission! What if both SpaceX pads have flights on the same or sequential days? It would make huge sense to keep the barge out there either until it is full, or there is a break in the launch schedule.

Remember, SpaceX production plans allow for at least two launches per week. And that number EXCLUDES reflights, which could increase the launch rate by at least 50%. If we assume most/all are at Canaveral, then three launches per week with most cores being recovered is way more than a single barge can handle, unless that single barge can handle multiple landings before returning to port.

Now, let's look again at the case of handling a pair of Falcon Heavy cores. I think this scenario is less likely due to the time needed to permit a core to cool and vent prior to being moved. Nobody wants to be shuttling armed bombs across the deck! Even a minor mishap could take the barge out of commission for the next Falcon Heavy core, which is likely less than a minute behind the first.

The robot's most likely use seems to be to support multiple recoveries for multiple missions over a period of days, perhaps up to a week.

Zuckerberg thinks he's cyber-Jesus – and publishes a 6,000-word world-saving manifesto

BobC

Thanks!

Never before has an online publication taken such a bullet for me, rescuing me from the meanderings of a tech-gazillionaire who clearly took no liberal arts in college.

Love you all. Pints all around if I ever make it to Blighty.

LightSquared blasts GPS naysayers in FCC letter

BobC

There's always GLONASS and Galileo...

'Nuf said.

eBooks: What to read on which reader

BobC

You only scratched the surface!

My motivation for getting an eBook reader was to let me spend less time reading at my computer. I do a ton of reading...

I made my choice based primarily on wanting a "minimalist" device (no WiFi or cell networking) that would be rugged, long-lasting, easily fit in a cargo pants pocket, and would accept an SD card for additional storage. I especially like the current crop of 5" eInk-based readers, since they have the same 800x600 resolution as many larger readers, providing sharper text in a smaller form factor.

I'm using the Astak EZReader PRO, which understands just about every known non-DRM eBook format, in addition to Adobe Editions. My 80 year old mother purchased the same model!

The Astak EZReader PRO is a rebranded Jinke Hanlin V5, which is also sold by several others, such as Endless Horizon's BeBook Mini, and the IBook V5. Other eBook readers include: Bookeen's Cybook and ECTACO's jetBook. And the list goes on...

There are also many FREE sources of eBooks, and Project Gutenberg is only the start: Google for "free eBooks". Beyond that list, many authors and publishers provide free eBooks for their titles that are no longer in print. Crazy publishers, such as Baen (one of the largest publishers of science fiction), make nearly their entire catalog freely available as eBooks.

More authors are placing their books under a Creative Commons license and making them available in eBook form from their web sites. Find them by searching for "Creative Commons", or simply start here: http://wiki.creativecommons.org/Books. Short works such as scientific papers and whitepapers are also readily available from the authors and/or journal publishers (including Arxiv, PLoS and others).

Finally, you can always make your own eBooks! For example, the Calibre program, among its many other features, has the ability to periodically download any site's RSS feed, convert it to your preferred eBook format, then automatically transfer the latest news to your eBook reader whenever it is plugged in.

Yes, you can even get The Register on your eBook reader!
