TAIOMA - that could actually work! Maybe an artsy app of some sort? Or the name of my boat ... :D
248 posts • joined 27 Jan 2009
This would be a good time to regale the audience with My Most Embarrassing Moment (at least in public.)
Long ago I worked in a mainframe software group in a Fortune 500 company. I was tasked with analyzing the customer utilization of the available software distribution formats - various tape formats, mostly.
I was giving my slide presentation in front of two corporate VPs and thirty other managers, and on one slide as I described and showed the data the audience suddenly laughed. I had no idea why, and continued on.
Only after the presentation a friend showed me what was so funny. My slide showed that "15% of users had floppy diks"!
Actually that's been proposed. Arguments as to feasibility and utility are unresolved.
Essentially, the idea is that the first stage of a typical launcher expends 1/2 its fuel getting to Mach 1, IIRC 70%-80% getting to Max Q. This of course varies.
So a Maglev that goes up a mountain could replace most of the first stage, taking the vehicle to 18000 feet or higher and Mach 2 or Mach 3. This could provide most of the first stage lift, at the cost of a bit of electricity.
There are complications. The second stage would have to be a bit bigger. The handoff at the end of the Maglev could be complicated. The Maglev is very long. If there is a carrier, you need a way to decelerate it. Most notably, you need a mountain, it needs to be just west of an ocean, and the launch direction is fixed - you can only launch in one direction.
Equatorial launches would be the most useful. Candidates include the Andes of Ecuador - but most of South America is downrange; and Mount Kilimanjaro - but it is "holy" and not quite on the ocean. No doubt there are others.
The Arab world had pretty good science and engineering 1000 years ago. The words Algebra and Algorithm came from there. But in about 1200 the encouragement of studying the Holy Books (all three Abrahamic religions) was apparently stopped, and replaced by simple unquestioning memorization of the Koran and rejection of the other books. Little scientific progress occurred in that region after that as far as I know (though I don't know much).
Today, there is probably a bit of snobbism about having a British staff. But also I wonder if the schools and culture simply don't support the way of thinking required to do modern engineering. This needs to be taught and lived at an early age to build the mental habits.
An old friend did some engineering for the King of Jordan, using entirely foreign staff except for laborers. The king told him that his own people were completely unable to operate or maintain the equipment. I think that's an overstatement, as several Arab countries have pretty good high tech air forces, which do not run themselves.
Capitalism will always be continually changing - just like democracies and unlike centrally planned systems. Every system will have times and areas of 'badness' - that's my technical term :). But only dynamic 'edge of chaos' systems - democracy, free enterprise, like natural ecosystems - are self-adapting.
My case in point is that right now under our noses an entirely new capital ecosystem is arising, which is based on the advent of strong community-based infrastructures in support of startups and entrepreneurship. This wave of new companies is replacing the loss within large corporations of internal R&D divisions. It is cheaper for the big corporations to buy out these startups than to develop their own, and they get a much broader, more creative result. And the startups are often explicitly designed to be bought. This will evolve rapidly as the big corporations become more like utilities.
There are two key problems today. 1) In pursuit of the overly-emphasized and often illusory or mal-applied concept of the efficiency of scale, the laws and regulations have made it too easy to get big by acquisition of competitors. Rule of thumb - if any company controls over about 20% of a market, the market is a frozen oligopoly with no effective competition. A truly free market will have big companies falling apart to compost at about the same rate as mergers. 2) Closely related - government's enthusiasm for intrusive regulation dramatically encourages the demise of small organizations. For example, Warren's Consumer Financial Protection Agency has driven nearly every small and medium bank out of existence. Only the megabanks have the resources to comply with the mountain of compliance data required.
Note, one in about 25 top level executives in _both_ government and commerce are identifiable as psychopaths. This naturally includes some of our Congress critters, as well as Wall Street. But most people at the top level are just trying to do a good job, in both business and government.
In fairness, by all accounts and from my own experience some time ago, Oracle treats their customers the same way. And I was once told that the original founders were also screwed by the team that invested & then took over the company.
OTOH Ellison / Oracle does cool boats, so there's that! :D
Medical and legal professions are almost perfect examples of how self-regulation can be used to prevent competition and reduce consumer power and information availability. Cases in point: the AMA was originally created by and the AMA President's salary paid by the drug industry. For decades and even today alternative therapies that don't involve prescription drugs are actively prevented from being used by doctors or nonmedical personnel. In the 1930s and 1940s this practice extended to active legal efforts to jail advocates of competing technologies. And the Blue Cross 'non-profit' system was created by the hospitals to guarantee a steady flow of customers. It was structured from the beginning to prevent consumers from knowing, much less influencing, the costs of medical treatment.
and from systems science work I did a while back, related to neural networks and other decision systems like ecosystems, if any one entity has more than about 20% of a market, there is an effective oligopoly and the competitive system breaks down. This is closely related to the application of the inverse power law distributions that show up all over living systems.
For this purpose it's not really necessary to have a rectangular view.
There's actually a pretty good alternative, which could be built although it would require an alternative fab facility, which is not going to happen soon.
Back in the mid-1980s a company whose name I forget, in the Denver area, was working on an interesting scanning and display architecture that merged the vectorizing/recognition task and the image capture and compression task. It was based on a fractal tiling based on hexagonal cells. At the display resolutions available at that time it was not a very good fit with the common rectangular viewports - vertical lines were represented by slightly wiggly lines. But today that's no longer a problem.
The cool thing about it (without going in to detail) was that the breakdown of the image into hexagon-based fractal trees accomplished a substantial compression of the data and _simultaneously_ a first order vectorization of the image. It was also demonstrated that a similar image reconstruction algorithm was as fast or faster than a pure rectangular bit map.
As a bonus of sorts, it mapped well to many other non-rectangular display - or scan - viewports, such as facial simulations - essentially it was better at every 'natural' object than rectangular viewports.
While it may not seem to be important yet, if/when habitats, labs, and colonies are established in space, the long data transfer times based on lightspeed will need to be accommodated. In that environment transfer lags of minutes, hours, or even days may become common. Future proofing NTP will be important.
Of course it is _possible_ that a future solar-system-wide time reference, possibly based on the proposed absolute positioning system using galactic markers, and known orbital coordinates of solar system bodies, might provide a useful backup timestamp, down to some basic time resolution.
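To put the lag problem in concrete numbers, here is a quick back-of-envelope calculation of the one-way light-time to Mars. The distances are approximate orbital figures I've assumed for illustration, not precise ephemeris values:

```python
# Rough one-way light-time between Earth and Mars, to illustrate the
# delay range a solar-system-wide time protocol would have to absorb.
C_KM_S = 299_792.458   # speed of light, km/s
AU_KM = 149_597_870.7  # one astronomical unit, km

def one_way_delay_minutes(distance_km: float) -> float:
    """One-way signal delay in minutes over a given distance."""
    return distance_km / C_KM_S / 60.0

# Mars ranges from roughly 0.52 AU (opposition) to 2.52 AU (conjunction).
closest = one_way_delay_minutes(0.52 * AU_KM)
farthest = one_way_delay_minutes(2.52 * AU_KM)
print(f"Mars one-way delay: {closest:.1f} to {farthest:.1f} minutes")
```

So even the nearest colony candidate sees round-trip delays of roughly 9 to 42 minutes - far outside anything NTP's clock-sync assumptions were designed for.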
I once read an interview with the then-manager of Excel development. He said they basically didn’t bother fixing bugs - “nobody will stop using Excel because of a bug, and they’ve already bought it.” But if even one person asked for a feature they figured there were hundreds of others that also wanted it. So all of their development effort was adding features, almost none fixing the _many_ (as we all know) bugs.
It would be interesting if Ecuador were to engage a helicopter and a ship (do they have a navy?) - fly Assange off the roof of the building to a waiting ship offshore. Then the Brits would be faced with the prospect of a military confrontation to prevent the ship from leaving territorial waters (actually if they're 12 miles out, even that prospect goes away once he lands on the ship).
This could certainly be done. Even if the building is not set up for helo landings, they could use the rescue basket method. Of course there are certain issues with the UK air force as well. But would UK actually shoot down an Ecuadorian helo, just to catch this meatball?
As a bonus, the movie rights to the rescue would probably fund Wikileaks (or Assange's new Ecuador hideaway mansion) for decades.
From my days of teaching Software Quality Assurance, over 70% of bugs in shipping production code were built into the design at the beginning. IIRC the intent of methods like Extreme Programming was to help catch many of these design flaws by including representatives of the “customer” in the design team and using iterative design.
There is no reason to expect that the hugely complex chip design process is very different, even though they must of necessity be much more rigorous in their design process and re-use existing modules extensively. This latter allows each module to be debugged independently over time. But the interactions between modules that are a critical performance factor in modern chip designs must be extremely difficult to understand, much less account for in the higher level design process. And the chip design process today looks a lot more like software than hardware. Designers must depend on their CAD system (another beast of high complexity with its own bugs!) to correctly manage the low level interactions.
For regular software, the statistics show that if you are using reasonable design methodologies, in shipped production software there is roughly one bug in every 200 lines of code, regardless of the language. (The difference between low level and high level languages was strictly in the impact of a given bug, not the probability.) But about 10-15 years ago MS mentioned they ran about one in 70, I suspect due to their practice of hiring young hotshot SW people who had not learned defensive programming. And again, most of the remaining bugs in shipping code came from the design.
Perhaps most scary: less than 50% of the remaining bugs were likely to be discovered in black box testing.
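Those densities are easy to turn into a back-of-envelope estimate. The codebase size below is hypothetical; the per-LOC rates are the figures recalled above, not authoritative data:

```python
# Back-of-envelope latent-bug estimate from the densities quoted above:
# ~1 bug per 200 LOC with disciplined design, ~1 per 70 without.
def expected_bugs(loc: int, bugs_per_loc: float) -> float:
    """Expected latent bugs for a codebase of the given size."""
    return loc * bugs_per_loc

LOC = 100_000  # a mid-sized product (hypothetical)
disciplined = expected_bugs(LOC, 1 / 200)  # with reasonable methodology
hotshot = expected_bugs(LOC, 1 / 70)       # the looser rate

# If under 50% of remaining bugs surface in black box testing,
# more than half of these ship undiscovered.
print(f"{disciplined:.0f} vs {hotshot:.0f} latent bugs")
```

At 100k lines that is roughly 500 latent bugs versus 1400+, before testing finds any of them - which is why catching flaws at design time pays off so heavily.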
First prize is a week in Seattle. Second prize is two weeks in Seattle ... in February!
Sorry Seattleites - I’m a refugee from the cold, dark, damp, rainy, cloudy, grey, depressing Northwest winters. There’s a reason why coffee is so popular there. The weather reports should include a “damp chill index” just like the wind chill. 40 degrees in drizzle & mist feels like -10 degrees in sunny dry.
Seattle is famous for that special type of rain called What Rain, a drizzly mist, as in when a miserable visitor asks, “How can you stand out here in this rain?” To which the Seattleite replies, “What rain???”
Seattle has more definitions for types of rain than any other place:
- sunny (rare to unheard of except for two days in August)
- sog (where you think it’s about to turn sunny but it’s not)
- drain, or “light rain”
- hard rain (rare)
- downpour (very rare)
Funny you should mention SAP - a global company I used to work for budgeted $200 million and one year to move their US operations to SAP, and planned to move the rest of their operations in Phase 2. Phase 1 took three years and $700 million. They cancelled Phase 2. SAP stock dropped - IIRC 20% - immediately.
The point being, if they want to run SAP they _really_ don’t understand their costs. There are even open source competitors for SAP that are more amenable to adjustment to their way of doing business. The big cost of the SAP catastrophe at my company was due to the requirement to completely revise every aspect of their existing business model to fit the SAP way of doing things. But pointy-haired bosses, especially in government, are usually clueless about such things, and the communications difficulties between IT professionals and MBAs or Public Administration degree holders magnify the problem.
Imagine if the city fathers actually contributed money and/or software engineer hours to fixing every issue they have with the open source products they are using! That could make a huge difference in those projects, provide employment for local people instead of sending money to the US, and give them exactly the software they need, at 1/10 the cost!
Except for certain TLDs such as .us, every registrar I know of offers privacy at extra cost. This works by providing a special contact code that can be used by law enforcement with proper court papers to identify the real person. So worst case, it seems to me that this system could be made free, with or without default. Why is this insufficient?
I probably still have the IBM 1130 assembler code that sent signals to the big old Winchester “washing machine” disk drives at different frequencies. The program took input in the form “AABBC+” etc. to form musical output. Output was generated by setting a transistor radio on the console above where the channel wires were routed. The signals in the wires were powerful enough to generate sounds on an A.M. radio nearby.
Needless to say this was probably not good for the Winchester drive!
In today's world of old logos and brands being bought and revivified, it could happen. The original Pan Am was a much more entrepreneurial company than I ever realized. Pan Am was very much a startup when it talked Boeing into building the original 314 Clippers, promising to buy them if built. (They actually only bought six of the original and six more of the 314A. https://en.wikipedia.org/wiki/Boeing_314_Clipper)
I don't know who owns the trademark today, but I could see Jeff Bezos buying the brand for a hypothetical service using Blue Origin launch vehicles for its competitor to Musk's SpaceX suborbital flight service, or even an orbital shuttle service to space stations and such.
We tried growing some pumpkins from some Giant Pumpkin seeds. We didn't work hard at it, no special treatment other than removing most of the excess fruits early on. We got a good sized pumpkin - 50 lbs. or so. But IMHO it was as close to inedible as pumpkin can get. So I wouldn't recommend using for pie without an excessive amount of spices and sugar!
The classic problem for pre-internet advertisers was, "I know I'm wasting my money on 1/2 of the advertising I buy. I just don't know which half." The Internet fixed that problem to a great extent, and made much of the Internet more akin to the entry of a department store, where just looking at men's ties quickly brought a tie salesperson over to "help". Unfortunately, while even that was too far for most of us, the vendors wanted, and took, even more of our privacy.
When I leave a store (whether I bought something or not) I don't want the salesperson to follow me down the street and continue harassing me. And I don't want them to sell the information that I touched the running shoes on the way out of the store to the shoe store down the mall.
From my understanding and nonzero experience, every machine learning solution is domain/application specific so far. That is the very limited state of the art. Yes, you could likely build a system that could gradually improve itself. But that is all it would be good for.
Long ago I argued that a good compiler should gradually learn to be a better compiler. AFAIK that has not happened yet. But all of these possibilities do lie before us.
I've told several people that the next generation of computer "programmers" will be more like teachers, helping baby AI to learn how to solve the problem(s). This is radically different from classic imperative or functional programming but still requires the special ability to understand machine processing from the ground up.
The original hypertext 'xanadu' system proposed by Ted Nelson included several features that would have made a lot of sense - transclusion and micropayments being two of the most useful.
I am not going to subscribe to 50 different publishers and pay each of them an annual or monthly subscription rate. This would cost $1000s per year. But I'm willing to pay the same amount they presently get through advertising via an anonymized service that worked with all or most publishers.
There are two long-standing models of this - YouTube used to be one, more or less, and maybe is going that way again with their premium service - but are they still tracking? And the ASCAP and BMI music services have worked with radio stations and others to automatically pay musicians and composers standard rates for songs played. This is not a complicated issue.
If I could subscribe to an general inclusive subscription service, for perhaps $10/month up to maybe $50/month, bumping up a dollar at a time depending on how much reading I want to do, that simply paid publishers for articles that I read (_not_ just clicked on by accident), and eliminated all the tracking by all the publishers that joined the service and just gave ad-free content, I would totally subscribe to that.
I'd like to know what the average revenue publishers receive on one page view, based on clickthroughs and whatever else. It can't be that much.
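The payout mechanics of such a service are simple enough to sketch. A minimal sketch, assuming a flat monthly fee split across publishers in proportion to articles actually read - the publisher names and fee here are entirely hypothetical:

```python
# Minimal sketch of the read-based payout idea: split one subscriber's
# flat monthly fee across publishers by their share of articles read.
from collections import Counter

def monthly_payouts(fee_dollars: float, reads: list) -> dict:
    """Divide the fee among publishers in proportion to reads."""
    counts = Counter(reads)
    total = sum(counts.values())
    return {pub: round(fee_dollars * n / total, 2)
            for pub, n in counts.items()}

# One month of (hypothetical) reading: publisher per article read
reads = ["reg", "reg", "reg", "nyt", "wired"]
print(monthly_payouts(10.0, reads))
```

The ASCAP/BMI comparison holds: all the service needs is a play log and a rate card, with the subscriber's identity anonymized before the tallies reach the publishers.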
Interesting. Mercurial handles the rename nicely ("hg rename"), but it doesn't go in and edit all the places where it's referred to. But (using PHP) the autoinclude system handles that if you use a filename that matches the pattern for the class name. The autoinclude system works out the filename from the class name. So rename the class in the source file and wherever it's invoked, rename the file, you're done. I don't use git (or C, C++) so IDK other systems.
I think you are thinking of coilguns, which use magnetic fields to accelerate - like maglev trains and the Hyperloop. A railgun uses huge current going across the sabot that carries the projectile. Even just using the magnets to keep the pieces separated, I suspect that the current density is so high that it would quench any known superconductor.
The going-up-the-mountain launcher was a coilgun IIRC, not a railgun. Big difference. A coilgun is technically similar to a maglev train, using sequential magnetic fields to accelerate a vehicle. A railgun pushes huge doses of current through the projectile (actually a sabot that carries the projectile). Railguns can accelerate much faster. A coilgun/maglev in an evacuated tube (see also Hyperloop), going about 45 degrees upslope to above 20,000 feet and about 100 km long could replace most or all of the first stage of a launcher.
The biggest issues, beyond the sheer building of the machine, are the sudden insertion into atmosphere (albeit less than 50%) when the thin plastic barrier at the top is breached at Mach something, and the survival of the vehicle in hypervelocity travel through the remains of the atmosphere. But it is probably doable, and if/when space launches become more than a daily occurrence, the economics might start to look pretty good.
Another issue - such a thing can only launch into one orbital plane, and it takes significant energy to change inclination.
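The "bit of electricity" claim is easy to sanity-check. The numbers below - vehicle mass, exit speed, drive efficiency, and power price - are all illustrative assumptions of mine, not a real launcher design:

```python
# Rough electricity cost of a maglev/coilgun first-stage boost.
# All inputs are illustrative assumptions, not real design figures.
MASS_KG = 500_000      # assumed fully fuelled vehicle mass, kg
V_EXIT = 680.0         # ~Mach 2 exit speed, m/s
ALT_M = 6_100          # ~20,000 ft track summit, m
G = 9.81               # gravitational acceleration, m/s^2
EFFICIENCY = 0.8       # assumed overall drive efficiency
PRICE_PER_KWH = 0.10   # assumed industrial power price, $/kWh

kinetic = 0.5 * MASS_KG * V_EXIT**2   # energy to reach exit speed
potential = MASS_KG * G * ALT_M       # energy to climb the mountain
kwh = (kinetic + potential) / EFFICIENCY / 3.6e6
print(f"~{kwh:,.0f} kWh, ~${kwh * PRICE_PER_KWH:,.0f} in electricity")
```

Under those assumptions the boost costs on the order of tens of thousands of kWh - a few thousand dollars of electricity, versus the first-stage propellant and hardware it displaces. That's why the economics could look good once launch rates are high enough to amortize the track.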
Back in my Systems Science / Machine Learning period, one useful definition was that AI was defined by techniques that had not been figured out yet. At one time "Expert Systems" were considered AI. Later what we now call Machine Learning was AI. And Neural Networks, Genetic Algorithms, Cellular Automata were all considered AI according to some. Once we've taken an AI concept and turned it into a methodology, it loses the mystique and becomes just another computer tool.
I can argue that AI is thus an infinitely regressing goal, defined by the very fact that there is some aspect that we can still identify as being "not quite" real intelligence. Maybe we will always be able to distinguish between an AI and an RI due to certain mannerisms, preferences, etc. - much like we distinguish between people from different locales or ethnicities by linguistic differences.
It's a real pity that more energy, publicity and money aren't being invested in truly revolutionary work like Reaction Engines.
They seem to be progressing reasonably well, having just received another chunk o' cash after successful testing of the SABRE engine concept. Something I read a couple of weeks ago gave me pause - apparently the timetable for the Skylon spaceplane is being pushed back, because certain military types are taking an interest in using SABRE for military purposes. This tells me that the military are convinced the thing actually works.
So that's what killed SSTs - that they might be economic if you give them away, but there is no business case for development and building. We'll see if Beardo's people can do things differently - I suspect they'll find that they are no more able to overcome the technical and certification issues than Airbus or Boeing.
This may be one area with a big difference in costs. The original SSTs were designed by hordes of engineers with slide rules, some primitive computer calculations, and paper drawings. (I read once that Boeing had actual full size "proof" drawings for the final design of the original 747 wings, with a rolling catwalk above that the engineers could ride on while they confirmed clearances and added last minute changes.)
Today a very few engineers with advanced CAD software can design, build, and even test the entire airframe in the computer including routing of cables and hoses, in a small fraction of the time. And the wing shaping, stress management and other mechanical elements can all be optimized on the computer and the fabrication tooling automatically designed to go with it. The skins can be shaped in almost arbitrary ways, that could not even be contemplated back then.
So I'm guessing that development costs will be one tenth of what the 1960s SST designs cost to get to manufacturing. And certification will be easier, as the computer data will be available for analysis as well.
Most people aren't aware of some of the latest technology, notably NanoRAM. NanoRAM is an almost ideal memory/logic technology, except for one teeny tiny detail. It's actually been used in several military satellites, but it's still expensive and difficult to make.
As I understand it, a NanoRAM 'switch' is a bent bit of carbon nanotube, which is connected to one side of a circuit, plus a 'landing zone' which is connected to the other side. The nanotube can be bent (magnetically? I forget) to either bend over and connect to the landing zone, or to straighten and disconnect. In either state it is completely static, needs nothing to maintain the state. The only time it is sensitive to radiation is in the nanosecond during which the switching is in progress. Its switching time is much faster than silicon, the density is much higher, and the switching power is much less, and the power required to maintain state is zero.
From what I've heard and read, the problem of making consistent, repeatable nanotubes has been the real issue, which has prevented this technology from becoming a common replacement for both dynamic and static RAM in computers. But its value in satellites may be unsurpassed.
I for one would like to know if making the nanotubes might be easier in microgravity. If so, then this might be a technology that both enables and depends on space development. Caveat: I only know what I've read in Wikipedia and online articles, and discussions with folks who know a little more than I do.
F-104 Starfighter - I recall that the only "airplane" with a worse glide ratio was the Space Shuttle. Was it 1:3? I recall that they used starfighters with wheels down as escorts on early landings of the shuttle. It was the only plane that could fly both fast enough and badly enough to stay with it.
HME will probably always require 100-1000 times as much CPU power and possibly data space as unencrypted computation. But it will be an essential tool for maintaining the internal privacy and security of "agent" systems traversing the internet. Without it, while data at rest may be encrypted it is still in plain text while in memory for processing. Since an agent has no way to predict or restrict what processors it is being run on - in fact not even whether it is on a real processor or a virtual one, those processors may be on compromised services that could be reading that memory and tracking the computation.
The only way that has been proposed to protect such agents from compromise is homomorphic encryption, which allows the entire data collection that represents the agent to be kept in encrypted form at all times, even when it is running its computation processes. (In fact I would expect a higher degree of encryption for the data at rest, and a less-secure simulacrum used for the processing phase. This may be a necessary compromise.)
IOW, if you have uploaded your mind and personality to the Net, that "evil" processor could be reading your mind and even erasing your memories and substituting new memories. But agents have many other practical purposes.
The two most important factors in preservation of the internal integrity - identity - of any system are privacy, and protection against undesired or unnoticed modification from external forces. Only HME has this capability.
As a side effect, this requirement will drive another wave to higher performance and capacity. An individual encrypted agent might require from one to 100 petabytes of storage and equivalent increases in computing and network traffic, within this century.
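The core idea - a host computing on data it cannot read - can be shown with a classroom toy. Textbook (unpadded) RSA happens to be multiplicatively homomorphic; this is nothing like full HME and the tiny key is wildly insecure, but it demonstrates the principle:

```python
# Toy demo of the homomorphic principle: with textbook (unpadded) RSA,
# multiplying two ciphertexts yields the ciphertext of the product,
# so an untrusted host can compute without seeing the plaintexts.
# Classroom-sized insecure key - illustration only, not real HME.
P, Q = 61, 53
N = P * Q                           # public modulus (3233)
E = 17                              # public exponent
D = pow(E, -1, (P - 1) * (Q - 1))   # private exponent

def enc(m: int) -> int:
    return pow(m, E, N)

def dec(c: int) -> int:
    return pow(c, D, N)

a, b = 7, 6
# The "evil host" multiplies ciphertexts without ever seeing a or b:
c_product = (enc(a) * enc(b)) % N
print(dec(c_product))  # the agent's owner decrypts the result: 42
```

Fully homomorphic schemes extend this to arbitrary computation (additions and multiplications together), which is exactly where the 100-1000x overhead comes from.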
I think I said something to this effect before on the original announcement: "Moz-people, have you lost your fricken minds!" This is a pure example of the triumph of technically ignorant marketing nerds over the boring reality of how things work. Yes, it does "speak to the essence of what Mozilla does", much in the same way that parking your bicycle in the middle of the freeway speaks to its "essential spirit of transportation", with (one hopes) perhaps not quite equally bad results.
Please, Mozilla, please. Undo this very bad idea. It has as much style and attractiveness as lipstick on a pig, and (as we see) completely confuses the parsing systems on applications and services spread out all over the Internet. If you like square wheels on a Porsche, you'll love this new trademark. And, since the Moz-people involved seem to be the mechanically-uninclined type for which this warning is relevant, please also stop using pliers on your wheel nuts and trying to unscrew Phillips screws with a straight screwdriver.
I'm very sorry to hear about Lester's sudden demise. We exchanged emails and some other things in late 2015. I had subscribed to the LOHAN Kickstarter, and I had suggested he write about the Integrated Space Plan (http://thespaceplan.com), which he did in time to (no doubt) help with our own Kickstarter for the ISP poster. I sent him a copy of the poster when it came out, and I received by coincidence the LOHAN mug and the ISP mug in the same postal mail!
We are beginning to put together plans for a new edition of the poster, and I would gladly have sent him a copy when it comes out. I had hoped someday to meet him, but alas that will never be. But I'm sure he's now flown higher than LOHAN ever could!
Perhaps the solution is to build for the market. Go out and find what features and price point would be competitive for sale to other countries, get a few letters of intent, or better actual pre-orders. Then add the UK order to the list. Incorporate into the design some flexibility and/or feature models ("white sidewalls, leather seats, sea-to-air missiles, ...").
Require the design to be buildable in pieces at multiple shipyards, accurately (I've seen videos of the Koreans building large container vessels where the pieces fit together with tolerances under a centimeter, it's doable.)
Use fixed price contracts and make sure the design is complete enough to minimize change orders. If possible contract 1/2 the order to each of two vendors, and require all modules to be interchangeable. This is common practice in the auto industry, admittedly at higher volumes. But how much is the overall hull shape going to change? Building to the market will go a long way to preventing Lockheed-ization and gilded designs.
> Same in New York City - witness the ship under the WTC. But I believe they don't have a problem with reaching bedrock.
NYC is an interesting and useful comparison. I recently learned that, looking at a map of Manhattan, all of the high rise buildings are in two fairly small areas of the island. Buildings on the rest of the island are limited to five or 10 stories. This is because only those two areas have solid bedrock. The city does not allow super high rises on those other areas, with or without piles.
These tall skinny buildings are a special problem. A cathedral is tall, but not that tall in proportion to its footprint - the height is maybe three or four times the width. So building on a lesser foundation may cause settling, cracking, etc. but is unlikely to result in the entire structure tipping over. But a 60 story building (maybe 700 feet tall) on a 60 or 70 foot wide lot has a lot of leverage, so a tiny bit of tilt will quickly start to escalate as the weight gets concentrated on one side.
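The leverage argument is easy to quantify: treat the building as a uniform block and find the tilt at which its centre of mass passes the edge of the footprint (the classic tipping threshold). The dimensions below are the rough proportions from the text, not any specific building's figures:

```python
# Tilt angle at which a uniform block's centre of mass moves past
# the edge of its footprint - the point where tipping self-escalates.
import math

def tipping_tilt_deg(height_ft: float, width_ft: float) -> float:
    """Tilt (degrees) putting the CG over the footprint edge."""
    return math.degrees(math.atan((width_ft / 2) / (height_ft / 2)))

cathedral = tipping_tilt_deg(height_ft=200, width_ft=60)  # squat profile
tower = tipping_tilt_deg(height_ft=700, width_ft=65)      # tall and skinny
print(f"cathedral: {cathedral:.1f} deg, tall-skinny: {tower:.1f} deg")
```

Under these assumed proportions the cathedral can lean past 16 degrees before its weight starts working against it, while the tall skinny tower hits the threshold near 5 degrees - and long before that, the concentrated load on the low side is accelerating the settlement that caused the tilt.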
In a city that is known for its earthquake hazard, and that is built on landfill, failing to require the piles for this building to extend all the way to bedrock is an engineering failure on the part of the city's building department as well as the developer. Landfill, especially near a large body of water that can provide lubrication, has a tendency to liquify under earthquake stress. There are videos online of the dirt flowing up through sidewalk cracks during earthquakes.
Both the developers and the city are in deep doo doo. And should be.