Re: California, well known for its tectonic stability
Besides, doesn't Japan's Shinkansen have to be built to negotiate earthquakes since it's in the Ring of Fire?
Two reasons. First, Steam has its own content distribution system, separate and apart from any Linux package manager (and one that well predates Steam on Linux, for that matter). Second, game updates can be very piecemeal, particularly when the update concerns game content rather than program code, so Valve recently updated its content system to reflect this. It reduces update package sizes most of the time: a kind consideration for people with limited bandwidth allotments.
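The chunk-based delta idea can be sketched roughly like this (a toy illustration only, not Valve's actual algorithm; the 1 MiB chunk size and SHA-256 hashing are my own assumptions):

```python
import hashlib

CHUNK_SIZE = 1 << 20  # 1 MiB chunks; granularity chosen for illustration

def chunk_hashes(data: bytes) -> list[str]:
    """Split a file into fixed-size chunks and hash each one."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]

def chunks_to_download(local: bytes, remote: bytes) -> list[int]:
    """Indices of remote chunks that differ locally and need downloading."""
    local_h, remote_h = chunk_hashes(local), chunk_hashes(remote)
    return [
        i for i, h in enumerate(remote_h)
        if i >= len(local_h) or local_h[i] != h
    ]

# A one-byte tweak inside a 3 MiB file only dirties one chunk.
old = bytes(3 * CHUNK_SIZE)
new = bytearray(old)
new[CHUNK_SIZE + 5] = 0xFF
print(chunks_to_download(old, bytes(new)))  # -> [1]
```

Only the chunks whose hashes changed need to be fetched, which is why a small content tweak no longer forces re-downloading a multi-gigabyte package.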
What goes wrong is that then the smaller communities can't get wired up since this is usually the only way they CAN get wired up, due to the poor return on investment.
IOW, better crappy Internet than NO Internet.
You're not bargain hunting enough if $160 is the lowest you're getting for a used 360 with a 250GB hard drive. A little bargain hunting showed me price points closer to $100. Plus you've focused on two games with relatively simple graphics. If you were to give your systems something more demanding like, say, FF13 or GTA5, I think the differences in architecture would probably become more pronounced. Meanwhile, my point still stands with regard to the current generation. And just to be sure, I also checked Steam Machines, none of which can match the price/spec combination of either the PS4 or XB1.
"When I bought my current 760GTX 18odd months ago it was over £300, now they retail for £200 (for the 4GB version). Since the AC I originally trolled replied to mentioned "the next 8 years" of the PS4 pwning the XBox One because of a better GPU, I'm pretty sure that less than 8 years will be required to be able to build a gaming PC using a 980GTX (or equivalent) for $400; $100 for the GPU, $200 for the CPU, mobo, RAM and case, and $100 for the Windows license."
But by then the PS4 will ALSO cost much less. PS3s started at $500 and are now around $200 depending on the model. Similarly, a PS4 will always undercut the PC, meaning my statement still stands. A $400 budget today would reduce to $200 in four years' time, and even today a $200 budget is tricky just to get a decent mobo/CPU/RAM combo, let alone the video card (and I've checked).
Trouble is, just about any place you could put a TOR exit node has a snoopy government ready to demand access. If it isn't the US or UK, it's Russia, China, or whoever else is in charge.
The exit nodes would probably be legally covered under the auspices of a Sting Operation. Much as undercover cops are allowed to handle cases of drugs and even child porn so as to facilitate Sting Operations.
"For the supersimian, OpenVPN is still free if memory serves and has a free Android & IOS app"
Any host not owned by you is likely to be backdoored by whatever government runs the country the server's hosted in. As for making your own, that can be tricky. I'd love to use the one built into my home router, but it only supports TAP mode, and TAP support on Android 4 and up is only possible through a convoluted method that, frankly, doesn't work yet with the router.
That's what I'm talking about. Verizon could easily do the same thing for any https request that goes through its network, allowing them to MITM the connection and still insert the supercookie, again at a point beyond your control.
Unless, of course, Verizon MITM's everything that goes through its network, meaning you're screwed no matter what you do. As I understand it, the injection occurs at their servers, which is why you can't remove it (since it occurs at an upstream point beyond your control). The only reason tunnelled connections aren't tagged is that Verizon's servers can't MITM them and recognize them for what they are.
JAMMING is against FCC rules. SHUNTING (which is what the Faraday cage does) is not.
The difference is that the former is an active method that involves flooding the airwaves with garbage. Since that has inevitable knock-on effects, doing that has been considered bad radio behavior since the tuned circuit was invented. And the FCC takes a firm stance that you're not allowed to interfere with anyone else's radio business without government sanction (and they usually reserve those for emergencies).
The latter is completely passive and, so long as it's only applied on a person's property, reflects a stance of the owner and doesn't usually affect anything outside the shunt. About the only exception I could see is if such a shunt stands between you and the transmitter.
Probably because the municipal government wouldn't agree to an exclusive agreement. That's part of the problem.
Those fat pipes incur continual costs that make it a crap shoot. The smaller the location, the less likely it can pan out. Those out-of-the-way locations that nonetheless have fat pipes usually experienced some lucky break. Olds, Alberta and Grant County, Washington both attracted data centers because their northern locations reduce cooling costs, a prospect that's less likely in, say, Tucson, Arizona.
It's a very basic question. Broadband is great and all...but who foots the bill? Not just for the initial infrastructure, which is significant, but also for the continual upkeep costs in a community without a lot of people to spread them around?
Olds is lucky. The rollout was supported by the Alberta government (a C$2.5m grant for starters) plus they didn't have to worry about the trunk access because of the Alberta Supernet, another project being developed independently by the province.
I can show a related story in Grant County, WA. Supposedly a municipal effort built by forward-thinking municipal authorities in a permissive state (Washington has laws allowing municipal authorities to build wholesale trunk lines). Still, it raises questions. For one, why here and not nearby Seattle with its millions of people...not to mention tech-heavy Redmond? For another, how did they get high speeds up the line at the trunk providers? Last I read, Grant County got lucky because being first and being in the cold North meant they attracted datacenters that were willing to pay top dollar for fat data pipes. So you wonder if a similar setup can still be profitable for a strictly home-based community.
In one of the BIGGEST countries in the world, you may note. Geography affects rollout costs, and the heart of the US isn't exactly teeming with people. I'm having trouble finding ANY country of a comparable size that has fared any better with universal rollout.
I thought part of the problem wasn't regulations but contracts imposed by ISPs simply for getting the service to these rural communities. Given the terrible RoI on rural hookups, many ISPs won't do it without exclusivity agreements (guaranteed RoI, IOW). How would any new regulation get around basic contract law?
"This solution just seems to systemically embed the generally ambivalent prevailing social attitude towards genuine privacy and security. People don't value what they have in meatspace and aren't bothered about ensuring it online."
IOW, you hit the meatbag problem: "How do you educate people who don't care yet can threaten you with their incompetence?"
That's probably because coal mines aren't exactly the cleanest or most neighborly of industries. Just ask Buffalo Creek, West Virginia.
I've looked into the algal oil experiments. According to estimates, the current limit of the technology is 1,000 gallons per acre per year. Your typical fighter jet, depending on the mission, can easily burn over 2,000 gallons of jet fuel per sortie. Which leaves me concerned about the long-term viability of this technology given how active the USAF and USN are with their jets.
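Putting those two figures together (a rough back-of-the-envelope using the estimates quoted above):

```python
# Rough back-of-the-envelope using the estimates quoted above.
yield_per_acre_year = 1_000   # gallons of algal fuel per acre per year
fuel_per_sortie = 2_000       # gallons of jet fuel burned in one sortie

# Land needed to sustain just ONE sortie per day, every day of the year:
acres_needed = fuel_per_sortie * 365 / yield_per_acre_year
print(acres_needed)  # -> 730.0 acres
```

Scale that by the hundreds of sorties a busy air wing flies and the land requirement balloons accordingly.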
Unless those alternatives are ALSO poor in EROEI since with EROEI you have to look at the ENTIRE production chain, including mining, extracting, manufacturing, maintenance, and any regulatory cleanups inherent with the associated processes.
Mainframe computing's mostly moved to cluster computing. Instead of a big, honking piece of customized hardware, you can throw a bunch of commodity or at least standardized units at a problem. Granted, sometimes even that doesn't give you the performance you need, but the solutions Google and such provide are, compared with the PC world, less revolutionary and more evolutionary.
"Soviet fleet in Straits of Hormuz?"
OK, how would they explain a Russian fleet patrolling the Atlantic? Fracking exports from the Western Hemisphere would be tough for the Russians to block without looking even more awkward.
That's the point. The more potential sources of oil there are, the less likely any one power can corner the market.
It still applies in broad strokes. If you can get oil from an assortment of disparate locations, then what happens in one part of the world isn't likely to affect things in the other locations. A geographical version of diversification, if you will.
Mr. Putin cannot turn off a non-Russian tap. Plus if shale reserves are as diverse as hoped, that could reduce the transport issue as well.
They are to an extent, but if more players enter the game, it becomes harder for politics to game the price.
Sure it was. IBM was to the PC industry around 1980 what OPEC is to oil now...or the DeBeers diamond cartel to gems. Thing is, all three faced or are facing disruption from suppliers they can't control.
But many of them have international agreements attached to them if not outright treaties. Those CAN'T be changed without international repercussions. For example, if England wants to access records in a country where the data MUST be encrypted in order to be exported, they're stuck.
"And all it takes to defeat this surveillance is for the terrorists to make their plans face to face, rather than via text message."
And recall that al-Qaeda was properly paranoid in that respect. They met indoors to avoid satellites and face to face to avoid eavesdropping. About the only way we got to bin Laden was by subverting the nigh-unbreakable inner circle. IOW, we just got lucky.
"I have studied the trial of Galileo but I do not see the connection."
Simple: Can you REALLY stop people testing conventional wisdom? Even when Galileo was shut up, his knowledge simply moved into Protestant territory where the Church had no sway.
"If you ask me, for AI to earn the 'I', it must be able to 'understand' and handle situations and objects of which it has no prior experience or specific rules. As humans, we do this by analysing parts that we recognise but haven't necessarily seen together and weigh up whether what we know about one object (e.g. the behaviour of a person) is more important that what we know about another object (e.g. the location). We make a 'judgement call'. Or, we try to understand a situation or object be analogy with another situation or object we are familiar with."
But like with the end of 2001, what happens when the AI, which would likely have less experience to draw from than an adult human, encounters something totally outside our realm of understanding? Indeed, what happens when WE encounter the same: something for which NOTHING in our experience and knowledge can prepare us.
Or on a similar note, paradoxical instructions. In our case, we have to take conflicting instructions on a case by case basis, determining (sometimes by intuition, something AIs would probably lack) which if any of the conflicting rules apply. Example: You're told to put stuff in the corner of a circular room (meaning no corners), and there's no one around to clarify. What do we expect an AI to do when it receives a paradoxical or conflicting instruction?
Which then raises an interesting question: given that customers need money to buy stuff, and without jobs they don't make the money they need to buy stuff, when you have AIs running everything, who's going to buy the stuff made by the machines these AIs run?
Question is, what if 90 days isn't enough for ASAP? Suppose the bug is intertwined such that fixing it is like untying a Gordian knot?
That would've been a deal-breaker for the carriers, especially when Android was just getting on its feet.
But what happens when you get hit with a "drop everything" Ultra Critical? How much time can you REALLY spare then?
You want to know how quickly they can REALLY turn out a patch? See them react to an Ultra Critical bug already in the wild.
"Charles - logical and virtual partitioning is fine, but a physical partition, or one that is fixed at system compile time (or SYSGENs as this horrible mechanism was called on IBM) is a very bad idea. Modern computing moved on from that. You mean Android really goes backwards and does that (aside from Dalvik making registers visible to programmers - a really bad, but common idea)."
Yes, they do, for security reasons. Mainly, /system, under normal operation, is mounted read-only. This is one of the chief reasons hacks need root access: to be able to remount /system read/write so as to make the necessary changes. As for moving on, we haven't. A physical partition is still limited to the size of the media. Now, logical partitions can work around that, but it normally takes a level of sophistication not present (or needed) in your average mobile device.
"Partitioning is static and is known to waste resources, particularly memory. Customers aren't happy when their disk space runs out and there is heaps of free wasted space because the OS can't make use of it."
Then you should've heard some of the howls of protest when the S4 came out. Take mine: a baseline 16GB. About 2.5GB of it is partitioned as /system, some of it as /data, /cache, etc., leaving us about 9GB free. I get around it with an SDXC card, but Apple users don't have that luxury. Basically, partitioning, especially on mobile devices, is a necessary evil. Trying to monkey with the partitions is considered high-risk since the architectures involved are rather sensitive to where things are stored. So, if you have to partition to fixed sizes, why not segregate the OS onto another chip with additional safeguards and so on? Android and the system at large shouldn't be able to tell the difference.
"Your PS is confusing two things. Programs are loaded into the same RAM memory as data. This is the von Neumann model. However, a program should not be treated as data itself which can be overwritten (except in a virtual environment like LISP and descendants).
If a program can overwrite program code, that results in all sorts of security breaches."
It's also sometimes the only way to achieve certain speed optimizations. Treating programs as data is the basic remit of the compiler, and a JIT compiler has to be able to compile code and then run it in the same context. Sure, there's the risk of security breaches, but that's the tradeoff of using self-modifying code.
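As a trivial illustration of program-as-data (in Python, far above the machine-code level a real JIT operates at, but the principle is the same):

```python
# Source text is just data until we compile it into a code object at
# runtime and execute it in the current context, which is conceptually
# what a JIT compiler does with machine code.
source = "def square(x):\n    return x * x\n"

namespace = {}
code_obj = compile(source, "<generated>", "exec")  # program treated as data
exec(code_obj, namespace)                          # now it's executable

print(namespace["square"](7))  # -> 49
```

In a strict Harvard architecture, where code and data live in physically separate memories, the generated code would have no way to cross over into the executable store.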
I'm simply saying that the von Neumann argument isn't part of the discussion. We're discussing partitioning and the reservation of OS space such that it's not included as part of the advertised space, not the segregation of code and data.
"Partitioning is a bad thing (I believe IBM still sets up their systems this way - separate partitions for programs in main memory - when I asked someone last year. I'd hope he is wrong).
So that idea of separate partition for OS is not practical or flexible. We like to keep flexibility in computing, even if it means overhead."
You may be interested to know Android's /system directory (where all the critical OS stuff normally is including the system apps) is normally housed in a separate logical partition from the /data directory (which is where all the user apps and data go), and this is in turn kept in a separate partition from the rest of the internal memory that's normally left to the user.
If they can be kept on a separate logical partition, they can be kept on a separate physical partition just as easily.
PS. The von Neumann vs. Harvard argument was about the idea of separating code and data. von Neumann won because of the realization that code itself can be considered data (self-modifying code and JIT compilers spring to mind; neither is possible in a Harvard architecture).
"Software that takes up non-volatile storage space is the norm. It is just too complex to measure it another way."
Well, whatever happened to TWO nonvolatile stores: one for the OS that ISN'T counted, and one for the user space which IS counted? Over-provision the OS space by say 50% and it should have plenty of space to handle enough updates to survive its working life. And given how tiny Micro SD cards are, I don't buy the lack of space argument, which is the only practical one there is.
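A quick sketch of that over-provisioning arithmetic (the 2.5GB OS image is just an example figure, not any vendor's actual allocation):

```python
# Example over-provisioning calculation; figures are illustrative only.
os_image_gb = 2.5          # space the OS actually uses today
headroom = 0.5             # 50% extra reserved for future updates

os_store_gb = os_image_gb * (1 + headroom)
print(os_store_gb)         # -> 3.75 GB on a separate, unadvertised store
```

The user-facing store could then be advertised at its full capacity, since the OS never touches it.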
If there are known biological effects attributable to Tetra specifically, then perhaps you can cite us the peer-reviewed clinical studies that can prove these effects.
As is "Wilco" which is radio shorthand for "Will Comply" and basically means "I will comply with the instructions/orders just given."
These days, I use external USB battery packs. They're not constrained by the size of the phone (meaning you can now get bricks in the neighborhood of 20Ah) and, best of all, they don't require the phone to be powered off to use.
I'm more curious about the spectrum auctions in the 1.7GHz range. AFAICT, the nature of these bands seems to preclude opening up LTE Band III, the most internationally consistent band, because the first auction is just below the range while the second is within it, and nothing is mentioned of the 1.8GHz band needed for the other half of the FDD pair. Does that seem right to you?
Grant County, WA is a testbed community. It only has high-speed broadband because iFiber Communications chose to use that area to deploy an experimental fiber network. Most likely, they're trying something that may not pan out in a denser area; otherwise, they could've easily gone just a little bit west and deployed in the Seattle metropolitan area which happens to include tech-oriented Redmond.
Maybe not with SD in its current physical dimensions, but perhaps some successor specification, thicker and perhaps a little larger to accommodate 3D Flash and slightly larger chips.
"It's a bad idea to use one of these cards by itself for storage."
Most savvy users realize this. The SD card is meant as a transport medium, not a storage medium, though one exception is phones and tablets, where Micro SD becomes a storage medium for noncritical or backup data.
In any event, the idea is the SD card is only used as a temporary hold for a recording/shooting session. In my case, when I get back to "base," one of the first things I do is take out the card and insert it into my laptop's SD slot, whereupon I offload the contents to a more-permanent storage device. I organize simply by dumping each session into a folder with the date on it. Once it's done and verified, I can slap the card back in the camera, wipe it, and be ready for the next session. And just in case one wears out, I keep a second one as a fallback. By the time it wears out, I'll have already bought a replacement.
I say with 3D flash on the cards, there's a likelihood of SDXC hitting its 2TB capacity limit in a few years. At which point SD will need to figure out which letter to use for the next capacity specification. And let's hope this time they settle on a less-encumbered filesystem (though for lack of ubiquitous alternatives, my money for now's on NTFS--any other format and Windows will need a filesystem driver).
Based on the stats and the article, it's somewhere in between "more performance for the same power consumption" and "the same performance for less power consumption". It has somewhat better performance than before while also using less power than its predecessor.
No, they got it right. They're saying the device reads 450MB of sequential data for every watt of power it consumes, and writes 250MB per watt. Meaning it's probably able to selectively power storage chips up and down as needed. A random operation would require more chips to be online at a time, reducing the power efficiency somewhat, but perhaps you get the picture now.
It's not like Samsung isn't prepping something for phone applications. I believe that's where their 3D Flash efforts will end up. It may not be uber-fast, but it will be compact.
I wish to elaborate on some of the details overlooked in the writeup:
The antibiotic works in a novel way by bonding not to proteins but to lipids: namely, two lipids vital to building bacterial cell walls. This is the mechanism that makes it so resistance-resistant: cell walls are much more complex structures, so trying to evolve around the antibiotic is far more likely to produce side effects that are evolutionary dead ends. A bacterium that tries to work around it is pretty likely to die in the attempt. Furthermore, this represents a potential new branch of antibiotic research, meaning this may well be only the beginning. It is also a vindication of the technique used to culture the substance: one that requires the specific environmental conditions present in soil, as opposed to a conventional lab culture.
This only works on Gram-positive bacteria. The outer membrane that keeps Gram-negative bacteria from accepting the violet Gram stain also lets them repel teixobactin. While staph is Gram-positive, E. coli and salmonella are Gram-negative.
While this novel substance seems safe for mammalian cells, human tests are still some time off. Furthermore, there is still no guarantee that some mutation down the road can't beat the odds and produce a teixobactin-immune cell wall that is still viable. There is also the question of whether this can be thwarted by other bacterial defenses such as biofilms (which have become notable for being able to survive exposure to concentrated bleach and even gases).