Re: And the software licenses will cost how much?
Terabytes, not gigabytes.
E7 systems with 3TB physical RAM do now exist.
1.6x performance gain in a system with 3x the RAM, everything else approximately unchanged, is quite plausible.
Indeed, I would be happy to use chip-and-pin-and-fingerprint. But the banks would never go for it (maybe some day in the midst of a fraud epidemic?)
I want to specifically congratulate _Sorry that handle is already taken_ for the magnificent pun. I was halfway to scolding about the mistaken use of "pendant" for "pedant", when it swung around and hit me.
Recheck again. Archaeoboffins have most recently decided that not only were Brontosaurs real and separate from Apatosaurs, but they in fact rate their own genus comprising three species!
Um. There are approximately 50 window managers in the Ubuntu repositories.
This returns 57 matches:
$ sudo apt install aptitude    # if necessary
$ aptitude search '?and(!~ri386,~Pwindow-manager)'    # ~P: provides window-manager; !~ri386: skip the i386-arch duplicates
-- of which half a dozen are essentially duplicates.
$ aptitude search '?and(!~ri386,~Psession-manager)'
returns 7 non-duplicates.
Neither of those is a good, comprehensive list of "desktop environments" available in Ubuntu. That would be some sort of matrix of window manager x session x who knows what else, numbering in the thousands of possibilities.
The problem isn't that you're stuck on Unity, but that there's an overwhelming sea of possibilities with nary a map in sight.
-4 years from now: raspberrypi.org/forums/viewtopic.php?t=7552
The author's notably short fuse makes me wonder whether the chaebol once blew up his dog...
My guess: some crucial security fix which is just too difficult to backport to the ancient 11.2 code base. For this reason to make sense, there also has to be at least one gigantic paying customer or strategic partner who firmly insists on continued Linux support. This was already a necessary condition for the previous ongoing 11.2 patching, but now we know that the insisting customer is even bigger or more strategic than we might previously have imagined...
Operating system initialization is extremely CPU- and chipset-specific. Showing that your OS tests successfully in 32-bit mode on a modern 64-bit x86 CPU is not at all the same as showing that it actually works on real 32-bit hardware. I'm talking about differences in page table setup, various control registers, workarounds for ancient bugs like "f00f" and the FP divide bug, etc.
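To make "CPU-specific" concrete, here's a toy sketch in C (entirely mine, not from any real kernel; a real check would also verify the GenuineIntel vendor string, and real init code runs long before there's a C library to print with): detecting whether a chip is in the f00f-affected family via CPUID.

/* The f00f erratum hits original Pentiums, i.e. Intel family 5, so
 * 32-bit boot code has to probe for it and install a workaround
 * (Linux remaps the IDT into a read-only page).
 * GCC/Clang on x86 only; __get_cpuid() comes from <cpuid.h>. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 unsupported -- even older quirks apply");
        return 1;
    }
    unsigned int family = (eax >> 8) & 0xf;   /* base family field */
    if (family == 5)
        puts("family 5 (Pentium-class): f00f workaround needed");
    else
        puts("no f00f workaround needed");
    return 0;
}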
(Aside: a good rule of thumb about OS testing is: if you haven't tested it, IT DOESN'T WORK. This isn't an actual identity, but it's close. Changes *here* have unanticipated effects *there*, so it's really necessary to test every supported scenario against every release.)
So, don't make any changes to 32-bit init code? But huge swaths of the x86 code are shared between the 32- and 64-bit paths. An attempt to freeze the 32-bit init code would involve changes all over the shared x86 arch part of the kernel. This would be potentially disruptive to the 64-bit path, and thus nearly impossible to get merged into the mainline kernel. So now you're asking distros to maintain a forked kernel for arbitrarily long.
Init code isn't the only pain point. Even if you get the OS to boot, you'll eventually find other subtle issues leading to data corruption, panics, etc., unless you are rigorously testing on real 32-bit hardware.
Meanwhile, old hardware can continue to run the same old software that it already runs. Nothing a distro does is going to reach out and retroactively destroy existing x86_32-supporting OS releases.
Embedded users shouldn't be too bothered since they rarely use "full fat" desktop versions of any distro. Many embedded systems build their entire userland themselves and don't really rely on a "distro" at all. It is irrelevant that they can't acquire things like modern browsers able to handle the latest web site tricks. (For most embedded applications, it would be a serious security problem if it was even *possible* to install a full browser...)
Actually http://theregister.co.uk/2015/11/06/blackberry_priv_review becomes http://theregister.co.uk/2015/11/06/blackberry_priv_review/print.html which redirects to http://theregister.co.uk/Print/2015/11/06/blackberry_priv_review and works; but including both /Print/ and /print.html redirects a second time, to (404) http://theregister.co.uk/Print/Print/2015/11/06/blackberry_priv_review
(Sloppy Crapmonster lives up to the moniker :)
@jake, thank Roger for Kentucky Fried Lizard Partes -- not Harry.
But indeed, RIP to both.
The article fails to mention the single worst feature of the PCjr -- at least the early version which was inflicted upon me.
Real IBM PCs had 15 characters worth of typeahead: if it was busy while you were typing, what you had typed was stored in a little buffer and played back later, when the next prompt arrived. If you typed too much (the 16th and subsequent chars), it would BEEP! to let you know that the extra chars were being ignored.
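For the youngsters, a toy model in C (my sketch, nothing to do with IBM's actual BIOS code) of how such a buffer behaves: a 16-slot ring that sacrifices one slot to tell "full" from "empty", hence 15 characters and a BEEP on the 16th.

/* Toy typeahead buffer: 16 slots, head==tail means empty, so one
 * slot is wasted and 15 characters fit. The 16th keystroke is
 * dropped with a beep, per the behaviour described above. */
#include <stdio.h>

#define SLOTS 16
static unsigned char buf[SLOTS];
static int head = 0, tail = 0;

static int put_key(unsigned char c)
{
    int next = (tail + 1) % SLOTS;
    if (next == head) {             /* full: 15 chars already queued */
        putchar('\a');              /* BEEP! this keystroke is lost  */
        return 0;
    }
    buf[tail] = c;
    tail = next;
    return 1;
}

static int get_key(unsigned char *c)
{
    if (head == tail)
        return 0;                   /* nothing typed ahead */
    *c = buf[head];
    head = (head + 1) % SLOTS;
    return 1;
}

int main(void)
{
    /* Type 17 characters while "busy": 15 queue up, 2 beep. */
    for (int i = 0; i < 17; i++)
        put_key((unsigned char)('a' + i));

    unsigned char c;
    while (get_key(&c))             /* played back at the next prompt */
        putchar(c);
    putchar('\n');
    return 0;
}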
PCjr? Oh my.
There was still typeahead on the PCjr. There was also still a beep. The semantic interaction between these, however, had been diabolically redesigned.
For some reason, the PCjr wasn't always able to receive a typed character while it was busy. Someone once claimed this was because of its lack of DMA; I never learned why. In any case, it *did* apparently have some inkling that it had lost a character.
The PCjr's somewhat more modest "bip!" therefore meant "I lost the one character you just typed".
At least that was the theory. Unfortunately, even the signal telling it that it had lost a character was flaky. What the sound actually meant was "I MIGHT have just lost a character".
Which meant that as soon as you'd typed 1-2, maybe even 3 chars, you got an audible signal meaning "give up, you have no idea what's in the input buffer now".
This stuff should be ideal for tinfoil hats (tip o'mine to JustaKOS who already obliqued this joke)
It has to be a good sign that this venture already has its own special dedicated RegTrollTard(tm).
Rather bummed that this was not a harrowing tale of survival at a Texas rootin' tootin' chiles-and-other-spices show...
Now a Valero (with an Arco [BP] across the street in the southwest corner)...
This is obviously an outrageous attack on OSS by Microsoft! They deliberately slowed the rotation of the Earth in order to insert a leap second, thus sabotaging dozens of services relying on Linux.
Android apps are mostly Dalvik (cough*Java*cough) bytecode; they should run just as "fast" on x86 as on ARM.
Presumably where there's ARM code, the phone uses some sort of JIT ARM-to-x86 compiler. This stuff used to be terribly slow (10-100x penalty). These days there is no technological reason it should cost more than about 2:1. That is, *if* they cared to develop or buy the very best, the penalty shouldn't be too bad. If they just slapped something naive together then it's probably back to 10:1 or worse... Benchmarks will eventually tell the real story.
And presumably popular apps which use native ARM code will eventually be recompiled as fat binaries or separate x86 packages.
My guess is that the current generation of Atom SoCs will prove to be perfectly adequate also-rans in the cell phone CPU arena. They will not compare successfully against the latest multicore ARMs like Tegra 3, Qualcomm S4, etc. Atom is only barely touching the compute-per-watt range of the newer ARMs.
When writing an app for a tightly controlled platform that has only one screen size, you can be forgiven for designing to the size.
On a platform with two screen sizes, you would be sort of stupid to do so, but many developers could be expected to be on that side of the line.
Android cell phones collectively have at least a dozen different screen dimensions. Add tablets and you're up to at least 20. Coding Android apps to care greatly about screen size is just plain stupid.
Desktop apps have a resize control in the corner of the window. Web apps get fed into browsers on all size screens, which live in windows with resize controls. Any strong sensitivity to window size is idiotic.
BTW there are a lot of idiotic pages on the web. This does not excuse them...
Kudos to Chris Watson -- and giant kudos to (ahem) dog pizzle if you set that up on purpose!
How do you think "50 million petabytes a year" gets reduced to "15PB" (a factor of about 3 million:1)? They're already compressing it incredibly.
Gen8-alias-G8 -- yep, that all works out...
They don't want confusion around WP8, so they don't announce their intentions, they just release a swirl of contradictory rumors.
Ah, I have now *earned* the "commentard" moniker. My Mysteriously Missing Missives were there all along -- waaay up higher in the discussion, threaded under what I was replying to. Duhhh.
I think I'll go do something useful now, like load the dishwasher...
Perhaps... "accepted" but being held by the moderator while he comments on them? That's cheating, you know :)
All three of my recent blatherings read "Accepted by moderator at [time stamp]" on my posts page. Plus my three from yesterday. It seems like the 6th, at least, should have qualified easily under the "5 happy posts in 3 months" rule.
Therefore, apparently it prints "Accepted by moderator" whether it's referring to a human or an automated system.
I'd prefer if it said "Accepted by automoderation" or something like that. Perhaps with a nice link to the guidelines anchored on "automoderation".
Now thoroughly confused. My last two posts suddenly both appeared at the same time on the "my posts" page; but neither has yet shown up here?!?
[I thought I posted this but can't find it in either the forum or "my posts"... going senile...]
I wasn't asking about how moderators handle anon posts, but whether the <i>system</i> retains knowledge about who posted each anon post and whether the resulting scores accrue to the real poster. Then I decided you probably had to retain authorship information for various legal reasons; and it really would make sense to charge people for their anonymous misbehavior. So I probably answered my own question, but still seek confirmation.
Plus I get to check myself for HTML Super Powers...
I wasn't asking how the moderators handle anon posts, but how the scoring system does.
When an anon post is accepted, rejected, or removed after acceptance, that's a scoring action that *could* accrue to the actual commentard account that created the post. If the database keeps track of that, etc.
Or anon posts could be truly anon (at least in that regard), i.e. their ownership could be completely whitewashed as soon as they were injected into the review queue, leaving no way for the system to accrue the score.
I guess for liability reasons, if nothing else, you probably need to hold onto who posted what, even anonymously. So I'll venture a guess that anon posts do accrue to your score...?
Seems like it should be more sophisticated than that. I seem to have posted 57 times since April 2007, so that's what, 58 months, almost exactly one a month. Sporadically, of course.
You should either have a "lifetime achievement flag", or do it in terms of good:bad ratio over the commentard's entire posting life span.
Hmph. Commentard (and hmph) not in Opera's dictionary.
by posting a stream of inoffensive low level drivel, just to keep above the 5-per-3-mo line.
Some random questions along those lines:
- If you post AC and it's accepted, does that accrue to your account's total?
- If you post AC and it's accepted, then flagged/reported by a bunch of users and eventually removed, does *that* come out of your account's hide?
- Finally, if you have posted several messages before the moderator gets to any of them, and one of those causes you to reach the 5-per-3 threshold, do the rest of your queued posts suddenly self-moderate?
I don't post that often, so I guess I'll be throttled. Meh.
They shouldn't have changed the whole island. Leave a narrow strip on the beach in UTC-11 (or whatever it is). Then they can still advertise "last island to see the sunset" PLUS "walk back and forth across the international date line" (time travel the easy way...)
I live under the cloud of PG&E (Pacific Gas & Electric -- northern California). So when I went to investigate my meter I ran across some city of SF documents addressing these concerns (sorry, didn't save URL).
In sum, from memory: the system rolled out in SF uses 2.4GHz but not WIFI. Each per-meter unit emits 4 packets a day, each packet is some number of milliseconds (<100 I think). Transmission power is <1W. Transmission power and length are hard-limited by running the transmitter off of a slow-charge capacitor. Several hundred thousand per-dwelling transmitters. The receivers are on towers (existing power or phone poles), 77 of them in the city. They receive the individual transmissions and also send (at 2W) a once daily time sync packet. Collected data is transmitted over a cellular radio, not particularly different from a random person talking on a cell phone, except it's 20' in the air; data transmission could run for as much as 4hr/day per receiver, though that's a worst-case-in-many-ways calculation.
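Back-of-envelope, taking those figures at face value (my arithmetic, not the city's): 4 packets/day x <100ms each is under 0.4 seconds of airtime per meter per day, out of 86,400 seconds. That's a duty cycle below 0.0005%. For comparison, a WIFI access point beacons roughly ten times every second.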
So, nothing to worry about *here*. Which is not to say that designs elsewhere couldn't be much worse.
Oh, and that's nothing to worry about with regard to interference, bandwidth use, personal irradiation etc. Feel free to freak out about whether they're reading your usage accurately or are all part of a Big Plot...
Speaking as an RF newbie:
What equipment or other tools would I need to investigate this in regard to my own house's smart meter (in a different utility's clutches)? I suppose I should start by checking whether it has an FCC ID printed on the case.
Links to helpful do-it-yerself FAQs etc.?
I begin to wonder if some of my in-house WIFI flakiness is induced, not just inherent in the protocol...
I haven't watched the video or searched elsewhere, but ... it sounds entirely plausible that these sensors would be deployed on a sheet of siliconE, a stretchy material that can be made into thin sheets. Silicon, the element, isn't so stretchy.
I see the article itself has been patched to read "silicon". Which is probably wrong.
> At some point you will hold the compute power and memory storage of a Cray Y-MP in your pocket.
I believe that point would be Today.
Newer smartphones have 1GiB RAM. A common SoC implementation, nVidia Tegra 250 T20, has >5 GFLOPS in its GPU and two 1GHz integer cores.
According to Wikipedia, the original Y-MP series topped out at 8 processors of 333 MFLOPS each (total 2.7 GFLOPS); and a princely 512MiB of RAM. The minimum configuration had 128MiB RAM and 666 MFLOPS.
So you can certainly have the power of *a* Y-MP, and arguably as much power as the biggest configuration you could order when the Y-MP was announced. Not to mention a whole cluster of Cray-1s (4MiB RAM!, 250 MFLOPS if you really push it).
Later derivative models (which tended to drift away from the "Y-MP" designation) may eventually have gotten as powerful as a throwaway desktop available today, e.g. $900 Lenovo Ideacentre 7727-5DU with 3.4GHz quad-core i7-2600K (+ 3.8GHz turbo + hyperthreading), 200 GFLOPS video chip (Radeon 6450), 12GiB RAM, 1.5TiB disk.
Yes I know GPU GFLOPS are talking about single precision and they're only about 1/4 as fast at double precision. So if your desire for a Y-MP includes double precision floating point vector processing, you'll still need to drag around a wagonload of Cray hardware to (slightly) beat your smartphone.
The Cray probably blows the socks off the desktop, not to mention the phone, in I/O bandwidth. Or maybe not. It didn't have a bunch of USB & FireWire ports...
Better refresh is a solvable software problem. Keep track of the last N screens (since last full refresh, if any). Watch for pixels which have been toggled back and forth (or whatever it is that makes them blurry). After drawing the new content, go back and reinforce the color states of pixels which have state histories most likely to be blurred.
IOW, do a full refresh but only do it to pixels likely to need it.
You do the page flip first so the user experience is "instantaneous"; then go back and correct the few pixels that need correction. Or -- if the number of pixels needing fixing tends to be small -- do the pixel reinforcing inline with the regular page draw.
"Likely to need" is a heuristic which presumably can fail. So provide a user action to do a full refresh -- which they will hopefully never need to use.
Ultimately this action should be happening inside the e-ink display itself: each pixel remembering 2-3 past states and self-reinforcing when it likely needs to. Vaguely like having a data separator (ancient floppy & hard drive tech...) built into every pixel.
The article says "up to 70 per cent less energy per bit", so in theory it should be a bit easier to cool than current tech. At same density, anyway. And if their fantasy numbers come true...
Why must they publish a new spec for this thing? Use 2^n layers (2^1 initially, I suppose) and just use some of the high address bits as the layer selector. Or some of the low address bits -- whichever arrangement performs better.
Yes, there might be some extra performance to be eked out if the memory controller is more specifically aware of the new arrangement. So OK, bake in some new out-of-band signal a newfangled controller can use to access new info, but keep it within existing signaling so the same memory can be used on old systems.
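In toy form, the decode I mean (my sketch; the field widths are invented):

/* With 2^n layers, peel the layer index off either end of the
 * address; the rest behaves as an ordinary flat address.
 * LAYER_BITS and ADDR_BITS are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

#define LAYER_BITS 1                 /* 2^1 = 2 layers initially */
#define ADDR_BITS  32                /* hypothetical address width */

/* Variant A: high bits select the layer (two big contiguous halves). */
static unsigned layer_high(uint32_t addr)
{
    return addr >> (ADDR_BITS - LAYER_BITS);
}

/* Variant B: low bits select the layer (interleaved; consecutive
 * accesses alternate layers, which may perform better). */
static unsigned layer_low(uint32_t addr)
{
    return addr & ((1u << LAYER_BITS) - 1);
}

int main(void)
{
    uint32_t addr = 0x80001234u;
    printf("high-bit decode: layer %u\n", layer_high(addr));   /* 1 */
    printf("low-bit decode:  layer %u\n", layer_low(addr));    /* 0 */
    return 0;
}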
"slimline form factor (114 x 65 x 11m)"?!
Let's assume the screen's 90% of the width, so ~58.5m. Divide by 480 and we see that pixels are about 12.2cm across -- about the size of a standard CD.
Sorry, won't fit my pockets... not even in this...
"PC tinkering ended within 1-2 years"? What?!?
8086, segmented 286, expanded memory, extended memory, 386 protected mode, virtual 8086, SMI, PAE, AMD64;
x86, Weitek, 8087, MMX, 3DNow!, SSE, SSE2, SSEinfinity, AVX, FireStream, CUDA, OpenCL;
ISA, EISA, VLB, PCI, PCI-X, InfiniBand, PCIe;
ST506, ESDI, SCSI, ATA, SATA, SAS;
RS232, IRDA, USB, FireWire, BlueTooth, Thunderbolt;
UHCI, OHCI, EHCI, XHCI;
CGA, EGA, Hercules, VGA, 8514, XGA, ... ... ... nVidia vs. AMD;
Shall I continue?
"4 years?! In less than one year with Ubuntu 9.10 my notebook has required countless post-update restarts."
No, you had "countless" updates where it _said_ you should reboot. You didn't have to unless there were kernel changes you urgently wanted to activate.
With Windows Update (XP at least) you _cannot_ continue normally after an update. Windows Update will not run again until you reboot: you can't install further updates or check your update status.
The reboot advisory on Ubuntu is just an advisory. If you ignore it, you can still do further updates and the system works normally -- you just lack the kernel portion of the update. Since the kernel updates usually have nothing relevant to me, I see no reason to disrupt my system.
I've been running Ubuntu for 4 years now and I can't remember an update when I actually needed to reboot.
Wait, scratch that -- when I upgrade from one entire release to the next, I reboot.
For just about anything else... if it's a daemon, it gets restarted. If it's an important library, daemons linked against it get restarted. If it's the kernel -- well, I read the kernel update logs, I rarely see anything that makes me want to reboot immediately. Sure, there are a couple of useful fixes that I'll enable some day by rebooting.
I've found the Debian apt/dpkg installation & packaging system to be amazingly reliable, leaps and bounds beyond anything else I've used.
Certainly in this case, where the problem was being able to boot your other OS, there's no point in rebooting Ubuntu! Just keep using it until, in the natural course of things, it's time for you to boot the other OS. (OK, there is one reason: to confirm that the update actually fixed the problem. Which is entirely _your_ choice. No forced reboot. You'll find out naturally without having to boot prematurely.)
I'd say my reboots are about evenly distributed between: reboot to absorb a new OS release; I screwed something up; power failed for long enough to drain laptop or UPS batteries; or some sort of crash. Yeah, it does crash maybe a couple times a year.
I contrast that with the monthly reboot shoved down your throat if you're running some sort of Windows. Yecchhh.
No home automation needed. Put a valve on a pressure-fed return. The user turns on the valve to cycle the cold water in the pipe back to the heater, and switches over to the outflow tap when it's hot enough (the user will know from experience how long that takes, after the first couple of days; or can probably feel the heat in the fixtures).
There's a bit of cost in running the pressure to cycle the cold water back, but it's less than the cost of continually cycling. You also come out ahead thermodynamically. Continuous circulation means you're always exposing the hottest water to the coldest environment, offering a steep slope for heat to escape.
There's also a design-in cost for the valves, pressure system & return pipes from each hot water tap.
The low-tech equivalent: keep a bucket at each tap. Run the hot water tap into it until it's as hot as you want. Set bucket aside, make merry with hot water. Eventually carry the bucket back to the hot water tank, pour it in through some sort of manual valve. But: you lose more heat that way unless you have a slave to take the bucket back immediately; and heck, we know that modern humans are too lazy to do something like that...
I downloaded Opera 3.62 (2000-02-27) just for laughs. It has a "disable scripting languages" setting that might apply to both. So your information is somewhere between 9 and 10 years out of date. (Actually it has separate "Enable Plugins" and "Enable Scripting Languages" settings, and it used a Java plugin, so I think even 10 years ago it had separate killswitches -- though it's true that killing Java would kill any other plugins as well.)