Re: Google admits there are INFINITY MILLION bugs in Chrome!
The writer of the story doesn't get it, ∞ clearly is a peanut.
Now if they'd offer ∞ ∞ , that'd really draw the monkeys !
Last time I said that, some business boffins gave me a dozen downvotes.
Well, maybe because I couldn't hide my belief (which I can't be bothered in the slightest to back up by data) that the main usage of data mining in business is to justify decisions. Make a decision, then look for data to provide a justification.
Not the scientific method. But apparently very practical in business and politics.
Well, more like, sticking to their way of making phones - development teams "competing" with each other for ever more lofty and remote goals, using SymbianOS versions obsolete by the time Nokia set the project up, then burdening it with irrational half-way-wrencharound managerial politicking ...
... and they held on, after the enemies closed in, surrounded them, attacked from all sides, till the last bullet, the last drop of blood, the final breath ...
[ that said, Godwin's law ... let them and the thread rest in peace ]
You mean, the new Oracle salesman pitch will be:
"Would you want some free hot chips with your beeleeon-$$$-DB ?"
Oracle is doing something right ... Sun used to give software for free as long as you bought enough hardware. This will be the first one where you get the hardware for free as long as you shell out for the full DB license.
Both gzip and bzip2 parallelize well only for compression - because that's a "blocked" operation, i.e. a fixed-size input block is transformed into a hopefully-(much-)smaller output chunk. The latter are then concatenated into the output stream. Since the output is a stream, there's no "seek index table" at the beginning, and hence one cannot parallelize the reverse operation in the same way. You only know where the next block starts once you've done the decompression and know how far the "current" one extends. While one can "offload" some side-tasks, the main decompression job is single-threaded in both the abovementioned implementations.
One can, though, obviously compress/decompress multiple streams (files) at the same time. That's what ZFS uses, for example - every data block is compressed separately, and hence compression/decompression on ZFS nicely scales with the number of CPU cores.
[ moral: better to use a compressing filesystem than to compress files in a 1980's filesystem ? ]
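The compress-in-parallel / decompress-serially asymmetry can be sketched in a few lines of Python. This is a toy illustration using zlib on fixed-size blocks - real gzip/bzip2 framing differs, and the block size here is an arbitrary pick:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 128 * 1024  # illustrative pick; bzip2 actually uses 100k-900k blocks

def compress_block(block: bytes) -> bytes:
    # blocks are independent, so this step parallelizes cleanly
    # (zlib releases the GIL on large buffers, so threads genuinely help)
    return zlib.compress(block)

def parallel_compress(data: bytes) -> list[bytes]:
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(compress_block, blocks))

def serial_decompress(chunks: list[bytes]) -> bytes:
    # in a real concatenated stream there is no index: chunk N+1's offset
    # is only known once chunk N has been fully parsed, which is why the
    # decompression side stays essentially single-threaded
    return b"".join(zlib.decompress(c) for c in chunks)
```

Keeping the chunks in a list sidesteps the "where does the next block start" problem that a flat on-disk stream actually has - which is exactly the point the post makes.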
There's T-series/M-series CPUs - which are all Oracle SPARC.
Then there's T-series systems - which are all Oracle, using Oracle T4/T5 CPUs in the systems of the same name.
And there's systems colloquially termed "M-Series".
Of which only the M5/M6 (and M7 to come, unless Oracle chooses to rename the system before launch) are Oracle, and use Oracle SPARC CPUs of the same name.
The older Mx000 and current M10 systems, though, are designed by Fujitsu, and use Fujitsu's SPARC64-series CPUs (in the M10 series, the SPARC64-IX - the "commercial spawn" of the current K Super). At Hot Chips, Fujitsu also presented on the SPARC64-XI - to go into the post-K Super, and possibly later into (an update of) the M10 series of systems.
No one quote me on all these names and numbers please - refer to the vendors' marketeeting departments for the canonical incomprehensible advice instead, and to their legal departments for even more incomprehensible guidance on trademark usage.
Haven't used the boffin icon for no reason. Couldn't help the inner scientist when someone talks about strange things in the context of quarks, as if the stuff that stuff is made of isn't strange enough already. I'm sure Zaphod would approve.
For an early morning comment on particle physics, that really tops it. Couldn't you find something more charming to say ? Commenting clearly reached a bottom, but I don't dare to predict whether things are going down further, or finally up again !
One more quark, that's when it'll get really interesting :-)
I'm almost tempted to say "QED" ... as I've quoted without context to make my point more obvious. The mere mention of "cause" in the same context [ sentence ] as "statistical correlation" is a huge b*llsh*t honeypot. People have gotten it into their heads somehow that statistics prove causation, and the explicit mention (as you made) of first needing a testable theory with a claim of causation and a prediction of measurable changes - that's so conveniently left out.
I do wonder who started down that slippery slope ... when I retire [ never ... ] I may do a PhD on the history of politics/economics to find out :-)
If even the author of the article makes that basic mistake, then there's nothing left to conclude but you're correct - the purpose of statistics is to prove someone's point / back up someone's claim, not to ... learn anything from the data.
Go ... [massage] figure[s] !
makes Evils into Elvis. No doubt Oracle impersonates one or the other.
I need Friday-beer-drinking-grammar-nazi combi icon.
that's how fairy tales start, vulture-turns-swan ...
(troll icon for schmoozing up to El Reg ... can't not do it)
Intel hasn't always been the big name in computers and servers. In the 80's, there were minicomputer companies (anyone remember names there other than DEC VAX ?), and a big bunch of mainframe peddlers (anyone remember names there other than IBM ?). In the early 90s, the various UNIX vendors replaced the minicomputer ones, and the mainframe bunch shrunk. Late 90s / early noughties, [LW]intel dug their way into the server space, and "UNIX proper" shrunk. Intel today isn't more dominant in the server space than IBM was in the 70s (huge, but neither without alternatives nor without competition), nor do they (Itanic, wink !) always succeed in big-iron projects they start.
The memorial halls of the computer industry are littered with former "industry kingpins" that missed the next key trend, or invested too much money into the wrong projects.
"See my works, ye mighty, and despair !" - the last one is self-referential, in the end, always.
... you mean, they really bother creating a no-op installer, something that just pretends to install java but doesn't ?
(can imagine Oracle charging for that one ...)
Apples and oranges. Even if either came for free, only one can turn into orange juice.
It's not a bad thing at all that it isn't GPLv2 licensed; but that it's not licensed GPLv2-compatible (as BSD or LGPL would've been), and/or not dual-licensed, that limits applicability.
The result is that there's a great opensource filesystem with far far less traction than it deserves. If Sun wanted to make Linux developers envious by dangling all these nice technology carrots, they surely succeeded to a degree. But if you want to motivate contribution and/or use, inciting envy is more likely to result in the opposite.
I've not called you any names, really - apologies if you misunderstood me there. Nor have I ever claimed Linux scales to "ludicrous" (in the Spaceballs sense) "numbers of CPUs" (measured by whichever gauge). 32 CPU cores on Linux are "pretty much ordinary" these days, and such systems perform "good enough".
I also agree both that there are workloads which exceed what you can do with "whitebox HW running Linux", as well as there being hardware/software vendors providing systems capable of running such workloads. Interesting where you need it - if you need it. Technically fascinating ? You bet !
I've merely made the observation that the "good enough" frontier has continued to steadily encroach into that terrain. The pond full of "stuff Linux / x86 hw doesn't do [ well ]" hasn't dried out completely yet, although neither have I personally seen signs of the water level there rising.
I apologize to the readers for not having given the proper icon first way round. After all, the orig comment I replied to was about 32 CPUs, not 32 CPU sockets ... in any case, I agree that "more" of some sort will give you bragging rights amongst a certain audience.
Highly profitable the high end may still be, but I'd stand by the assertion that the number of sharks in that pond hasn't changed much, while the pond is drying out. And those who dip their feet in there are either very brave, very foolish, or so desperate as to have no other choice.
Refresh your tech knowledge. There's quite a few options in the x86 space these days that offer 16 or 32 CPU cores. Even if you count chips / sockets, you have a range of choice in the 8-socket space (remind me there, how many CPU sockets did a T5-8 have, again ... ?).
Yes, there's not that many x86 servers out there that can have 32 CPU sockets. If that's what counts for you, go IBM / Fujitsu / Oracle.
No, there's not exactly a huge number of usecases for iron of that size. A bunch of two-socket boxes behind a load balancer does, these days, often beat the "big fat box" approach not just on latency/throughput but much more so on price, and/or price/performance.
Or, to put it differently, CPU-performance-wise, what a Sun E10k did in 1997, a Samsung Galaxy S4 smartphone chip does today. Probably more. You just no longer need 64 CPUs for this.
Enterprise workloads have grown nowhere near as much as have the abilities of "cheap" (x86) hardware.
That's the bane of the non-x86 vendors; same number of sharks in an ever-shrinking pond. Are those "big" boxes faster / can they process more data in shorter-a-time ? All yes, but the deciding question really is: "what box do I need for my workload ?".
There's the first mass-market product with a 64bit ARMv8 processor, yet no one considers it a breakthrough ... on the other hand, as with cars, having a sports car with a flat-12 engine doesn't mean acceleration / speed are any different from the competitor using a V6. To be proven ...
Must admit that I'm otherwise sort of underwhelmed as well; good design/looks and well-thought-out usability are selling points for sure. Apple made their brand on that for decades, and haven't failed on that front with the new iPhone either.
But visible, in-your-face-cool features which weren't available in either Apple's own or its competitors' older models, where are those ? Apple hasn't even played catch-up here, much less leap-frog. And there are even unique Apple facilities the new baby could've used, like, how about a thunderbolt interface to connect the phone to accessories, like, gasp, external storage ... How about a geeky change to the camera app to do manual focus / aperture, say, using the volume control buttons ? Or camera RAW, another first (?) in a smartphone, definitely if you did it for HD video using your own new codec ? Maybe a well-working OCR app that'd allow me to create my own ebooks from the old paper collection ?
It's an expensive development board to get your hands on ARMv8, if that's what you want ...
... and there was me thinking HTML5 had finally succeeded in killing it, once and for all.
Foiled again ! The spectre never dies ...
Time to buy Microsoft stock. They must've plastered that path with patents thrice over.
... let's see if a proper flamewar on units of measurement and/or the value of welsh oppression measured in metric will prove once and for all that nerdy engineering people have just as strong feelings as everyone else !
Does the paper at the very least provide properly error-corrected measurements for emotional suppression in units of mmHg ?
Reading the comments, I can't help but wonder - what has happened to honest, outright, direct communication ? Can one no longer name a turd for what it is, and call out stuff that stinks for its smell ?
I don't get this sensitivity thing. Being bullied behind your back is far worse than being called names outright and face-to-face. Yes, conversations can turn into shoutfests, but with those, at least the release of anguish and emotion prevents the buildup of resentment and desire for revenge.
What usually happens, though, is that the project manager you p*ssed off will stop talking to you, but give devastating feedback to your boss' boss, and your boss, and nine months later you'll be told that for all your great work, you need to improve on your interpersonal/communication skills before you can "reach the next level" (read: no extra peanuts for you, monkey, and definitely no promotion to chimp). Of course, all feedback is confidential, so good chance you won't even know who the guy was that blackmailed you.
Sod it. It's Monday, and I'm thinking of beer. Or read this: "Go, Linus !".
You can't assign anything to Torvalds, so for correctness, this has to be:
BOFH = Torvalds;
remember, C requires statements to be terminated by semicolons.
Don't know your perl operators ?
"." doesn't multiply but concatenate. If anything, "6 x 9" is 666666666.
(is that in the 0.5% usefulness or in the 99.5% "stuff" ?)
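For anyone without a Perl handy: Python's string operators behave the same way, so the point can be checked there (the Perl equivalents are in the comments):

```python
# Perl's "." concatenates strings and "x" repeats them;
# Python's "+" and "*" on strings behave the same way.
assert "6" + "9" == "69"        # Perl: "6" . "9"  ->  "69"
assert "6" * 9 == "666666666"   # Perl: "6" x 9    ->  "666666666"
assert 6 * 9 == 54              # plain numeric multiply, for contrast
print("operators checked")
```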
... is rarely found in textbooks, from my experience.
Why is that so ? Because textbook code is, by virtue of its objective - teaching coding - too simple and too small for the issues associated with paid-for / commercial software development to pop up. It avoids covering issues encountered in developing software not strictly associated with the technical act of coding itself.
Amongst those, in no particular order:
Teamwork - the absolute necessity to either work with other programmers or work with project managers, business requirements, your boss and her/his budget - is usually absent from textbooks. If no framework for this exists in the software project at all, it'll end up with interesting "warts" - think of massive code diffs in a series of subsequent commits for no other reason than two developers having set their IDEs to "auto-indent" with different rules. Or imagine the odd little key component in an otherwise-pure-Java project pulling in Jython plus a bunch of 50 Python modules including a few binary components written in C++, because one developer got away with writing a 50-line piece in Python using those modules since that was so much simpler and faster than the 500 lines it'd have taken in Java. Think about the "write to spec" warts that occur because specs and tests are done first for "validate known-to-be-valid input only", the code is written that way, and half a year later someone using it is terribly surprised by the security holes blown into it by obviously-invalid input ... never spec'ed, never tested, never requested ... and no two-way communication anywhere between requirements mgmt, technical architects, project managers, testers and coders. And where in a textbook have you ever seen an example so big that you _couldn't_ implement it on your own in a month (Knuth's level 3.5+ exercises notwithstanding) ?
Complexity - textbooks often are extremely brief on this one, and C. A. R. Hoare's "The Emperor's Old Clothes" is quoted nowhere near as often as it should be. Abstraction, Encapsulation, Specialization, Generalization and other "layering" patterns are often covered extensively from the "how to do it" perspective, but warnings about the dangers of extensive layering, or guidelines on "good usage" (do you really need to subclass if all you want is an object instance that differs from the default-constructed one in a single property ?) - "when not to use it", to put it differently - are often absent. The coverage of interfacing with pre-existing libraries is often scant. As a consequence, you get a bunch of new interface layers here, a new generalization there counteracted by an additional specialization elsewhere, and multiple sequences of conversions resulting in a Chinese-whispers chain of data transfers. Every layer on its own might be a great, well-written, perfectly documented and admired piece of code ... but the combination screams at you just as much as the result on the floor after a night drinking great wine after a Michelin-starred ten-course meal.
Libraries - face it, you're not going to write everything from scratch. But which Java programmer knows even a third of the Java 7 classes by name ? Which C++ programmer can use a third of Boost without the manual ? Which C/UNIX programmer knows how to use a third of Linux syscalls ? And that's only _standard_ libraries. Not to talk about the vast worlds of things like Perl or Python's component archives. Effective programming often means effective research into what existing library already-in-use-elsewhere in the project will provide the functionality that you've been considering reimplementing from scratch. Which book teaches this ?
Legacy - textbooks like to advertise the latest-and-greatest; of course it's a necessary part of work life to keep your skills up, and to follow changes and enhancements in the technologies touching your area of expertise, but it is even more important to develop "code archaeology skills". That is, go backward in time and check out what was considered good practice in 1990, what was cutting edge in 1995, which code patterns and styles were considered best in 2000. Go to the library and read a book on J2EE from 2002 as well as the newest one on Java 7, then grab a 1992 and a 2012 Stroustrup. Read and compare the Documentation/ subdirectory of the Linux kernel sources in the 2.2 and 3.4 versions. Much of this reading will trigger "duh" moments of the same sort that you'll inevitably encounter when you start to look at existing code. Nonetheless, textbooks (or, even more so, lessons/lectures) that emphasize how mistakes are identified, rectified and avoided in future projects are barely existent, else F. Brooks' "The Mythical Man-Month" wouldn't still be so popular nearly 40 years after it was written.
Evolution - coding, and I object to adherents of "software design" here, is never intelligently designed. In the beginning it's created, and from that point on it evolves. It grows warts, protrusions, cancers that hinder its adaptability to the tasks it's set just as much as new nimble limbs and telepathic powers to help it. A wart for one usecase can be the essential sixth finger for another. Code Veterinarian can be a dirty job, and the experience of humility involved in dragging the newborn out past the cr*p of the parent is not something you're prepared for by reading about it, because books don't smell. Often enough there'll be the revolting ethical challenge of becoming a Code Frankenstein as well - when you stitch a few dead protrusions, dug out of a code graveyard by someone, onto the already-mistreated beast. It's like textbooks that tell you about the fun of making babies but not about how to change their nappies or how to deal with teenagers having a fit.
All of these one can learn to live with; none of these are necessarily perpetuated ad saeculum saeculorum.
Thing is, "textbook perfect" isn't the same as "beautiful" and that isn't the same as "best for the usecase". The grimy bit about software development is that a really beautiful solution can be built from frankensteinish code, and to develop the skills as well as the thick skin to be able to do this, no textbook I've seen will prepare you for.
(I still like coding, but sometimes I think I should've become a vet)
Educatedly guessing there, but it might be that the 386 was the first one manufactured at structure widths on the order of visible-light wavelengths (i.e. not significantly more than a micron). That'd give color effects, because the structures then act like diffraction gratings, and the whole die looks like areas of color. Larger structure sizes don't cause this effect, at least not at close-to-perpendicular angles of incidence, so those dies would look largely grey - apart from intrinsic coloring of the material used.
If you look closely enough, you'll notice some parts look reddish on the 4004/8008 (probably copper contacts), and the 286 one has that little red coil-like structure on the right edge. I'd contend all these pictures are in color.
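The grating argument is easy to put numbers on with the first-order grating equation d·sin(θ) = λ. A rough sketch - the 1 µm and 10 µm pitches below are illustrative guesses for 386-era and earlier processes, not datasheet values:

```python
import math

def first_order_angle_deg(pitch_m: float, wavelength_m: float) -> float:
    """Grating equation d*sin(theta) = m*lambda, solved for m = 1."""
    s = wavelength_m / pitch_m
    if s > 1.0:
        raise ValueError("no propagating first order: pitch below wavelength")
    return math.degrees(math.asin(s))

GREEN = 550e-9  # mid-visible wavelength, in metres

# ~1 um structures throw the first order out at a wide, very visible angle (~33 deg);
# ~10 um structures barely deflect it off the specular direction (~3 deg).
print(round(first_order_angle_deg(1e-6, GREEN), 1))
print(round(first_order_angle_deg(10e-6, GREEN), 1))
```

So micron-scale features spray visible light across a wide range of angles, which is exactly the "areas of color" look; coarser features keep the diffracted orders so close to the mirror direction that the die just looks grey.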
configure / autoconf doesn't make me wonder - it makes me curse, swear and use sewer language of the worst kind. Nuking it from orbit is too kind a death for it.
It's not a tool, it's a non-tool. Full agreement with the *BSD ranter there - no one bothers understanding autoconf input / setting it up properly; it gets copied-from-somewhere and hacked-to-compile; if "development toolkits" provide/create autoconf files for you, they're usually such that they check-and-test for the world and kitchen sink plus the ability to use the food waste shredder both ways.
The result is that most autoconf'ed sources these days achieve the opposite of the intent of autoconf. Instead of configuring / compiling on many UN*X systems, you're lucky today if the stuff still compiles when you try on a different Linux distro than the one the autocr*p setup was created on.
It had its reasons in 1992, but the UN*X wars are over; these days, if your Makefile is fine on Linux it's likely to be fine on Solaris or the *BSDs as well. Why bother with autoconf ? Usually one of: "because we've always done it that way, because we've never done it otherwise, and by the way, who are you to tell us !"
... given they sold 2.9M WinPhone-Lumias but 6.3M smartphones in total, does that really mean more than 50% of what Nokia sells as "smartphone" are still Symbian devices ?
Wow. I'm truly impressed. All the backstabbing, burying-(half-)alive, torching, butchering, burning, throwing-off-platforms etc. of Symbian, but the zombie just won't go away.
Somehow, other ex-Symbian-licensees like Samsung, Sony, LG, ... didn't need two years to get their product line over to Android. Wishing all (ex-)Nokianites good luck!
That man begs to rant. Granted, there might be differences hidden in the sheer amount of drivel. I usually tire before I find them though. But then, Tomi Ahonen writes like a real Nokian (Nokianiac ? Nokiate ?)
when Larry says "just double it", then consider that doubling the length of a yacht won't double its speed, but will more than double its cost.
It all depends on what exactly you double, and there are many ways of doubling "performance". As there are even more benchmarketeering ways of measuring doubled performance.
Sun sailed that course for many years with this idea of "sum(more cores) > sum(fast cores)". It seems Larry got converted, like if there's not much else to show then at least show you can double "it" - find a suitable "it".
I don't doubt there's some market for these, it's just that it's shrinking, not growing; what you can do these days with a $20k x86-based server you couldn't do with a cluster of ten Sun E10k's fifteen years ago. "High-end" computing is becoming a commodity, and that's not a trend which will reverse any time soon.
Solaris, on the other hand ... give me more. Site licenses for Solaris, for example, would be a great way of knocking crimson headwear peddlers out in places. Relay that to Larry if you would ? Something like, we'd be happy to more than double our use of Solaris - though, if and only if we can cap the license costs ...
well, "you don't get that from either Intel or IBM" - not quite so. IBM maybe, but Intel's roadmaps as far as one of their tick-tocks ahead are usually very well published (by Intel, in fact) and much talked about. Who cares that you can't buy Haswell-based Xeons till mid-2014 ? Everyone knows they're coming, the instruction set enhancements as well as throughput/latency figures for the CPU as well as some chipset details are out there, not just in leaked NDA presentation slides but pretty much all over the tech press.
On the other hand, maybe you meant "Itanium" when you said "Intel" ?
... when my brother-in-law used to ask me "how much exactly again did you spend on that astroboffinry stuff ?" I only asked him back "how much exactly are you putting into your Harley-Davidson fund?".
I don't think for my astro-spending I could've gotten a Harley quite yet, nor an Audi TT instead of a VW Golf, or a luxury Mauritius holiday, or even, gasp, a pint a day for a decade. Still, had a lot of enjoyable nights out, of the other kind. Everyone to their own.
Why is it that whenever the topic of "Agile development" comes up, someone feels compelled to state that it needs to be done properly ?
The answer, I guess, must be that doing it properly is a) hard and b) anything but obvious ?
Or maybe it's that it's much easier to specify "Agile is not ..." than "Agile is ..." ?
whenever someone comes up with this "prodded with a finger [ from orbit ]" thing, the image of the opening scene of "The Last Remake of Beau Geste" is conjured up in my mind. Is it just me ?
(Icon for obvious reasons, at least once you've seen the scene ... not downvoting anything here)
If anything, they're "copying" (shall I say: "improving on", because the Samsung thingy does the photography stuff right, good lens, optical zoom) the form factor of the Nokia 808 PureView. I'd be quite interested in an image quality comparison between the two.
<quote>(protip: Java is an ex-SUN asset)</quote>
Need to correct you there. Java is an ex-Sun liability. It might've been an asset for Oracle and/or IBM. Never really for Sun ...
you mean a good dirty dozen like they used to have in their S60/Symbian days ? One "flagship" and some 25 satellites of odd colors, shapes and sizes ? And possibly fries with it ?
I'm not so sure that Nokia did itself a favor with that "variety" (rather, varieté).
Sometimes, less is more. Even if they're not going all the way to Fruitycorp-style product line clarity. Witness the "dozens and dozens" Nokia still has in their S40 lines.
I've done both - applied to some jobs where I've met all the criteria, and applied to some where I didn't. In cases where I got the job, I ended up utterly bored whenever I seemed "the perfect fit", but if there was an element of the unknown in it, I enjoyed and stayed in the job for many years.
Can only speak for myself there, but I guess the boost from someone else saying "we believe you can do it" is better for my motivation and willingness-to-strive-for-it than the intrinsic "I believe I can do it" thing. Whether this externally/internally-induced motivation thing is gender-biased I don't know, though.
Otto Hahn got his Nobel prize (for the discovery of uranium fission) in chemistry, not physics. "Nuclear chemistry", alchemy's finest hour. Well, until these "elementary" particle physicists came along and got all the spotlight ;-)
You mean like use the toilet paper to mop up a little mess, the wellies to remain clean if the mess is a bit deeper, and the gear to climb up the phone mast once the mess levels rise significantly ?
By those standards, right now, Nokia is probably three quarters up the phone mast. Toilet paper and wellies surely have outlasted their usefulness for Nokia. Seems a bit like Nokia's leadership have dreamt about building Icarus wings for too long ... and while the mess levels are still rising, the parching Sun is melting those winglets Nokia hoped to take off with.
As an ex-Sun(shiner) I take offense at the statement that the Solaris/Itanium port _failed_.
It ran just fine in the lab. No one wanted to have it - not a single then-Sun or then-prospective-Sun customer asked for it (very different from the "canning" of Solaris/x86 a few years later).
Sun simply decided that there'd be nothing to sell, so it never was pricelisted / made available.
So are you or aren't you ?
You start your post with "I'm actually long on Nokia stock", but finish it with "Too early to jump in but I'm watching closely". Sounds like either a very quick reaction to a turn in the markets, like someone having gotten cold feet over the writing of a forum post, or maybe just an enormous amount of confusion about Nokia taking its toll.
Anyway, good luck, you'll need it.
@jake: Full ack on what you say, particularly on the two sentences "go with the cheep this quarter" (nice wordplay, cheep - sheep & cheap ...) and "youth & glitter" bits. That's what I've been trying to point out - it _does_ work that way ... as you say. As said, in full agreement that this isn't good, but also in agreement that it's happening, ok ?
Hint: It does work that way - and did work quite exactly that way in the SF Bay area as little as ~13 years ago.
If you were around in the .com boom phase, you'd have seen people hired on the basis that they pretended to know the difference between a keyboard, computer and screen. The hiring manager wouldn't quite admit to that exact wording, but neither would (s)he admit to their superiors that no one could be found for the job. You'll learn ... and if/while things are booming, employers don't mind the time that takes.
Table lookups on GPUs ... are supposedly highly effective by design - after all, texture mapping is nothing but exactly that. The above may well still be fastest on a GPU (though there, it probably makes little difference whether you do integer or FP adds; it depends on whether interpolation is useful/desired or not).
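A texture fetch really is just a table lookup with optional interpolation. A minimal CPU-side sketch of the two fetch modes - nearest-neighbour vs. linearly interpolated, which is what texture units do in hardware; the table and function names here are made up for illustration:

```python
import bisect

# a small lookup table, like a tiny 1D texture: f(x) = x*x sampled at 5 points
XS = [i / 4 for i in range(5)]          # 0.0, 0.25, 0.5, 0.75, 1.0
TABLE = [x * x for x in XS]

def fetch_nearest(x: float) -> float:
    """Nearest-neighbour fetch: pick the closest sample, no interpolation."""
    idx = min(range(len(XS)), key=lambda i: abs(XS[i] - x))
    return TABLE[idx]

def fetch_lerp(x: float) -> float:
    """Linearly interpolated fetch between the two bracketing samples."""
    hi = min(bisect.bisect_right(XS, x), len(XS) - 1)
    lo = max(hi - 1, 0)
    if hi == lo:
        return TABLE[lo]
    t = (x - XS[lo]) / (XS[hi] - XS[lo])
    return TABLE[lo] + t * (TABLE[hi] - TABLE[lo])
```

E.g. at x = 0.6, the nearest fetch returns the sample at 0.5 (0.25), while the interpolated fetch blends the 0.5 and 0.75 samples (0.375, against a true value of 0.36) - the same trade-off the GPU makes per texel.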
He probably said:
Heuristic, Adaptive, In-Memory - Cash!
For him, anyway. Logical.
The "rebootless kernel updates" aren't a new technology - Sun shipped something akin under the name "Dynamic kernel updates" with Solaris 8, and later withdrew it because it was too cumbersome - both for customers to use as well as for Sun developers to create such a "dynamic patch". That was, like, a decade ago ? Someone must've thought it a good idea to spawn a few more patents in the area ...
The problem is complex not because code modification at runtime is hard (SystemTap, DTrace and various other frequently-used tracing/instrumentation/monitoring utilities and - gasp - all Hypervisors - do that all the time), but because different kernel versions combined with different module/driver revisions and possibly (a series of) compounded "hot" updates makes determining all necessary patch points / updates a very difficult exercise to get guaranteed-right.
Snapshot boot environments, even as simple as "patch cold side of mirror, reboot into that, if ok re-sync, else reboot again into old config", have a far more predictable behaviour. Hypervisor snapshots / system+app live migration allow you to live-split patched/unpatched envs if you really wanted to. Whatever I'd build my reliability proposal around these days, ksplice it ain't. I agree that hardware is cheap enough these days that reliability-by-redundancy ("the cloud") makes much more sense.
That might be different for "nine-nines" environments, which are still rumoured to exist, the use cases where a server is installed, configured, powered on and never rebooted till decommissioning five years later. Never worked with this, would love to hear more about it.
... isn't visible in any wavelength of the electromagnetic spectrum - the primordial fireball, as bright and impressive as the term may sound, was actually 100% opaque, because the universe started in thermal equilibrium and whatever photons there were constantly got absorbed and reemitted - up to the time when radiation and matter decoupled. Only after neutral hydrogen atoms finally formed and were no longer reionized, since the temperature / average background radiation energy had dropped sufficiently, did said radiation become free/visible - the cosmic background. It's considerably _younger_ (by a few hundred thousand years) than the universe itself.
The only way to look "through" that "wall of fire" is via indirect methods (density waves imprinting themselves on the radiation) or, if it ever becomes possible to detect them at those low energies, the cosmic neutrino background (which decoupled about a second after the big bang).
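The "younger than the universe" point is simple arithmetic, since the radiation temperature scales as (1 + z). A back-of-envelope sketch, using the usual textbook approximation of ~3000 K for recombination (these are round numbers, not fitted cosmological parameters):

```python
# Back-of-envelope: redshift of photon decoupling from temperatures.
# 3000 K is the usual textbook figure for when neutral hydrogen survives;
# 2.725 K is the measured CMB temperature today.
T_RECOMBINATION_K = 3000.0
T_CMB_TODAY_K = 2.725

# radiation temperature scales as (1 + z), so:
z_decoupling = T_RECOMBINATION_K / T_CMB_TODAY_K - 1
print(round(z_decoupling))  # the familiar z ~ 1100 of the CMB
```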