Our company uses images of black monoliths with two-digit integers and the words "AUDIO ONLY" emblazoned in red.
I must get this for my videoconferencing avatar.
Successful companies also existed before electricity came along. Successful companies learned to use electricity; unsuccessful ones went out of business. Technology is an essential part of most businesses, one which reduces overall costs by automating repetitive manual tasks or which adds value by creating capabilities where none previously existed. Excel, these days, is more than just a replacement for paper spreadsheets; it's actually a development platform in its own right which enables business analysts to automate their own business logic. Arguably, this capability allows them to bypass IT, but who do they turn to when Excel crashes?
IT definitely does need to be service-driven (and most IT departments, in my experience, actually are), but saying that companies don't need IT is simply incorrect, by and large.
“They are not close to Microsoft or VMware, but it is pretty good if you are not trying to do dramatic things like moving virtual machines around.”
If by "dramatic" you mean "essential," this is an accurate statement.
My local servers are built with an eye towards application-layer redundancy such that, even if a major failure occurs, we should still have userland access available. There are certain cataclysm-grade incidents which could take our systems down, but the ensuing floods, cloud of fallout, horde of zombies, etc., would probably be of greater import than restoring services to the users (if my employers are reading this: I kid. As a loyal employee, I would, of course, place business continuity above protecting my own family from radioactive mutants.)
That said, the cloud is a very reasonable place to keep your work, assuming your work is not important or is easily duplicated.
Adobe's ColdFusion web development software is to blame for the downtime of the US Government's National Vulnerability Database.
The malware infected two servers . . .
ColdFusion has officially been classified as malware, apparently.
Ah, Matt, dripping as ever with the milk of human kindness, I see.
Ahem, I think you mean, "Where has the 'report errors' link gone?".
The appropriate abbreviation for "advertisement" is "ad"; the appropriate abbreviation for "advertisements" is "ads". "Add" and "adds" refer to mathematical operations.
This has been a note from your friendly neighborhood grammar nazi.
That's it, really.
Fortunately for web users the world over, the exploit "is not very reliable", the researchers write. In most cases, the payload fails to execute and leads to a JVM crash.
So, it's just normal Java code, then?
Well played sir, well played.
A commentard such as yourself should
Reprehensible behavior such as you did.
You must construct additional breweries.
Good job, kain, you failed to read the first line of the article you quoted:
BeOS is an operating system for personal computers which began development by Be Inc. in 1991. It was first written to run on BeBox hardware.
Or the paragraph right above your quote:
Initially designed to run on AT&T Hobbit-based hardware, BeOS was later modified to run on PowerPC-based processors: first Be's own systems, later Apple Inc.'s PowerPC Reference Platform and Common Hardware Reference Platform, with the hope that Apple would purchase or license BeOS as a replacement for its then aging Mac OS. Apple CEO Gil Amelio started negotiations to buy Be Inc., but negotiations stalled when Be CEO Jean-Louis Gassée wanted $200 million; Apple was unwilling to offer any more than $125 million. Apple's board of directors decided NeXTSTEP was a better choice and purchased NeXT in 1996 for $429 million, bringing back Apple co-founder Steve Jobs.
In fairness, I misremembered some of the history myself, such as the Hobbit, but your claim that BeOS was originally written for the Mac is clearly false.
@Gene: It wasn't F/OSS, no, but I'm not sure where the "closed as hell" comes from, in that it was no more closed than any other desktop operating system. Be definitely caught a lot of hell from the Linux fanboys, basically for not being Linux.
@kain: No, BeOS was designed for its own system, the BeBox, which happened to be based on the same chip as the Mac at the time, which meant that porting it to the Mac would have been much simpler than porting it to x86 was. Gassee tried to sell BeOS to Apple, who wanted to pay far less for it than he wanted, and, of course, Jobs was making his comeback and brought NextStep in instead. Be then made a move to the x86 platform and tried to position BeOS as a competitor to Windows, which failed in part due to Microsoft's efforts to keep OEMs from bundling any competing operating system with their computers.
The lack of apps was definitely an issue, so Be pitched the OS at specialist users such as graphic designers and sound engineers who could make use of the pervasive multithreading and high responsiveness of the UI, but it never really took off in that market. It was definitely unfortunate, because it was the most responsive and advanced OS, from a user perspective, available in the market at the time, but the company didn't really have a notion of how to sell it, especially against Microsoft's market power.
Back in the day, BeOS implemented a filesystem (called, natch, BeFS), which did all the things WinFS was supposed to do. It used metadata extensively with a database-like filesystem which allowed applications to access and store various data types in the filesystem without an intermediate store. It was also blazing fast due to the filesystem index being a built-in feature instead of an add-on.
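For anyone who never saw it in action, here's a rough flavour of the kind of thing BeFS let applications do natively. This is only a sketch using Linux extended attributes (Python's os.setxattr/os.getxattr) as a loose analogy; the file paths and attribute names are made up, and the walk-and-filter loop at the end is exactly the part BeFS did for you inside the filesystem with its indexed live queries.

```python
import os

# Store metadata directly on the file, roughly as a BeOS app would have
# done with BeFS attributes. (Linux requires the "user." namespace for
# unprivileged extended attributes; paths here are purely illustrative.)
def tag(path, artist, year):
    os.setxattr(path, "user.artist", artist.encode())
    os.setxattr(path, "user.year", str(year).encode())

# Crude stand-in for a BeFS query like "artist == Kraftwerk": BeFS
# answered this from a filesystem-maintained index, whereas here we
# have to walk the tree and filter by hand.
def find_by_artist(root, artist):
    matches = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.getxattr(path, "user.artist").decode() == artist:
                    matches.append(path)
            except OSError:
                pass  # file has no such attribute
    return matches
```

The point being: on BeFS, that second function wasn't application code at all.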
Unfortunately, Be took on Microsoft at the height of its power and never really had a compelling story about why one might want to run BeOS instead of Windows, so it has vanished into the dustbin of history.
I'm reading it as a bug in one of the drivers provided by the VMware Tools package allowing privilege escalation in a Windows VM running the affected driver.
Anything that takes 5 years to become less difficult shouldn't have been released in the first place.
You mean like . . . Linux?
"The previous code was just really horrendous," Meeks said. "Dialogs were constructed and drawn by hand – in fact, not even by hand. Programmers just sort of entered random numbers to lay them out, and it really looked awful."
This says it all. I believe this is the design philosophy behind all F/OSS and, indeed, all *nix GUI-oriented software.
If you had $30 million in VC money riding on what someone else thought of your attire, I'm willing to bet you'd learn to care.
I haven't used Server 2012, but I have used 2008 R2, and I've found it to be robust and stable, and much easier to configure and use than any version of *nix, so I'm guessing that Microsoft has done some good work enhancing those qualities with 2012.
Note that I say this as someone who has deployed various flavors of Linux, FreeBSD, OpenBSD, and Solaris over the years. I recall very well Microsoft's dirty tricks. Nonetheless, I'm willing to sing the praises of Windows as it now runs because it meets my needs and the needs of the business I support.
Finally, I'm entirely fed up with this knee-jerk fanboy mentality in the technology industry. Maybe you should try judging technology on its actual merits instead of engaging in childish my-sideism. Eadon, I'm looking at you.
I notice that, like most *nix zealots, you ignored the detailed post which addresses your points and chose to focus on the troll.
BTW, I'm not sure what FUDD is. FUD is Fear, Uncertainty, Doubt; FUDD is presumably Elmer Fudd's Xbox gamertag, and I'm not sure how that's relevant.
. . . have no sense of humor that we're aware of.
I see what you did there, even if no one else did.
All that's happening is the next step in an ongoing evolutionary process. Over the past few decades, the number of intermediate steps between slow storage and fast compute has been growing, with on-die CPU cache, level 2 cache, level 3 cache, system RAM, HBA/controller caching, onboard flash cache, storage array cache, on-drive cache, and now array flash storage providing yet another layer designed to improve the speed of transfer from static storage to active compute. The slowest storage has essentially stagnated, from a speed perspective, merely growing in capacity. The next tier up, "fast" spinning disk, is itself turning into yet another intermediary layer for staging data.
All any of this means is the same as it always has: ultimately, the goal is to touch the disk as little as possible and keep the relatively small amount of data you're actually using somewhere else.
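To put a toy example behind that last point (nothing vendor-specific, just a sketch): a small fast tier in front of a slow one means the hot working set only touches the slow tier during warm-up.

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a tiny LRU cache in front of a slow backing
    read (standing in for spinning disk)."""

    def __init__(self, backing_read, cache_size=4):
        self.backing_read = backing_read   # slow tier
        self.cache = OrderedDict()         # fast tier, kept in LRU order
        self.cache_size = cache_size
        self.slow_reads = 0

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)    # mark as recently used
            return self.cache[key]
        self.slow_reads += 1               # had to touch the slow tier
        value = self.backing_read(key)
        self.cache[key] = value
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False) # evict least recently used
        return value

# A working set smaller than the cache hits the slow tier once per block.
store = TieredStore(lambda block: f"data-{block}")
for _ in range(1000):
    for block in (1, 2, 3):
        store.read(block)
print(store.slow_reads)  # 3: everything after warm-up came from cache
```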
You think that mistyping slashes is a skill?
<FoghornLeghorn>It's a joke, son.</FoghornLeghorn>
The project manager, meanwhile - and this is a man who is known to have struggled for some minutes to find the main menu in the new FireFox - has written a Python program that interrogates his diary in Google Calendar and switches on the central heating in his holiday cottage in Wales so that everything is nice and toasty when he arrives for the weekend.
So, the typical Reg reader, then?
Hey, now, don't let logic, reason, and law get in the way of a perfectly good American Hate thread.
The idea of buying converged systems from a single supplier is often pooh-poohed as "proprietary", especially by suppliers who don't have the three technologies needed in-house, and the main three suppliers in that position are Cisco, EMC and NetApp. Both EMC and NetApp are trying to attract the attention of Cisco, the great converged stack prize, and hoping to be chosen as its preferred partner.
I think that might answer your question.
The problem with the "converged stack" theory is that it's the mainframe redux: you're locked into buying giant units of equipment from a single vendor. Virtualization ameliorates this issue somewhat, insofar as you can easily move your processing workloads elsewhere, but storage lock-in is especially pernicious since storage is the hardest resource to move away from. The discerning IT equipment purchaser will look for the opportunity to retain flexibility.
Also, take your Huawei shilling elsewhere.
So, Dave 126, what you're saying is that the HD 4000 is slightly less shitty than the HD 3000 but it still basically sucks. Thanks for backing me up!
I could see really getting a lot of use out of this class of device. I could even live with the piddling RAM and storage. But Intel graphics? REALLY? If there's one thing I don't want in a tablet, it's a graphics processor which is slow yet hot and hungry.
Both the US Department of Homeland Security (DHS) ICS-CERT, which normally deals with security issues involving industry control kit, and the US Food and Drug Administration (FDA) are reportedly taking an interest in the issue.
Clearly the problem is excessive regulation by the federal authorities. They shouldn't bother the poor manufacturer with their intrusive regulations and requirements; they should just let the free market sort it out!
(This is what some people actually believe.)
I would totally buy that if someone made an electronic version of BattleTech or Car Wars to run on it.
After using SCOM 2012 for a little while, I appreciate its power, but the workflow is still extremely awkward and inflexible. If Microsoft have made it possible to do hypervisor and VM management with the same ease and simplicity as vSphere, then this might be interesting. Any other commentards care to weigh in?
Also, in before Eadon's bitching and moaning about the evil of Microsoft and how there's some OSS version that is faster, cheaper, and more stable, and which will simultaneously ease all your virtualization woes while massaging your prostate.
As other commentards have noted, people buy what they need. The only place anyone cares about who makes the software is your mom's basement.
Yes, but then he'd have to leave the comfort and safety of his bridge.
I predict many anti-Facebook comments which amount to the above.
The thing is that Trevor is not just doing technical writing. If he were putting together white papers for prospective clients or technical documentation, I would agree with your criticism, but he's writing blog-style articles for a publication renowned for its ironic or sarcastic tone, so an injection of personal perspective is absolutely called for. I don't personally find him to be a know-it-all; I get the impression that he's genuinely enthusiastic about the technology he uses and proud of the solutions he creates for his clients.
YMMV, of course.
Despite our similar titles, you work in a very different world with very different tools, and I'm always enlightened to hear about what other options are out there. Keep the articles coming!
My god, you're right! Cheap access to space will never benefit anyone! What possible use could it be to put things in space? And electric cars? Why even bother to innovate in that area? These rich guys should just take their money and spend it on giant impractical vanity yachts instead of trying to invest in the future!
Or they might become filthy, antisocial, neckbearded basement-dwellers with a propensity for condescension and self-righteousness. Then you'd have a peer group!
The main reason I can think of to continue the ban is to retain some kind of peace and quiet on the plane. Screaming babies are bad enough, but the last thing I want to endure on a 10+ hour intercontinental flight is drunks yelling into their cell phones. "HEY BRO, GUESS WHERE I AM!"
If I wanted to endure that sort of behavior, I'd go to the movies more often.
"Why the down votes?!"
Because you've shown yourself to be utterly devoid of a sense of humor.
Ooh, look at that, you've earned the coveted Double Facepalm.
Noooooo . . . my point is that someone attempting to grab power via political machination is unlikely to telegraph the fact that they're going to do so by admitting it, so a denial is meaningless. It is not confirmation of intent, per se. How seriously one takes the denial depends on how trustworthy one considers the ITU.
None of this should be construed as a statement of belief on my part that the ITU is in fact attempting any such thing. I was making a lighthearted, off-the-cuff statement and have now driven this point as far into the ground as I can bear.
"Never mind that the ITU itself says no threat exists."
. . . they would say that, wouldn't they, especially if they were launching a power grab!
. . . the Microsoft Surface concept is exactly the sort of thing I would like to see--something with a thin tablet form factor and detachable keyboard (and mouse, ideally) which runs an OS that will run the same apps as my desktop (which the Surface RT won't, I realize, but the Surface Pro will). To my mind, that's less "confused" and more "functional."
Now, whether Microsoft's implementation of this concept is any good is something I have yet to investigate, but the fact that both Eric and Tim are willing to write it off without even trying it is a sign of a blind spot that Microsoft may be able to exploit.
The ELA is a side-effect of the lock-in. The lock-in is the effect of having data and logic tied in with a proprietary system, migration away from which is expensive and difficult. The ELA is a way of ensuring ongoing support and upgrades for the buyer, and it also provides a tidy little revenue stream for the vendor.
I agree that losing this revenue stream would be disastrous for the vendors who rely upon it, but CIOs and other decision-makers need to see a) that migration to a competing platform or product is less expensive over n years and b) that it yields sufficient tangible benefits to be worth the effort in the first place. It's likely that there's no one who doesn't want, on some level, to migrate away from Oracle, but justifying the time and expense may be challenging, regardless of the ELA.
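To put deliberately made-up numbers on point a) (none of these are anyone's real licensing figures), the question is just where the crossover sits:

```python
# Back-of-the-envelope comparison with hypothetical figures.
ela_per_year = 400_000     # annual ELA renewal (illustrative)
alt_per_year = 150_000     # annual cost on the alternative platform (illustrative)
migration_cost = 900_000   # one-off cost of moving data and logic (illustrative)

for years in range(1, 11):
    stay = ela_per_year * years
    move = migration_cost + alt_per_year * years
    marker = "  <-- migrating is now cheaper" if move < stay else ""
    print(f"year {years:2}: stay {stay:>9,}  move {move:>9,}{marker}")
```

With those figures the crossover lands at year four; whether point b), the tangible benefits, justifies the disruption is the harder sell.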