1844 posts • joined 15 Jun 2007
@Vic - You seriously don't believe they want to and won't try?
In case you hadn't noticed, Microsoft have been doing a really good job of keeping Linux off the desktop. It is an uphill struggle to get traction with users in the home, education and business markets. Even with the likes of Ubuntu and its derivatives, it's just not happening.
If they can go into any procurement with organisations by starting with "Of course, you know that we MAY still sue users of Linux for use of OUR copyrighted material", then to uninformed procurement officers, this will equate to "Better steer clear of Linux, just in case". Financial officers have been known to be sacked for taking decisions that include known financial risks, so they tend to be cautious.
I don't see this as being illegal under current unfair trading practices legislation, but it would be seriously unethical, and unfortunately probably effective.
Without commercial organisations being able to build a business around Linux because of an unfair poisoning of its reputation, some of the major contributors could fall by the wayside. Without the likes of SuSE (which I believe will be wound down as a Linux developer anyway), Redhat and Canonical, Linux will revert to a hobbyist's OS, with some embedded systems maintained by the likes of Oracle and IBM. Any serious chance of being an accepted desktop OS will disappear.
Also, imagine how much "Get the facts" could have been enhanced by claims of UNIX copyright ownership.
With the potential of a Linux desktop reduced, Microsoft could then capitalise on their near-monopoly position (along with a like-minded Apple), and engage in leveraging maximum return from their tied-in user base, who have nowhere else to go.
Let me ask you. Do you think that you would pay, say, £100 a year per PC just for the right to use it? Microsoft have often stated that they would like to move to a subscription model for their software, and have also moved in positive ways to exclude (think DRM) Linux from the equation.
Of course, if you or your company have the resources to build, maintain and market a good Linux distribution, and keep it up to date with all the DRM and media licensing issues, then this won't be a problem, will it?
OK, maybe it would be the defining moment for Linux. Maybe I am just being a Luddite by today's standards because I think of myself as a UNIX specialist, with a (strong) sideline in Linux.
But think about this. If MS do get access to the UNIX copyright, then many claims the Linux community have about MS including GPL code from the Linux tree evaporate in a puff of smoke (at least for stuff in the TCP/IP stack and other code that overlaps with the kernel), as MS claim that they have replaced the GPL code with the UNIX equivalent.
With regard to SCO, a lot of their case started unravelling precisely because Novell had maintained ownership of the copyright. In this light, Novell would have been in a much better position to try to sue IBM than SCO, not that it would have tried. IBM's SVR2 source license is pretty much bulletproof for AIX. Not sure about the one they got from Sequent. If Microsoft now have that copyright, they might think differently, though, and might waive the royalties for SCO and give them the copyright to kickstart the case. The fact that the case still exists causes enough FUD.
Other things MS could do:
Start increasing any repeating license code fees for UNIX licensees.
Stop granting any more UNIX source code licenses (OK, I suppose you might ask how many people who have not already got one actually want one).
Don't actually start suing anybody over UNIX IP, but make statements about potential contamination to increase FUD. After all, that is what they have been doing for years.
@AC re. flawed analysis - You seem to suggest that SUN and Microsoft were in a pact. This was not the case. SUN's UNIX source license for Solaris was always bulletproof, especially as they were a co-developer of SVR4 with AT&T. What they offered was indemnity for licensees of Java and other technologies by paying for various licenses from SCO to cover IP in new products that may not have been covered by GPL and their original UNIX license. This was not SUN financing the SCO case directly, this was standard business practice.
People tend to cast SCO as a completely evil company, but you should really see them as a company that thought they owned some IP, with a suitable business model for licensing it, but who then stepped over the line in a serious way and lost the plot and did not know when to stop. This did not alter the legitimate business that they had, which is what SUN bought into.
@jonathanb - The fact that SCO publish their own Linux has little bearing on what Microsoft might do with the UNIX copyright.
@another AC - Oracle have no problem with forking Linux and keeping it in-house (as long as they keep to the GPL and other appropriate licenses). They probably no longer care about what happens to Solaris; that was not what they bought SUN for. So they are probably not worried about what MS do as long as there is no litigation.
Maybe I'm just mournful of a passing age, and that this may be one of the last nails in the coffin for Genetic UNIX. I would have preferred IBM (originally a UNIX baddie, but reformed for 15 years or so) or Redhat to have bought that part of the business.
@Bugs - I'll try this again
If you can say this with sincerity, you don't understand what the implications would be. I hope that you have a big wallet, because you are going to need one to keep paying to use your computer if MS can kill alternative operating systems. It would be like printing money to them.
Now that Apple appear to be in the cashing-in-on-their-user-base game as well, you can't even regard Windows as a monopoly.
To the moderator. I think I've removed everything that you could possibly have objected to, so will you accept it this time? Thanks.
Yes, that was when Microsoft thought PCs were only good for the desktop, and commissioned a UNIX variant to act as the office hub. Microsoft didn't write it, however (what a surprise), but bought it from the Santa Cruz Operation, then a small development company.
Just what will CPTN Holdings get?
CPTN Holdings LLC, which was organized by Microsoft, appears to be getting some intellectual property rights as part of this deal.
What might Novell have that would interest Microsoft?
Hmmmmmm. Maybe the copyright to the UNIX code that they did not get by backing SCO (I know it's conjecture, but Microsoft were an investor in Baystar when Baystar offered capital to SCO. Look it up, if you don't know). And, of course, they may take ownership of SuSE.
This could be verrrrrry bad news for the Open Source movement, if it means MS get a dagger to dangle over the head of small open source companies, in the same way that SCO tried.
Sounds like another round of FUD coming.
With SUN out of the way, it would leave IBM and Redhat to try to defend the shattered remnants of the UNIX legacy. I'm seriously worried.
Wow. 91 coders in a single company.
Such a statistically relevant sample.
He outlines the problem
and the general solution, but not something that could work at the detail level, IMHO.
The drawback of his solution is how you actually address the content that you want. If you look at the way P2P networks (which have been mentioned several times in the comments) work, you need some form of global directory service that can index content in some way so that it can be found.
Just how on earth do you do this? Google is currently the best general way of identifying named content (for, say, torrents) and just look at how big the infrastructure for this is.
If you are trying to de-dup the index (using an in-vogue term), you are going to have to have some form of hash, together with a fuzzy index of the human readable information to allow the content to be found. I'm sure it sounds interesting, but I cannot see it happening.
Anyway, this could be added as a content-rich application enhancement over IP, using something like multicast, especially if IPv6 is adopted.
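To make the hashing-plus-name-index idea above concrete, here is a toy sketch. Everything in it (the `ContentIndex` class and its methods) is invented purely for illustration; a real global directory service would be distributed, not a single in-memory dict.

```python
import hashlib

# Toy content-addressed index: content is keyed by a hash of its bytes
# (giving de-duplication for free), with a separate human-readable name
# index so the content can actually be found.
class ContentIndex:
    def __init__(self):
        self.by_hash = {}   # hash -> content bytes, stored exactly once
        self.by_name = {}   # name -> hash; many names may share one hash

    def add(self, name, content):
        digest = hashlib.sha256(content).hexdigest()
        self.by_hash.setdefault(digest, content)  # de-dup: store once
        self.by_name[name] = digest
        return digest

    def find(self, name):
        digest = self.by_name.get(name)
        return self.by_hash.get(digest) if digest else None

idx = ContentIndex()
h1 = idx.add("holiday-video.mkv", b"...video bytes...")
h2 = idx.add("copy-of-holiday-video.mkv", b"...video bytes...")
print(h1 == h2)  # identical content collapses to a single hash entry
```

The hard part, as the comment above says, is not the hash but the fuzzy human-readable lookup at Internet scale.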
Don't suggest fining just the person who lost the stick
but also consider the IT/management team responsible for allowing unencrypted devices to be used, especially if the use is codified in a documented procedure.
It must be drummed in at all levels that this type of information stored on portable storage has to be encrypted.
This has always been the case with both ADSL and Cable. The upload speed is a small fraction of the download speed, and they have never claimed otherwise. That's what the "A (Asymmetric)" in ADSL stands for, and I'm sure it is described in the T&Cs for cable customers as well.
If you really want good upload speed, might I suggest that you invest in a leased line, but I think you will be shocked by the price.
When you get to these speeds
you have to look at all parts of the link between you and where your data is coming from. Just because you can receive 50Mb/s does not mean that the other end can or will send it at that speed, or that the shared inter-ISP links are not congested. Try selecting a service from your ISP, and measuring that.
I think it's about time that people started looking at this in a holistic manner, rather than expecting the Internet as a whole to have infinite bandwidth.
Your computer may well be able to work in French, but did it teach French to you?
I think that not even the most masochistic person would attempt to use the Windows message catalogue to learn a foreign language!
Still, point taken.
@James Picket - I disagree
I believe that one is a superset of the other. Training IS a type of education (in the broadest sense), but education is a much broader field than training.
I presume that you are comparing Training as in what-you-need-to-do-a-particular-job, with Education as in what-schools-and-colleges-provide.
But even with these definitions, you often find colleges offering vocational training, at least in the UK. When I worked in a UK Polytechnic (sadly a type of institution that no longer exists here), we were often approached by industry to provide training courses for particular subjects and fields. It seemed more natural at that time than approaching a commercial training organisation, and provided much needed cash to the Poly to help provide a better all round service.
Maybe I am looking through rose-tinted glasses, but I don't think so.
Once upon a time
the man pages were really the documentation. I'm talking Bell Labs UNIX version 7.
If the man pages were not enough, and there was nothing in the "UNIX Papers" documents (that were shipped as n/troff source files almost complete on every V7 tape), then you could resort to the source (which was also shipped, at least to educational customers).
I get tired nowadays of typing "man something", and being given a stub man page that suggests I type "info something", which gives no more information (I actually do not understand why info is supposed to be better than man, I always trip over the key bindings, even though I am an emacs user).
Now I know that something like sendmail or perl cannot be described in a <10 page man page, and that large packages like Open/Libre Office deserve their own books, but I really miss getting comprehensive documentation of at least the usage of a command through man.
@Myself re: documentation under GPL
As I found out when I looked, what you need for documentation, and I presume training material, is the GNU Free Documentation License, or GNU FDL. I should really learn the lesson of checking before posting rhetorical questions.
The argument about Open Source documentation and training material illustrates a circular problem, that of who actually pays to develop the documentation and training.
Training material costs money to develop, so people to do this sort of thing for a living expect to get something back to make it worth their while. Open source organisations can build a business model around this, but they have to get paid for the training and consultancy they provide. Free software does not mean free training.
Microsoft, on the other hand, can divert money paid to them by current customers into developing the training material to lock in the next generation of customers, who will then fund the next generation ad nauseam. And because they have an effective monopoly, they can browbeat the education departments with 'Of course your students need Microsoft Office skills, after all EVERYBODY uses our software'
I'm not saying that this is any different to other programs given to education by vendors, except that Microsoft can use this to reinforce their monopoly paid for by their already locked in customers, in a way that nobody else can.
Of course, in a perfect world, educational institutions would write their own material around open software, and in the spirit of the Open Source movement, contribute that material for other people to use without cost (can you publish training material under GPL, I wonder?)
Unfortunately, this is not an ideal world.
was a useful stop-gap until Acorn delivered my BEEB (ordered on the first day of them accepting orders).
It didn't crash, as I took a Tandy keyboard, re-wired it (or actually re-painted the tracks on the membrane with silver paint) and connected it through a long ribbon cable, adding a power switch to the keyboard. Once I did this, it became quite stable, even with a Quicksilver pass-through audio board (with sound modulator to the TV) between the '81 and the RAMpack. Never needed to touch the computer itself. No problems until my homebrew power supply blew its bridge rectifier.
It was also interesting re-mapping the 1K internal RAM to another memory location when the RAMpack was attached, and putting a second 1K of static RAM under the keyboard connected on the ULA side of the bus isolating resistors, allowing me to change the IR register, which was used as the base address of the character table! Yay, programmable character set and (if you worked hard) bit-mapped graphics.
I actually tried writing a Battlezone variant, but it was just too slow in Slow mode.
Funny, I've just been called a hacker for a completely different reason by my colleague in the next desk. I wonder why.
It's worse than that...
What if he wants to add another micro-channel card to the system. It is necessary to have the reference disk to add the ADF file for the new adapter!
As do I (almost no MS software in sight on MY, rather than the rest of my family's systems), but think how different it would have been if that had happened in 1982!
Some of us even remember when UNIX was new, and all microcomputers were 8 bit and minicomputers were 16 bit (and some mainframes had 24 or even 36 bit wordlengths).
Seriously, if (Intergalactic) Digital Research had persuaded everyone that MP/M (the multitasking variant of CP/M) was the OS to use, then desktop PCs would have been able to multitask from the word go. And that would have really changed history. But remember, even CP/M was not original and was superficially a rip off of RT11 on DEC PDP11s, even down to the device naming and PIP.
Still would have preferred a UNIX derivative on the desktop, even if it had to be crippled by needing to run from floppy disks (UNIX V6 on PDP11 circa 1975 - kernel less than 56K, ran [just about] on systems with 128K memory, able to multitask, multiuser by default with a recognisable permissions system). Definitely doable, although reliable multitasking without page-level memory protection would have been a challenge.
The highest system on the list that looks like it is predominantly funded by a commercial organisation is the EDF research system at #37. You may like to suggest that this is partly funded by the French government (like #76), in which case it would probably be #78, which is just described as "IT service provider, Germany".
So yes, if you had the money, and the will to run the infrastructure, you could buy one, and HP, IBM or Cray would be delighted to sell you one. But to be honest, what would you use it for? Even the Intel ones are not suited to run Crysis.
On reflection, if you go back 8 or 9 years, you can see a number of commercial organisations with systems in the top 100, including banks and telecoms providers. But this was when you could estimate the power of a system by adding the component parts, not by proving that it could run such jobs. The bank I used to work for had their SP/2 AIX server farms listed, because they were clusters. They were never actually used for any HPC type workload ever.
I will put my cards on the table
and openly admit that I do not play poker, either face-to-face or on-line, but it seems to me that in order for some people to keep winning, and for the house to take its cut, it is obvious that someone has to keep losing, and losing a lot.
To me, this indicates that there is a significant group of mugs, self-renewing as each wave loses all their money and is replaced by others suckered in by the advertising that seems to be EVERYWHERE.
So there are people who keep playing because they are better than the newbies, and there are people who run bots that can do the same. Sounds like neither of these two groups has anything to complain about, as they will probably at least break even; otherwise they would stop.
So how many here admit that they've regularly lost money? I'm sure that if they do it will have been as an 'experiment' or 'just to try it out', not admitting that they're the mugs.
I think that the whole industry is unethical, and should contain greater safeguards for the uninformed. But the late night TV interactive game shows are allowed, where the odds are so obscured that you can't tell what the payback is, so I can't see on-line poker being stopped. I just feel sorry for the victims.
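The economics behind the argument above can be shown in one line of arithmetic (the 5% rake figure is made up for illustration): poker is zero-sum between players, so once the house takes its cut, the players as a group must lose money overall.

```python
# Toy illustration: players collectively get back less than they put in.
stakes_in = 100.0        # total money the players bring to the table
rake_rate = 0.05         # hypothetical 5% house cut taken from each pot
house_take = stakes_in * rake_rate
paid_back_to_players = stakes_in - house_take
print(paid_back_to_players)  # 95.0 -- a negative-sum game for players
# So if the regulars and the bots roughly break even, the shortfall
# (the rake plus their winnings) comes entirely from the newcomers.
```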
If you've ever used a DAB radio on non-rechargeable batteries, you will know how expensive they are to run....
I guess using AAs would allow you to put rechargeables in, but would you be able to charge them in situ?
Blah blah blah sound no good blah blah blah
It depends on the bitrate of the channel and the strength of the signal in your area. The lack of hiss is well worth it, and to be truthful, most people listening in kitchens, sheds, cars etc. probably would not be able to tell the difference between 128Kb/s MP2 and 256Kb/s MP3. It's the dropout rate that is so bad.
ClassicFM and Radio3 (160Kb/s and 192Kb/s respectively) sound great in a good signal area, with a good receiver and quiet conditions. Unfortunately, I don't live in a good signal area, and even though I have a Pure Highway specifically for cars, I can only get any DAB channels for about a third of my commute to work. But then, even FM drops out in one part of my journey.
DAB is a flawed service, I admit, but it is worth keeping unless it is replaced with something better, but even then many will whinge about having to buy new receivers (like me, I have 5 DAB radios).
If you ever see references to Dragonball processors, as used in PalmPilots, then these are low-power 68000 processors.
I'm sure I was poking about inside some consumer device (it may have been a Freeview TV box) recently, and came across a 68K based SoC being used as a micro-controller, which probably means that they are still being made.
The 68000 family should be regarded as one of the classic processor designs, alongside the IBM 370, the PDP/11, the MIPS-1 and possibly the 6502. Beats the hell out of the mire that Intel processors have become. Some people might also say that the NS32032 processor and maybe the ARM-1 should also be included in this list.
This is a first step
If someone can work out how to drive the thing, that information would be very useful to someone who might want to make a work-a-like, and deprive Microsoft of hardware sales. I'm not saying that Microsoft is correct in what it is doing, but the reason why they are doing it is not really that hard to see.
Of course, MS would be able to prevent importation to countries with valid patents, but that would not stop imports from China via Ebay or the like.
If you look now, you can see non-licensed Wiimote-a-likes, and they are cheaper than the Nintendo originals. The same would happen for Kinect.
I thought that there were several precedents set for reverse engineering. Cases involving garage openers and inkjet cartridges spring to mind, and I believe that they all went against the company attempting to maintain the monopolies.
The original Xbox really was PC components. The 360 is quite a different beast, using PowerPC derived processors. I'm sure that it is running a Windows variant under the covers, but this is more likely to be WinCE than XP.
The rest of the hardware is pretty much generic, but what do you expect? Memory is memory, disks are disks, and even the wireless and controllers will be using off-the-shelf chips. This is the same with PS3s, Wiis and even Macs.
Shame on you
The BBC version has to be the definitive version. Especially if played with a 6502 2nd processor and a bitstick.
It just shows...
...that people ignore things that they don't understand.
And if he had started buying things on Amazon from their accounts, they would hold their hands in the air running around crazily, blaming Amazon, Starbucks, their ISP, the Government and everyone else but themselves.
I'm beginning to think that the Internet is too dangerous to let Joe Public loose on it! Maybe we need an Internet driving test before allowing them to connect.
As soon as
Fedora has an equivalent of an LTS release, I'll consider switching.
I just can't rework my primary systems each time a new release comes out. I don't have the time.
Mind you, my confidence in Ubuntu has been severely dented recently. I still haven't switched to Lucid, because I cannot get suspend/resume or sound working well enough on my trusty Thinkpad T30, things that just worked without problem on Hardy, and my netbook, running Jaunty, tells me that there are no further updates for that release. Still, I'll put the netbook remix of Meerkat on that, just to see what it is like.
Don't think so,
and I would be interested to know whether they run at all!
I seem to remember that they weren't the most reliable of devices when they were current! And that strange offset flip up screen and fixed keyboard.
I think that the term used for these and similar devices was 'luggable computers'.
We all know there are problems with X. We know that the abstraction between client and server does not suit all types of application and can be an apparent performance barrier.
BUT (and this is a big but)
If you know X, and work in an environment with many networked systems all of which understand X, then the benefits of the abstraction are HUGE. Don't suggest that VNC can fill the gap, because unless you have a big fat network, the performance is crap compared to properly written X apps. If you've not worked in such an environment, you may not know this, but your view risks throwing the baby out with the bath water.
One of the problems as I see it is that because the font handling in X was based on the fonts that the server knows about, rather than the fonts that a client wanted to use, some applications appeared to display poorly. To get round this, both KDE and Gnome introduced models that meant that font glyphs were effectively loaded by each client when the client started, often multiple times.
This increased the X server memory footprint and client startup time no end, and almost completely broke the efficient font model that X had (and still has!). In my view, the best way of handling non-standard fonts is to have a font server somewhere (either locally or on the network) and have a mechanism for font-picky applications to add the FreeType or Type1 scalable fonts to that server, either for the duration of the run, or permanently.
Similarly, the way that some applications treat pixmaps (and Java is one of the worst culprits, wanting to do its own graphics abstraction) means that X performance is much worse than it needs to be for such apps.
X.org is making what I think is a very sensible move to OpenGL based rendering, especially if it has network abstraction built in (I've not checked). This should allow good 3D performance, and as we can see allows the gloss to be added. What we need now is a well recognised, resource controlled window manager and application framework. Whether this is Gnome or KDE, or something completely different remains to be seen, but introducing a new default must be backed up by allowing users who want to remain with what they like to do so.
I actually cannot stand the netbook remixes, even on systems with small screens. They are just too prescriptive, and get in the way unless all you want to do is what they provide. I use Gnome on my EeePC 701 (800x480), and I only have a small number of times when windows fall off the screen, so I don't see the need for the netbook remix.
"Hooked up to a Beeb"
As long as the two channels were in phase and not skewed, and you could do without the motor control.
I had no end of trouble with stereo players and the Beeb. Ended up making a cable that would only connect the left or the right channel, but never both. Always recommended a good mono tape recorder to other people.
Beebs were remarkably tolerant of the tape speed. There was a tape deck you could buy that had an adjustable speed controller on the motor. You could speed it up by nearly 10% before you had any loading issues. Really made a difference for the longer games. Some game manufacturers advertised faster loading speeds by actually recording with a slower tape drive before duplication, so they were faster on normal players.
Ahhh. Gone are the days of *OPT 1,2 followed by *LOAD and then by swapping the tapes and a *SAVE with the correct parameters to copy tape games.
Someone has actually got a BBC micro user guide on the net at "http://central.kaserver5.org/Kasoft/Typeset/BBC/Intro.html". Bizarre, but welcome. I didn't have to risk opening my decrepit and fragile copy in order to refresh my memory of the *OPT numbers for extended info for CFS.
Tranny radio listening was a very popular communal pastime
Especially at 12:45 on Tuesday lunchtimes, when the BBC chart was announced. That ages me!
And yes, radio did promote conversation in kids. See how it is portrayed in "The Boat that Rocked" to get a feel of '60s and '70s radio culture. Because of the cost of record players and radios, music listening was a communal pastime. You would take your new LP around to a mate's house to listen to, rather than giving them a copy, as happens now. A household would normally have one TV, one record player and a couple of battery powered radios at most.
The revolution started with the "Compact Cassette", which allowed you to tape your mates' records, and continued through the Walkman era as cassette decks in Hi-Fis got better. In some respects, this paused a bit when CDs first came along, as they were originally a read-only medium.
I still remember the stir rare-earth magnets caused when they appeared in headphones, which were the other revolution of the time. They allowed music to actually sound passable at low voltages, compared to the crap piezoelectric earphones for transistor radios or the bulky and current-hungry cans that were used on Hi-Fi systems.
@AC re: "authoriser" . That's not the point
The Computer Misuse Act 1990 defines what is illegal; it is legislation passed by Parliament.
An EULA falls into the category of a License (End User License Agreement), so is contract law rather than criminal law.
License agreements can be (and often are) challenged in the courts, and can be deemed unreasonable. I'm sure that I could if I looked hard enough find a precedent where just what you have described has been judged unreasonable, but you have to be careful of the jurisdiction of the court system looking at the case.
Even if a bad EULA were found reasonable, the penalty for infringing it would be financial rather than custodial, and may not even be enforceable (for example, if the EULA is judged in Texas, which is often where these things are tested, and you are in the UK, then so long as you didn't visit the US, it is unlikely that any action could be taken).
BTW. If you are a Windows user, stop and really read the conditions on the Microsoft EULA that you almost certainly agreed to when you 'accepted' (whatever that means for a pre-installed system) them without checking. I think you will be surprised (and maybe a bit frightened) by what you've signed up for!
Anybody any data
on how long PCs with SSD storage last before requiring the SSD to be replaced?
I know that flash memory is getting better, but even with data shuffling and sparing, I expect to see SSD needing to be replaced before a spinning disk. Any chance of such devices lasting 6 years of daily use?
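The "6 years of daily use" question above can at least be bounded with a back-of-envelope sum. All the figures here are hypothetical (SSD endurance is usually quoted as total bytes written, TBW, before the flash cells wear out; actual ratings and workloads vary widely):

```python
# Rough lifetime estimate from a TBW-style endurance rating.
endurance_tbw = 72        # e.g. a drive rated for 72 TB written (assumed)
daily_writes_gb = 20      # assumed typical desktop write load, GB per day
lifetime_days = (endurance_tbw * 1024) / daily_writes_gb
years = lifetime_days / 365
print(round(years, 1))    # years of use the rating would allow
```

Under those assumptions the rating comfortably outlasts 6 years, but a heavier write load (swap, logging, video editing) shrinks the figure proportionally.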
Upward and downward. It depends
on whether the wing spars running through the fuselage are connected to each other and stiff. If they do not flex or kink, then the fuselage will be suspended from the wing when flying, but if they flex or are not connected together, then the inside ends of the spar will move in the opposite direction to the ends of the wings around the point where the wings enter the fuselage.
From what I can see, there is a join in the spars on the mid-line of the fuselage. This could potentially be a weak point, possibly allowing the spar to kink at the join. Adding bracing to prevent this happening looks like it is a very good idea. Would it not have been an idea to stagger the joins on the individual spars? Or have additional uncut straws to reinforce it so that the joins did not occur at the same point? Oh well, too late now!
I would worry about the weight distribution. I think it looks like it will be tail heavy. The design looks like the sort of thing you would use for a powered plane, which has significant weight at the front (the engine). Do you get the opportunity to see if it will glide before attaching it to the balloon?
I have been really upset by the tricks Google are using to make sure that you have your data connection turned on all the time. I must check whether there is an outgoing firewall app in the Android Marketplace.
that some critical comments got through. Mine, that was posted before any of these, was rejected, even though it says nothing worse than many that got through. I wonder how many from other people were rejected as well.
I'm still wanting a reason added for comment rejection. I know that this is a moderated forum, but I was not abusive or insulting, the grammar and spelling was at least fair to good, and I was commenting on the technical accuracy of the article, although I did accuse the article of being a barely disguised advert. But so did the first comment that got through.
Could we at least know for sure that the comments are not moderated by the author of the article? That would at least give us confidence that critical comments have a fair chance to get through.
"fitted to the operational carrier"
How is this going to work? I'm sure that removing and refitting will require both carriers to be out of service at the same time!
Does he think that it's like a child seat in a car: unstrap, move, and strap? If he does, I think he should go on board a carrier when a catapult is operating, and feel how much the ship is affected when a heavy jet is launched. It takes time to make the heavy-duty attachments needed to keep a plane-flinger safe, posing no risk to the ship, aircraft and people.
In the process
In case you hadn't noticed, his sysadmin blogs are already being carried as articles. Stirred up some interesting comments as well.
..I always put Simon down as an ale drinker.
Two mentions of that hideous brew called 'lager' in a single episode. I suppose I can forgive the last one, if it was an instrument of financial torture used on the PFY.
Wasted so much time
trying to get things to work, and have just given up.
I could, on the odd occasion, get audio to work, but I have never managed to get my Buffalo Linkstation Live, which is supposed to be DLNA compliant, to actually serve any video, even video encoded in one of the supposedly supported formats. This was to a number of clients including Xbox 360, Windows clients, and open source clients.
The whole concept of specifying the containers and codecs required in the standard is just a cynical attempt at building obsolescence into consumer electronic devices to guarantee future sales! It sucks, and anybody who says otherwise is either a marketing shill, or just does not understand.
Nowadays, where possible, I stream stdout to stdin using SSH as the transport and mplayer as the player. It's not got the gloss of a nice GUI, but it just works anywhere you have knocked a known port through the network. You don't even need to share anything. And I get to avoid running the hacker-friendly protocol UPnP, which will advertise the complete capabilities of the systems on your network to anybody who can snoop it.
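For the curious, the whole pattern is one line. The hostname, user and file path below are made up; substitute your own:

```shell
# Hypothetical NAS host and file path - substitute your own.
# cat writes the file to stdout on the remote side, SSH carries the
# bytes over the wire, and mplayer plays whatever arrives on its
# stdin ("-"); -cache buffers a few MB to ride out network hiccups.
ssh media@nas.local 'cat /share/films/example.mkv' | mplayer -cache 8192 -
```

Anything that can write the stream to stdout (cat, dd, even ffmpeg transcoding on the fly) can sit on the remote side of the pipe.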
Had exactly the same
Only with PowerPoint. Teacher would not even allow the presentation to be shown, even though the presentation style they had been taught was so simple (background and basic titles and bullet points only), it could have been done using ANYTHING!
This makes Office Student and Home edition the only piece of Microsoft software that I have purchased that did not come bundled with a computer for around 10 years! (I do not copy licensed software).
Not that easy
In order to hijack a range of IP addresses, you have to subvert a core ISP, or find some way of injecting false BGP (or whatever they use nowadays) information into the wider network. You have to be trusted, and at particular points in the network, for your BGP information to be believed by your neighbours.
While I am not saying this is impossible, it is so fundamental in the operation of the Internet as a whole that if this is compromised, the operation of the whole Internet is at risk.
To El Reg: to see whether an IP address is where you think it is, you can try using traceroute (oh, sorry, tracert for Windows users) to see where the packets appear to go. While it is not a sure-fire thing (traceroute can be blocked easily, and some routers do not respond), you may get sufficient clues from the names of the routers that have DNS entries to guess at the routing of the packets. If this does not work, you might try a ping -R (UNIX/Linux only?) to get the return path of the packets.
There are probably many better tools, but dig (although I still use nslookup), traceroute, ping, netcat, telnet, nmap, wireshark and other tools such as nessus should all be in the metaphorical toolbox of people who want to diagnose network problems.
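A minimal session along those lines might look like this; 203.0.113.10 is a documentation address standing in for the suspect IP, and www.example.com for the name you're checking:

```shell
# What does DNS claim the name resolves to?
dig +short www.example.com

# Forward path to the suspect address (tracert on Windows).
# -n skips reverse lookups so unresponsive hops don't stall the output.
traceroute -n 203.0.113.10

# Ask intermediate routers to record the return route in the packet.
# IPv4 only and widely filtered, so an empty result means "no data",
# not "no route".
ping -R -c 3 203.0.113.10
```

Hops that suddenly jump continents, or router names that don't match the supposed location, are the clues to look for.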
@AC re: @Me. Yes, I know. Bad me.
Since being able to directly reply to a comment, I have developed a bad habit of assuming that it will be obvious what I'm replying to. But this is not a threaded forum (thank goodness) so this is not always clear. This is not the first time this has happened, and I'm annoyed with myself for it.
I meant amanfrommars, of course. I may be reading the wrong comment trails, but I don't think I've seen a comment by him for several weeks.
Where is he, anyway?
I've missed him.
"allows you to view the screens of the hosted VMs and switch between them"
It's part of the design, and it requires some external hardware (so this may be enough for you to argue), but the PowerVM hypervisor on IBM Power systems is architected so that it is possible (using the HMC or IVM partition) to open the console screens of all LPARs from the console.
The same was true for the daddy of all type 1 hypervisors, IBM VM, which also allowed a single master terminal to display the consoles of each of the partitions (IIRC; it's been some years since I worked with it, and most customers used to want physical consoles anyway).
Yes, I know they are text only, but when you have OSs which do not rely on a GUI to run them, why do you need anything other than a text-mode console? You have X11 and, if you must, virtual consoles over VNC for that.
Where you are getting confused is believing that VMware and Citrix invented the type 1 hypervisor. They did not.
Of course, anyone in the know realises that there is really no such thing as a "Bare Metal" hypervisor because, as has already been pointed out, VM is an operating system (it used to actually be booted from disk!) and PowerVM is actually a locked down (and not even particularly cut down) Linux kernel in flash storage. But hey! It's convenient for the vendors to sell the hypervisor as a firmware "Black Box" that the customer needs to know nothing about, particularly when the security people come snooping.
For bog's sake. It's easy (although costly).
Zone your network using firewalls. Wireless access appears in one zone, which does NOT have any critical servers in it. Employ a capable network engineer or two, and let them build a working relationship with the security people.
Control the keys using the strongest authentication all your official devices can use, preferably based on something like RADIUS. Regularly change any PSK keys that you have to have, and only circulate the changed keys to people with registered devices.
Query all devices using a device-checker probe (something as simple as nmap or wireshark should be able to find most devices) and track down any unauthorized devices. Scan for unauthorized wireless networks in the vicinity, and attempt to identify whether each is the coffee shop downstairs or a rogue access point in the building (I'm serious, it happened somewhere I worked!). Make sure that all laptops physically attached to the wired network have wireless services turned off (including 3G 'dongles' and Bluetooth). Run regular security scans on laptops to check that this is the case.
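The device sweep can be as simple as the following sketch; the subnet is made up, and approved-hosts.txt is a hypothetical sorted register of devices you have sanctioned:

```shell
# Hypothetical wireless zone - substitute your own address range.
# -sn ping-sweeps without port scanning; -oG emits one grepable line
# per host, from which awk keeps the addresses that answered.
nmap -sn -oG - 192.168.10.0/24 | awk '/Status: Up/ {print $2}' | sort > seen-hosts.txt

# Anything seen on the wire but absent from the register needs
# tracking down (comm -13 prints lines unique to the second file).
comm -13 approved-hosts.txt seen-hosts.txt
```

Run it from cron and mail yourself the diff; an empty report becomes the normal case, and anything else is worth a walk round the building.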
Put simple services (like printing and possibly mail access) within the DMZ. Allow devices on the DMZ controlled access to the Internet and then back in to your corporate gateways, exactly the same as if they were coming in from the Internet. Knock specific holes, controlled by the strongest access control you have, in the inward-looking firewall for any apps that absolutely have to be accessed from mobile devices. Argue the case for blocking every single one, until you have been convinced that it is necessary and appropriate controls are in place.
Review these holes regularly, and have strong procedures to track leavers and joiners. Ban, with the strongest penalties, sharing of IDs and revealing PSKs to non-authorized users. Lock services to specific IDs using strong authentication, preferably using one-shot password devices.
Be prepared to use VPN for any really critical services, especially those containing private or critical data. Select your approved devices carefully, to make sure that they meet all the security requirements. If there are vulnerabilities known on your mobile device of choice, make sure you have appropriate AV software deployed and updated.
If you are paranoid, consider using glass coatings on the windows to control the leakage of the WiFi signal out of the building, but if you are that worried, you should probably not use wireless services at all. Work out how far your wireless networks spread outside of your controlled space, using normal devices and focused antennas as well. Show the controlling managers this, and demonstrate it as well.
And above all, if you value your business, JUST DON'T USE WIRELESS SERVICES. This should include wireless keyboards, and any future wireless USB technology. If the MD objects, put a reasoned argument that the very business itself is at risk if the network is compromised. And if you are over-ruled, either be prepared to give in, lodging an "I Told You So" letter somewhere in the business, or to resign on principle.
It is clear that the "Block everything, then allow only what's essential" principle applies here.