1696 posts • joined 15 Jun 2007
Re: Surprise? @AC 12:28
Yes, it's terrible. Back before the Morris worm, nobody thought to write code with the level of care that we do now, because these attacks had never been seen before, so nobody believed they were possible. I'm sure that there were huge numbers of buffer over-run and similar exploits scattered around every single OS of that era.
Every aspect of computing, be it OS design or the primitive networking that was available back then, must be regarded as primitive by today's standards.
As always, isn't hindsight a wonderful thing.
It's interesting that the arbitrary-code-execution buffer overrun problems only work if the compiler stores local variables on the stack, the stack grows downwards rather than upwards, and there are no stack-frame barriers imposed by the OS or language runtime. An actual stack-smashing arbitrary-code-execution exploit is also much easier if the calling conventions and the layout of the stack frames and return addresses (which are architecture- and OS-dependent) are known in advance.
This means that gcc-compiled binaries running on Intel platforms using the conventional Linux calling conventions can be easily targeted, but otherwise you need to know the target before you start. Of course, corrupting the stack will have unpredictable results regardless of the architecture, but most of these will be denial-of-service type problems rather than arbitrary code execution. Still a concern, but rather less of one.
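To make the mechanics concrete, here's a toy model in Python (nothing here is real machine code or a real exploit; it just illustrates why a downward-growing stack plus an unchecked copy lets a local buffer clobber the saved return address):

```python
def make_frame(buf_size, return_addr):
    # A frame: local buffer slots first, then the saved return address.
    # With a downward-growing stack, a copy that runs off the end of the
    # buffer advances towards the saved return address.
    return [0] * buf_size + [return_addr]

def unchecked_copy(frame, data):
    # strcpy-style copy with no bounds check
    for i, value in enumerate(data):
        frame[i] = value
    return frame

frame = make_frame(buf_size=4, return_addr=0xCAFE)
# Five "bytes" into a four-slot buffer: the fifth lands on the return address.
smashed = unchecked_copy(frame, [1, 2, 3, 4, 0xBEEF])
print(hex(smashed[-1]))  # -> 0xbeef: the saved return address is clobbered
```

In a real attack, 0xBEEF would be the address of attacker-supplied code, which is exactly why knowing the frame layout in advance matters so much.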
How times change
I'm putting my old-fogey hat on here, because there is a lot of history behind X.
When X was first deployed in the mid 1980s to early 1990s, you generally did not have a graphical login. The sequence was nearly always that you logged in using a text-based login mechanism, and then started X with the startx or xinit command.
What this meant was that the X server ran as you, the user, rather than as root, a privileged user. Indeed, as I write this, I've just switched across to an AIX 7.1 system that does run an X server (it's the enterprise management system for an AIX cluster), and I can see that the /usr/bin/X11/X binary does not even have the set-uid-on-execution bit set. So on a traditional UNIX system using code derived from the MIT X11 code, the X server certainly does not need to be run as root when started using the old methods. (I've just tested this as well: startx still works, and still starts up MWM on an AIX system. How quaint!)
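For anyone curious, the set-uid-on-execution bit is easy to check programmatically. A small Python sketch (the demo uses a scratch file, since a file's owner can set the bit on their own files; on the AIX box above you would point it at /usr/bin/X11/X instead):

```python
import os
import stat
import tempfile

def is_setuid(path):
    """True if the set-uid-on-execution bit is set on path."""
    return bool(os.stat(path).st_mode & stat.S_ISUID)

# Demonstrate on a scratch file rather than a platform-specific binary.
fd, scratch = tempfile.mkstemp()
os.close(fd)
os.chmod(scratch, 0o4755)    # rwsr-xr-x: setuid bit on
print(is_setuid(scratch))    # -> True
os.chmod(scratch, 0o755)     # rwxr-xr-x: setuid bit off
print(is_setuid(scratch))    # -> False
os.remove(scratch)
```

This is the same information `ls -l` shows as an `s` in the owner-execute position.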
Whilst this does not alter the fact that an oversight in the code could cause the problem documented here, it would mean that any exposure will not have root access.
At some point, some bright spark decided that a graphical login process was a good idea, so they started X (as root) before the user logged in. I would need to check, but I'm fairly certain that the original xdm (X Display Manager) actually re-spawned the X server as the user when you logged in on the console of a system. It is certain, though, that the CDE dtlogin, GDM and whatever KDE uses for graphical logins keep the server running as root, with the correct X cookies in the xauth file to allow the user's X clients to connect.
This is a broken concept. It was originally intended that the X server process should be running as a non-privileged user. It would have been pretty simple to start the X server as a non-root placeholder user, rather than root, but I guess that nobody thought of it as a problem.
I'm not sure whether there are any of the X extensions (like DRM) which need to be able to talk directly to the hardware as a privileged user, but the original intention was that the X server would not be privileged.
But anyway, who uses BDF fonts any more? They're pretty obsolete, and to exploit this you would have to put a compromised font in the correct place and then re-start the X server. As the default locations for the fonts are in directories that an ordinary user does not have write access to, as is the initial starting configuration for the X server, it would require root access to put the font file there in the first place, which rather negates the value of the exploit! If I already had root access on a system, there are a whole load of other back-doors I could add without going through this rigmarole.
I suppose it may be possible to place the font file in the user's own file space, then trigger a font path change and/or font rehash (xset +fp and xset fp rehash) with the server running. It would be interesting to see whether this would actually crash the X server. I might give it a try.
Re: Lazy Parents @Big_Boomer
"perhaps they should have thought of that before becoming parents"
Don't take this the wrong way, but do you think people foresaw these problems 10 or 15 years ago, when those currently looking after tweens and teenagers made the decision to become parents? (And the nature of sex often forces parenthood onto people unplanned, especially if they could not get good contraception advice for lack of good sources of information.)
Nobody really knows what it is like trying to look after children before they have them. Don't you remember the increasingly hollow and worried feeling as the birth of your first child approached? I know I was petrified!
I'm sure that good parenting classes aimed at new parents 10 years ago did not even mention the perceived hazards of today's internet. For goodness' sake, most households did not even have internet-capable computers before the dawn of this century, let alone pocket devices that could access it.
Things change, as do responsibilities, and the world of the Internet and what it can enable far outstrips what most non-IT literate people realise, both good and bad! This is why they want someone else to take the responsibility of protecting their children. They just don't know how and cannot understand the process to get the required knowledge.
Re: I agree with BT's statement 100%
I don't disagree, especially about parental responsibility, but it's becoming increasingly impractical to install filters on all the devices, unless you only have a small number of internet-capable devices in the house. This is especially true if parents choose to buy their children smartphones or tablets that are allowed to connect to the Internet.
It won't be long before all TVs and other devices contain some form of internet connectivity, and trying to put parental filters on those could prove a challenge for a technically able person, let alone the average Joe Bloggs. I have well over 30 internet-capable devices in my house, and I do not know how to impose filters on Xboxes, Wiis and PS3s, or even my daughter's Mac.
More boundary protection (making the routers act more like a firewall as long as you could select the degree of protection) would help, but that would not be significantly different from the ISP filtering in their network, especially if they maintain the block-list.
Even if you do put some form of parental filter on the individual systems, you are at the mercy of the organisation maintaining the block-list as to what is allowed through, just as much as if the ISP does the filtering. I fail to see any real distinction.
Re: Drain cleaner @Alan Brown
I don't believe in filters as a substitute for responsible parenting. Our household has been connected for over a decade, with wireless and computers that the kids use exclusively (i.e. I don't) for much of that time. For the last 5 years or so, everyone in the house has had their own system that they control (except my wife, who wants someone else to fix hers when it is apparently broken).
What I do have is a firewall that logs all the URLs that are visited. I told my kids when they were younger that I was not going to put any filters, blocks or parental controls on what came into the house. But I did say that I could see most of what they were doing if I had cause to, although I would not under normal circumstances. As far as I can tell (and I have looked for signs of them using proxy or anonymising services) they have never attempted to hide what they are doing.
We (my wife and I) also have an open policy that if there is anything they are worried about, be it viruses, health issues or inappropriate material, that they could always talk to us to discuss it without any recriminations. And of course, they can talk to each other about similar issues. It has not always worked, I believe that my oldest son was the recipient of non-physical bullying, which he said nothing to us about at the time. But we try.
I hope that my kids are well adjusted, and have acquired a knowledge of where to draw the line about what is appropriate.
That is my attitude, and my responsibility. I know that there are others out there who welcome the additional controls. That is their decision, and I accept that there are valid reasons why they may want that. And having a filtered internet feed does have a place for people who cannot ensure that their systems are suitably protected. It's just another (justified, in their eyes) brick in the wall. It really is the case that even quite knowledgeable people can't be totally sure that the systems in their house are protected to the degree that they would like. Computers are just too complicated for anybody but the most technically able to protect, especially the 'sheeple' you are talking about.
This means that I agree that parents need to take responsibility. But I'm not going to suggest that kids should only use computers under adult supervision, at least not once they reach an age where the parents would trust them to be out on their own, for example. That way leads to young people who will go to extraordinary lengths to get out from what they will see as over-controlling parents. Trust is important.
Your arguments risk descending into the realms of wrapping kids up in cotton wool, which results not in well-adjusted members of society but in grown-ups who do not want to take their own decisions. I've seen the results of over-protective parenting, and it often leads to behaviour as bad as or worse than that of kids given free rein.
It's a complex and difficult area that will always have winners and losers, fans and critics, whatever is done. There is no winning solution, just a choice between less-bad ones.
Re: Drain cleaner @Roo
You've missed the point.
There may be some parents who want a filtered connection, but would like sites specifically set up for teenage sexual-health issues to be allowed, because a reliable source of good advice is much better than learning in the playground or behind the bike sheds (or wherever teenagers hang out now).
From the article, it is these sites that have been incorrectly blocked, so parents with that mindset would not just turn the filters off because it would allow much worse through.
Quite often, sources of good information are publicised in doctors' surgeries, libraries, and sex education classes at school. That is how the sites get known. Whether the blocks are spotted depends entirely on whether they are blocked silently, or whether a banner message appears: "You've been spotted trying to access a filthy site. Desist, or tremble in your shoes while we tell the account owner!"
Fortunately for me, the last minor in the house turns 18 in January, so I will just turn the block off when I get told how, not that I was overly worried in the first place.
Re: Douglas Adams was right, once again. @Gav
Good point. I should have remembered the exact quote better, especially considering what a wordsmith Douglas was. But 126 ly is still nothing, bearing in mind that the diameter of the galaxy is 100,000-120,000 ly.
The distance from Earth to the centre of the galaxy is about 27,000 ly, so Ursa Minor Beta at 126 ly from Earth is just next door.
Betelgeuse, which is often quoted as being close, is ~643 ly, which is considerably further away.
Re: Douglas Adams was right, once again.
I'd leave the peanuts unless you intend to travel by matter-transference beam. I'd grab the towel myself.
What I've never understood is why, if the Earth is in the unfashionable western spiral arm of the galaxy, Ursa Minor Beta (β UMi, or Kochab), which is a mere 126 light-years distant and thus in the same arm, is the third-hippest place in the Universe, and contains the second-hippest. The hippest place may also be there (Zaphod Beeblebrox's left cranium), if he happens to be visiting the entrance lobby of the Hitchhiker's Guide to the Galaxy offices.
"When you are tired of Ursa Minor Beta you are tired of life." (Playbeing magazine).
Re: FM radio will not be killed...@DiViDeD
Re: Across the channel
And, ironically in the UK, I hear most of the DAB commercials on .... digital radio stations! Makes me laugh.
FM radio will not be killed...
...it's just that it will be relegated to only carry local radio. It's only the national stations that will be forced to change.
So the FM radios that people have will not become useless. They will still be usable, but only for listening to local stations, which will still broadcast on FM.
Does not make me want the switch to happen any time soon. DAB reception is dire on my journey to and from work when I do most of my radio listening.
...offering a Win7 update at low cost to existing XP customers. Oh no, they're hoping that those customers will fork out for new machines, and count as new Windows 8 sales!
Unfortunately, unless MS do this, many XP users will keep it until they can no longer log onto their on-line banking, and then there may be scope for persuading some of them to use something like Linux Mint. (Note: I'm in the process of defecting from Ubuntu to Mint Debian Edition at the moment - trying to resist the whims of Canonical [Unity and Mir] has finally persuaded me to jump.)
..market share... graph is strange
The X-axis is the 'wrong' way round, with the latest quarters on the left.
This is not what I expected when I first looked at it.
Re: OK, @cornz 1
Except that the 360 does not use x86 compatible processors, so is not strictly a PC in the "IBM Compatible" PC manner.
In order to run Xbox 360 games, they would have had to include some processor emulation or run-time translation of the instructions. This is what Transmeta did for their Crusoe processors, and we can see how successful that was.
Re: Mere tissue of a story
Looking at the staff writers remaining, then I think that your last statement is probably true.
Re: Big fish and small fry - yum!
One wonders whether this includes any single person contracting companies that many IT contractors work through.
If it does, then the figure is mightily misleading, because it will not indicate any change in anything other than how it is being counted.
...whether the PoS tills run an embedded version of Windows, or one of the full-fat versions?
Re: Linux support... well, who can say? @Jason
Ah. I'd forgotten the difference between SAS and SATA. I work so much with SAS that the restrictions in SATA compatibility flew past me while I wasn't thinking.
Linux support... well, who can say?
I'm making the (possibly erroneous) assumption that this thing is put together using industry standards, but...
If this is two disks, with a 2 port SAS expander built in, then plugging it into a laptop will show 2 drives under Linux. They should just work.
What won't work is any fancy Acronis-style drive imaging software. But boot from a live CD, attach your old drive via USB, and then use gparted to partition and copy the data around. The only problem you may have is writing the boot record, but that should be relatively easy from Grub.
Anybody fancy giving me one to test this assumption?
And in the cloud, the storage is?
Well, I guess spinning rust and tape.
So not so clear cut at all.
I'm still dubious about the longevity of data stored on flash RAM, especially if the flash is stored 'cold', i.e. without power. Until this is proven, I would be wary of using it for information that legally has to be kept for years, which is the traditional domain of archive and long-term backup.
And that's not to mention the security implications of having the data ultimately stored out of your control ('binding' contracts are only as good as the people who wrote them, and there's nothing like having physical security around your data). If a cloud provider goes bust, or is taken over by another company whose modus operandi is not acceptable to you, how do you extract and export the terabytes of information they've been holding for you to move somewhere else, and ensure that they've destroyed all copies of the data?
The kit is not 30 years old. The design is.
OK, the design probably needs updating, but the way this is written suggests that the exchange is still running on kit bought when the IBM PC/AT was the benchmark PC!
Re: Something not yet considered
Actually, it's mainly a matter of momentum. They are still providing the banking services for me mostly successfully, and it looks as if I was not too badly affected by the problems (I managed to get money out of an ATM at just past 19:00, in the middle of the supposed problems). I was not affected at all by the previous problems, but that is probably because I get paid at a different time from most people (it was credits into the accounts that were affected).
Also, it's easy moving an account, it's less easy moving a (reasonably priced) overdraft facility!
Something not yet considered
One point that has not been considered is that RBS are in an on-going transformation. As an RBS customer in England, I was given an outline of the abortive Santander sale, and recently information about the Williams & Glyn split (RBS are splitting most of their Scottish NatWest branches, and most of their English RBS branches, out into a separate bank under the revived name of Williams & Glyn).
I've already been moved onto a different URL for on-line banking and banking-as-an-app, and we've had our debit cards re-issued twice in the last 18 months with different numbers, presumably with a different issuer code embedded in the long card number.
I suspect that much of the $2bn+ spend promised for next year is already earmarked for that split to happen.
It is possible that it was some of the on-going work for this which caused the problems, although Cyber Monday would be a really bad day to plan it. More probably, someone just screwed up.
Re: How many are waiting for Windows 8 to be "retired"? @Steve Knox
Oops. For the memory in my first PC, read 16MB, not 16GB!
Wow. How much would that have cost in 1996!
10 minutes is obviously not long enough to spot such errors.
Re: Lets try to look at the facts @Denarius
I suggest that you take your Android tablet, attach an OTG USB cable to a small USB hub, and plug in a proper keyboard and mouse.
Android works fine like this, without the need to touch the screen at all. I really find that for desk related activities this improves the usability of the tablet.
The only thing it is missing for general use is applications-in-windows (although I know lots and lots of PC users who just maximise the current application and don't really need windows), and I can definitely see tablet-like devices with some cloud services displacing PCs on desks.
If you have a modern Android phone, try adding a proper display as well, and see whether you could conceive of using your phone as your only computing device!
Re: How many are waiting for Windows 8 to be "retired"? @Steve Knox
Steve, I respectfully disagree.
If you had said 2001 or 2002, the introduction of XP, I would tend to agree.
When the PC was introduced in 1981/1982, they were very clearly business only devices. Even in the US they were too expensive for a home purchase unless driven by a specific need, and this was even more the case outside of the US where a basic PC at introduction equated to about half of my yearly salary as it was at the time.
I would say that the whole area of media consumption is completely new since that time. You could not even use a PC (and here I am talking about an IBM-compatible PC) to play music from any domestic media available at the time, and that is the easiest medium (unless you count books, which weren't distributed to be read on PCs).
The home market of the early '80s was better served by Apple, Commodore and Atari kit in the US, by the plethora of manufacturers including Acorn, Sinclair etc in the UK, and by Japanese companies for the rest of the world, and I would say that a modern PC has nothing more than a passing relationship to any of these devices.
It took until the mid '90s for the PC to become an attractive home purchase. I bought my first IBM compatible at that time (I was a committed BBC Micro user), for about £1000, and it came with a 100MHz Pentium, 16GB of memory, a 1.2GB hard disk, a CD Reader, an ATI Mach64 display adapter with 14" monitor providing 256 colour 800x600 resolution SVGA display, and a sound-blaster compatible sound card, running Windows 95 (and Linux - although this was a real effort getting it to work). This could be counted as a 'multimedia PC', and whilst you could listen to a CD, you would not (and probably could not) watch a film on it with any degree of enjoyment.
And to bring it back to the point: the majority of modern home PCs are used as media consumption and social media devices, which a tablet, Chromebook or convertible will do as well as or better than a PC. PCs (especially desktops) will become niche devices for people who need more storage or processing power than a low-power device can provide.
In business, PCs are mostly an alternative to form-filling, paperwork and performing data-lookups, and I suspect we will see PCs being displaced by thin clients based around the same technologies as a tablet-with-a-keyboard or small laptop (yes, really this time) because of the efficiencies in the administration and cost savings of large numbers of such devices.
A thin client deployed on top of Android on an ARM device, built into the screen (effectively a tablet), with a keyboard and pointing device and limited amounts of local storage, connected to a server estate using a remote desktop technology or cloud service, will be a very desirable device for many businesses, and probably cheaper than an equivalent environment built around desktop PCs. Deploy a secured WiFi, and you don't even need to cable the desks for data!
I can see devices like this retailing for around £100 per desk in volume, plus some extra for the backend services, in the near future. All we need are the applications, and I'm pretty sure that OpenStack or Azure, deployed either as a local or remote cloud, will get sufficient traction for serious software to appear and displace PC software.
It's all a bit bleak for the PC market, quite honestly.
Bloody hell. I've just argued myself into thinking seriously about SaaS cloud services! Maybe it's not all BS!
Re: How many are waiting for Windows 8 to be "retired"? @Steve Knox
I think that modern PCs are used for more than those from 1981.
Remember back then: no Internet (let alone Facebook, Twitter et al.), no common graphics standard (and nothing capable of displaying a photograph), no digital cameras, connectivity to the outside world by modem and BBS if you were really advanced, no email, no audio hardware, no writable optical drives to rip/write CDs, no DVDs at all, no video, no mice. Monochrome dot-matrix printers if you could afford $/£1000; otherwise no hardcopy.
If you are really talking 1981, then no hard disks. Everything from 5.25" 360KB floppy disks!
No porn! (At least none that was interesting - I remember an animated ASCII-art flasher doing the rounds by sneakernet.)
And, and and. A PC would set you back $/£2500 (I'm actually looking at an IBM PC advert from 1982 in an old magazine at the moment), which was more than most people paid for their cars!
OK, you still had people writing letters and doing basic spreadsheet and simple database, and you did have text and basic block graphics games but that is about all that is common between then and now.
But now there is very little you can think of that a PC made in the last 5 years or so cannot do. Even new technologies such as 3D printing are well within the capabilities (with a suitable printer, of course) of a modern PC.
What is happening is that the common sub-set of technologies that Jane and Joe Public need can now be done from a device that is more like a tablet than a computer. Only nerds like you and I use some of the other stuff.
I mean, really. A Chromebook or a tablet or even a modern phone with the option of an external keyboard and connectivity to a telly will do all of the media consumption and social media that your average non-technical user will ever need.
I can see nothing that will prevent the decline of the PC into the technical niche that it emerged from. It'll never disappear completely, but will become to a tablet as a tape recorder is to an MP3 player.
Re: OSs? @AC
If you want to be taken seriously, post this as something other than an AC.
BTW. My everyday system (systems, really, because I move/clone the disk from laptop to laptop as I change machines, upgrading the distro whenever appropriate - but still with the same home directory and machine identity) has been Linux only for about 8 years. It works very well, actually, and I make a habit of not fiddling with it under the covers, because I don't want to break it. Almost all of the installation, admin and maintenance is done using the GUI tools provided.
The problem here is that MS will probably stop updating Security Essentials immediately, and then pressure the other anti-virus companies to stop providing updates for XP (they have various contractual tricks they can use to force software suppliers who use MS application development tools and libraries to stop providing updates).
You will also find MS indirectly pushing companies like banks, who need decent security, to change their web-sites to stop accepting connections from IE8 or earlier, for "the user's own security and on-line safety". Of course you can run Firefox or Chrome, but Google have a history of dropping Chrome updates for an OS once the OS itself goes unsupported.
Once you have an unsupported OS with no up-to-date AV, and a limited choice of browsers, I suspect that you may think again about whether the box may be more useful with Linux on it.
When it comes to XP on older kit
You absolutely have to keep XP SP3 off of it.
I'm convinced that MS added an extra "feature" to SP3, which was "make it run so slow that the user wants to dump it and buy a new computer".
My dad had a Thinkpad T43 (Pentium M Mobile, 1.83GHz, 1GB memory) that ran XP SP2, with auto-update turned off, quite well. Some MS social engineering trick ("click here to fix this") got him to turn auto-update on, and now the machine is barely usable.
I know, I know. There's other security fixes in SP3 as well, but I've repeated the exercise several times on other machines running XP, and it really is the case that SP3 increases the OS footprint and loads heavyweight services that make an older machine really sluggish.
You should see how fast even a quite modest machine (by today's standards) runs with a fresh install from a 2002 XP install disk! (my first XP machine had an AMD Athlon XP1700 - 1.5GHz, 128MB memory, and was pretty fast at the time). Just don't connect it directly to the Internet, or browse any dodgy web sites!
Re: The overwhelming message I get from these ads
Microsoft also had an advertising campaign in Japan based around the recent Ghost in the Shell: Arise anime.
It showed various members of Section 9 doing what they do holding and passing around a Surface, which contained some important data or something.
The problem with this, as anybody who is familiar with GITS knows, is that the people shown (I remember at least Motoko and Batou in the ads) are cyberised, i.e. they are cyborgs with cyber-brains (implanted computers) together with some kick-ass comms. They had absolutely no need for a Microsoft Surface to do the things they were supposed to be using it for!
Just showed that either MS or their advertising agency really did not know what they were doing. I suspect that the animators probably felt a bit dirty to have done the ads, but only until the money hit the bank!
I think that the ads are still knocking around on YouTube if you want to see them yourself.
Re: Expansion @John Tserkezis
The Dark Lord was commenting about moving new kit into the machine room (normally this involves rolling it across the floor, which on a suspended floor would cause significant vibration - certainly more than having the disks powered up and the heads moving).
I would actually have thought that the main reason why the disks were powered down was because of power consumption and temperature, rather than vibration. Disks are not that fragile.
480 drives in a rack is not that dense. 384 disks in a 4U enclosure in a 30in rack is a much higher density (I have a rack with 5 of these disk enclosures in each of the HPCs I look after, totalling 1920 disks in 20U of space - about half a rack), and all of these are spinning all the time.
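For what it's worth, the arithmetic (the 42U height for the 480-drive rack is my assumption; the article doesn't say):

```python
# Density comparison from the figures above. The 480-drive rack's
# height isn't stated, so the 42U figure is an assumption.
enclosure_disks, enclosure_u = 384, 4
enclosures = 5
print(enclosures * enclosure_disks, enclosures * enclosure_u)  # 1920 20
print(enclosure_disks / enclosure_u)   # 96.0 disks per U
print(round(480 / 42, 1))              # 11.4 per U, if that rack is 42U
```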
@Destroy All Monsters
My knowledge of telco machine rooms may be rather dated, but it used to be that almost all telephone exchanges put the kit directly on a concrete floor because of the weight (a practice evolved from having vast and very heavy mechanical exchanges). With a solid floor for load bearing, it made sense to take the cables up to the ceiling. Old habits die hard, and many modern exchanges were installed in old buildings.
It may be that modern electronic exchanges more closely resemble computer machine rooms, but in this case, you can see from the picture that it is a solid floor, with the cabling to the ceiling.
Re: Expansion @The Dark Lord
These are telco style machine rooms, no suspended floor and wiring from above.
The floors are solid sealed concrete, so probably don't vibrate too much.
And there you have PERCS. If you look at an IBM 9125-F2C (Power7 775 cluster), they are very dense, are water cooled (CPU, I/O Hub, memory and power components) with integrated electro/optical network interconnects eliminating external switches, and storage moved into very dense arrays of disks in separate racks.
When my workplace moved from Power6 575 clusters (which were themselves quite dense), we kept approximately the same power budget, increased the compute power by between three and five times, and doubled the disk capacity, all in about one third of the floor footprint of the older systems. And to cap it all, the new systems actually cool the ambient air of the machine room.
But these systems proved too expensive for most customers, and IBM was a bit ambitious about the delivery timetable. Combine this with a contraction in the finances of many companies, and IBM failed to sell enough of them to keep them in volume production. But they are very impressive pieces of hardware.
Replacing them with a 'next' generation of machines is going to be hard.
Re: Tellies can handle 60Hz input @AC
I understand what you are saying; you've not understood what I said. But anyway, interpolating in linear light introduces an additional motion-blur component, as you've effectively got to interpolate intermediate frames that do not exist at the re-sample point, and they will always be, in one way or another, a guess. Also, doing it in near real-time for HD video may require more compute power than is in the Xbox.
What I said would still work, although as I also said, it is impractical.
Re: Tellies can handle 60Hz input @Mage
I'm assuming that it was you who down-voted me.
I was not suggesting frame conversion. I was suggesting using a frame rate between the Xbox and the TV that is an exact multiple of both source frame rates, removing the need to re-sample. This is why I chose 300 fps: it is an integer multiple of both 50 and 60, so each source gets an EXACT number of output frames. For the 50 fps source you would leave each image up for six of the 300Hz frames, and for the 60 fps source, five. A perfect fit, with no resampling, allowing both videos to run side-by-side at their native frame rates.
Of course it's completely impractical as well, and would only work for these two frame rates (or other divisors thereof).
If you assume both videos are interlaced, you could probably take that down to 150 fps, but that is a big assumption.
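The frame-rate arithmetic can be sanity-checked in a few lines (Python, purely illustrative):

```python
from math import lcm  # Python 3.9+

# A display rate that can show both sources without resampling must be
# a common multiple of the two source frame rates.
display_hz = lcm(50, 60)          # 300 Hz

# Each source frame is then held for a whole number of display frames.
hold_50fps = display_hz // 50     # 6 display frames per 50 fps source frame
hold_60fps = display_hz // 60     # 5 display frames per 60 fps source frame

print(display_hz, hold_50fps, hold_60fps)  # 300 6 5
```

The same calculation with 25 and 30 fps (the interlaced assumption) gives 150.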
Re: Tellies can handle 60Hz input @Brangdon
Well, that's it. The Xbox is trying to harmonise the frame rates for two different video sources. It's not really a surprise that one or the other will be affected.
In order to be able to simultaneously display a 50 fps and a 60 fps picture perfectly, you would need to output from the Xbox to the TV at 300 fps (so the 50 fps image would appear on 6 consecutive frames, and the 60 fps image would appear on 5 consecutive frames).
This would be beyond most TVs, even modern ones.
Apart from the obvious power cable (and check the voltage ratings on the label on the back of the telly as well, although most European countries use between 220 and 250V), you have the problem of the DVB-T format, although most European tellies do DVB-T2, which is backward compatible with DVB-T.
You may have to tell it to scan different frequencies, and sometimes this is in a hidden menu. It depends where you are coming from.
If you are just using external video sources (DVD, consoles, set-top boxes etc), things should just work.
Don't understand this!
Modern flat-panel televisions just do not have the old mains-frequency lock or the 'flyback' frequency problems that old CRTs had.
I very much doubt that there is any difference in the hardware for a Korean or Chinese television destined for the UK or for the US.
Tellies have a frame buffer (or two). The frame buffer is painted, and the picture is displayed. This can be asynchronous from any timing signal external to the TV. As long as the hardware can keep up with the fastest frame rate, it should be able to sync to any slower rate without difficulty.
However, if the Xbox is re-sampling the frame rate of an external video source as it passes through, then this could conceivably cause missed or duplicated frames (50Hz->60Hz means some frames will be sampled twice). Anybody who has played around with frame rates when transcoding video will have experienced this, although I suspect that most people who believe they have done this probably used 'canned' settings rather than really experimenting.
So I suspect that the Xbox is re-sampling at 60Hz, or possibly screwing around with the de-interlace settings (Sky broadcasts HD at 1080i), rather than it being a problem between the Xbox and the TV.
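Assuming a simple sample-and-hold resampler (an assumption; the Xbox may do something cleverer), the duplicated frames in a 50Hz->60Hz conversion can be sketched like this:

```python
# Map each 60 Hz output frame to the 50 Hz source frame it samples,
# using simple sample-and-hold (no interpolation).
def source_frame(n, src_fps=50, out_fps=60):
    return (n * src_fps) // out_fps

# Over any 6 consecutive output frames, one source frame is shown twice,
# i.e. ten judder-causing duplicates every second.
frames = [source_frame(n) for n in range(12)]
print(frames)  # [0, 0, 1, 2, 3, 4, 5, 5, 6, 7, 8, 9]
```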
Re: Hmmm. Extract from the lawsuit.
It was the scale of the claim, "billions of images" and "near instantaneously", that I was mocking.
I'm sure that there are tools which will look at images and spot similarities, but I'm also sure that they're not instant. Let's assume the images are 100KB each, and there are "a billion" of them. That's 1x10^14 bytes (hey, lookie what a silver badge allows me to do!), or approximately 100TB of image data. If they can read that and process it "near instantaneously" then they have a better system than the top-100 HPC system that I'm looking after at the moment.
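For what it's worth, the back-of-envelope sum (with my assumed 100KB image size and an assumed, generous 10 GB/s of sustained read bandwidth) goes like this:

```python
images = 1_000_000_000        # "a billion" images, as claimed
size = 100 * 1024             # 100 KB each (an assumption)
total = images * size         # ~1.02e14 bytes, roughly 100 TB

# Just reading the data at 10 GB/s sustained takes hours,
# before you even start on any image analysis:
seconds = total / 10e9
print(total, round(seconds / 3600, 1))  # about 2.8 hours
```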
Hmmm. Extract from the lawsuit.
Claim 29: ...which can scan billions of images nearly instantaneously......
Gosh. I really could do with some of these systems that Match.com must have. Near infinite disk bandwidth, and very sophisticated image hashing and analysis tools.
With that technology, I wonder why they're in the dating business. They ought to be coining it in from the application of this technology.
Re: New universities
I totally agree with your comments about 'New' Universities/Polytechnics. I think that giving them the option of becoming Universities was the worst thing that could possibly have happened to the Polytechnic sector.
I agree that most Polys had a big chip on their collective shoulders, but I worked at Newcastle Polytechnic for 6 years, and I met people there who knew what the Polys were for and understood how to represent them. But I remember at the time how surprised the ministers were that all the Polytechnics decided to convert when given the chance.
Older, established Universities are academic. They turn out people with a largely theoretical slant on most science and technology subjects. Polys were set up to be practical and skills-based. They could take students and equip them to take on high-skill practical work. You could see them as an alternative to business-led apprenticeships, leading to BTEC HNC and HND qualifications. Both were valuable but different facets of the education system in the UK.
Generally, academically orientated students with the highest 'A' level results (in the days when 'A' levels could be used to differentiate between students) gravitated to Universities; those with adequate results could go to a Poly and still get highly useful qualifications, just not necessarily degrees.
But there was also a difference in teaching methodologies.
'Old' Universities were more likely to drop students in at the deep end with comparatively little support, and if they sank, throw them out. Those who swam were self-motivated, with sufficient discipline to get the work handed in and pass the exams despite the distractions; when they graduated, an employer knew that they could resist the temptations of student life and still get the job done.
Polytechnics, on the other hand, used to offer better support to their students. The staff-to-student ratio was higher, and there was more emphasis on making sure that the students were coping (at least this was what I saw at Newcastle). This meant that Polys were a better bet for kids who were still in the 'school' mindset.
In the Computer Studies area, Newcastle Poly offered HNC and HND courses in Computer Studies, but not a degree, which was catered for by Newcastle University. The one computing degree course offered by the Poly was a business-orientated degree, specialising in COBOL as the programming language (we're talking 1980s here), with business-orientated methodologies, systems analysis and case studies, together with crossover courses from Business Studies so that the students would have an understanding of Data Processing and where it fitted into a business.
The HNC and HND CS courses turned out people whose skills meant they knew enough about computer systems to program effectively, but who had a less deep understanding of the fundamentals of a computer than their University contemporaries.
With the generally useless 2-year 'foundation' degrees replacing many of the BTEC qualifications, I really don't know what the split is now, and I think that employers have a similar lack of understanding.
60 disks in 4U!
I admit they are special racks (nearer 30" wide and goodness knows how deep), but in the IBM P7 775 supercomputer disk enclosures you can get 384 2.5" disks in 4U of vertical space.
On more mainstream systems, and having used dual-connected SAS drives for about the last 5 years, I will say that the biggest problem here is the repair of a failed expander card in the disk drawer. Although the expanders are redundant, so the loss of one does not stop the service, the repair action is not normally concurrent. This means that you have to take an outage to restore full resilience, even if you have the drawer connected to dual servers, unless you have the data moved or mirrored to disks in another unit. The saving grace is that you can plan the outage, but you have to be careful if you want very high availability.
I learned this the hard way when planning for service work in what had been delivered as a totally redundant system. A bit embarrassing when you end up having to stop all of the workload on a top-500 HPC system just to carry out the work for a single expander card (no, I was not responsible for the design, I only help run it, and it could have been mitigated with a bit more thought).
By the way, this dual connectivity is not a new thing. IBM's SSA disk subsystem also had dual connectivity for both disks and servers back in the mid-1990s. Very popular for HA/CMP configurations, and it allowed for 48 disks in 4U of space.
Ingres and 2BSD
Ingres was available for free (or at media and postage costs) to Universities and Colleges who had a UNIX source code licence. I believe that it was on every 2BSD add-on tape (it was certainly on the 2.6BSD tape I had in 1982).
The University I was at (Durham, UK) was using Ingres to teach relational database in 1978, and I came across it in my second year in 1979.
I must admit that I could not stand it as a subject, because the lecturer was using set theory to try to teach relational algebra, and my maths was beginning to look a little shaky by that time; but when I ended up actually doing real work, I found QUEL quite usable. It took quite an effort to switch to SQL when I had to work on Digital's Rdb, Oracle and DB2.
I don't count database as a current skill now, but I still regard the experience I gained as invaluable.
Re: Symantec writeup very poor @Gorbachov
And that is the point of my OP. The writeup is so vague that we're all guessing.
I admit that the client side attack I sketched out requires access as a user on the client system, but that is a lot easier to get than breaking privileged access. All the usual vectors of Java, side-jacking and social manipulation etc could end up with a process owned by the user in question, which would have whatever access the user has on the client system (but no special privilege). This would mean that it could execute a series of shell commands as the user, run an SSH client program itself, read the user's public key and any private keys stored on the client system, and if the keys are passphrase-less keys, use them to gain access to other systems.
Here is a scenario, possibly far-fetched, and I've not worked it all through: malicious code could set LD_LIBRARY_PATH in somewhere like .bashrc so that a local directory appears before some of the system library directories. It then looks at the Linux distro, fetches a specifically hacked SSL or other library (including libc, I suppose) for that distro off the internet, and puts it in that directory.
Following this, every legitimate program the user starts, including SSH client sessions, could be running malicious code from the bogus library. If it replaced the right routines, you would have a key-logger, and this key-logger would be able to capture passphrases as they are typed, giving access to all of the user's private SSH keys. It could also capture any passwords typed for remote systems.
OK. No breach of privilege required so far. Everything has been done as the user in question.
So, say the user is an admin who foolishly has, in their keystore, the private key of a remote account that has some privilege. The malicious code then has privileged access with which to attack that remote system.
Or, say, the admin has sensibly used a non-privileged account to access the far system, but then uses sudo to issue commands on that system via a compromised SSH session. The compromised client can then capture the password that the user types for sudo, and again has privileged access with which to attack the remote system (unless sudo is really locked down hard).
In both cases, it could inject commands, or even start its own SSH client session using the captured credentials.
How safe do you feel?
Please note that this attack could be used on almost any OS that allows dynamic binding of libraries at runtime and provides an override of the default system search paths. I've sketched it out as a Linux/UNIX attack as that is what I know best, but I seriously suspect that similar attacks are possible on other OSs.
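A toy model of the search-order property this relies on (Python, with made-up paths; the real dynamic linker's rules are considerably more involved):

```python
def resolve(name, ld_library_path, system_dirs, filesystem):
    """Return the path of the first copy of `name` found, searching
    LD_LIBRARY_PATH entries before the system library directories,
    which is the precedence this class of attack exploits."""
    for d in ld_library_path.split(":") + system_dirs:
        if name in filesystem.get(d, []):
            return d + "/" + name
    return None

# Hypothetical filesystem: a user-writable directory holds a trojaned
# copy of a library that also exists, legitimately, in /usr/lib.
fs = {
    "/home/victim/.cache/libs": ["libssl.so"],   # attacker-planted copy
    "/usr/lib": ["libssl.so", "libc.so"],        # legitimate copies
}

# With LD_LIBRARY_PATH poisoned (e.g. exported from .bashrc), the bogus
# library shadows the real one for every program the user starts:
print(resolve("libssl.so", "/home/victim/.cache/libs", ["/usr/lib"], fs))
# /home/victim/.cache/libs/libssl.so

# With a clean environment, the legitimate copy is found:
print(resolve("libssl.so", "", ["/usr/lib"], fs))
# /usr/lib/libssl.so
```

No privilege is needed at any point: everything happens in directories and environment variables the user already controls.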
Eternal vigilance is called for, especially for admins, regardless of the platform they are using.
Re: @AC 14:58
I'm not doing the GNU/Linux 'drivel' as you call it. I'm just pointing out that SSH is as much a part of Linux as Audacity or LibreOffice, or a host of other Open Source projects. They're part of most distros, sure, but not a part of Linux itself. I suggest that you just don't understand what a Linux distribution is.
As an analogy, would you claim that Apache or VMWare player or even Skype is part of Windows if a particular machine vendor chooses to pre-install it on the systems that they sell?
It's not even the case that OpenSSH is the only SSH implementation out there. F-Secure have their own completely separate SSH implementation, as have SSH Communications Security, and there are also other free SSH implementations like LSH and PuTTY (client).
@AC re slipstream SSH datastream
Yes it would be, especially if it could be done from outside the SSH client/server communication stream. But this does not appear to be what has happened. This is hijacking one end or the other, and intercepting/injecting the data at one end of the secure pipe as it were.
Just to point out that SSH is *NOT* part of Linux. It's not in the kernel, nor part of the GNU toolchain, and although it is in the repositories of most distributions, it's also available for most UNIX systems, and also for Windows and probably any other network enabled operating system as well. It's a cross-platform tool. What is important is how and by what vector it was compromised.
So there is a vector (possibly OS specific) that was used to break into SSH, and SSH itself is a vector to compromise whatever OS is being used. Which may be Linux.
Symantec writeup very poor
I know it's difficult to publish information about a vulnerability without providing a means of using it, but the Symantec write-up is pants! I mean, what does "Rather, the backdoor code was injected into the SSH process" actually mean?
Was it added to the binary before it was run, was it added to one of the run-time libraries, was one of the in-core runtime libraries hacked, or was the running instance of the process altered?
It also does not state whether this is a ssh server attack or an attack via the ssh client.
I can think of several ways of compromising the client side of things (each ssh session has its own instance of the ssh client process). These can be attacked using well-known PATH and LD_LIBRARY_PATH attacks without needing privileged access to the client system; alternatively, the on-disk binary or the libraries can be altered if you have access to a privileged account.
Once into the client process, you will have access to all of the private key information on the current system (although you may already have access to that anyway), and I can see how you could capture and re-use key and password information as it passes through the compromised client process. You would also be able to subvert any and all stream traffic (fixed passwords, SSH passphrases, sudo passwords etc.) for any session run through the SSH client, using the client as a keylogger. About the only thing that you would not be able to do is compromise one-shot authentication devices.
Injecting arbitrary commands would be a minor trick, although hiding them is more difficult.
And if the SSH key management is lax (same key used for multiple servers and user identities, especially if some of them are privileged), then you have a recipe for system compromise on a massive scale.
But don't blame this on the Linux security model. Any system with some form of trusted remote execution could be compromised in a similar way.
Re: Shills @Bill
You need not be a MS shill, just part of a system where one supplier can control a market, compelling ordinary people like yourself to defend the indefensible. Microsoft want you to not have an alternative.
There is no reason why Linux cannot become as good as or a better gaming platform than Windows. It's only market penetration that makes gaming companies develop on Windows. It's possible that the Steam effort or CrossOver may just change things.