Re: I remember Windows 95, too
Weezer's "Buddy Holly" and Edie Brickell's "Good Times" videos were on the Windows 95 CD. No idea what the Plus! pack CD had on it.
Well, one big thing Microsoft also did right in Windows 95 with the new UI was to include Program Manager for those who were not ready for it. You could run Windows 95 with the same Program Manager UI from Windows 3.1 if you wanted to. I never saw anyone do it, but you had the choice. Same when they later did the new colourful stuff in XP: there was the option to stay with the older look if you wanted to. Windows 8 was the first time you were force-fed a new UI with no option of saying "No thanks" and sticking with the previous UI until you got used to it (and no one will ever get used to the dreadful UI of Windows 8, which is a shame given the improvements in every other part of Windows 8). With Windows 8, Microsoft managed to simultaneously make a new UI that was awful and not give people the choice to avoid it, rather than, as in the past, making a UI that was usually considered better and giving the option to stick with the old one.
Well actually the Alpha never did 32bit mode either; it was 64bit from the start, just like the Itanic that killed it (well, that, along with totally incompetent management and infighting at Digital).
Looking at the pictures in that test, the sony sensor does appear to have a tendency to make things quite purple in some cases that the samsung does not, but is perhaps a tiny bit more detailed in some of the shots.
The summary of the difference does appear quite accurate.
Now of course it is a cell phone. What are you doing taking pictures you care about with a cell phone anyhow? Get a proper camera.
It is a 12 core CPU with the ability to run 2 threads at once on each core. No different than what intel does with hyperthreading on many of their CPUs.
So 12 physical cores, 24 virtual cores. Your OS would see 24 CPUs.
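To illustrate the arithmetic: the 12 cores and 2 threads per core are the figures from the post above, and on Linux or Windows `os.cpu_count()` reports the logical count for whatever machine you run it on.

```python
import os

physical_cores = 12    # cores on the chip being discussed
threads_per_core = 2   # SMT, like Intel's Hyper-Threading

# The OS schedules onto logical CPUs: cores times threads per core.
logical_cpus = physical_cores * threads_per_core
print(logical_cpus)  # 24

# On your own machine, os.cpu_count() returns the logical CPU count:
print(os.cpu_count())
```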
I have no experience with Xen, but at least the KVM documentation says that while device passthrough is supported, video card passthrough is NOT. A few people have managed to get it to work with some patching.
I do see some documentation on it having been done successfully with xen however.
I agree with other people on BTRFS. The developers say it isn't ready for production use. Of course if your machine is just to run games and do some hobby work, then that might be good enough.
I too would avoid AMD graphics cards. I also personally am a Debian fan, and have no interest or appreciation for the commercial linux distributions.
In the server area, most people really don't care about Windows. Serious server users run linux and ARM will run that just fine already.
The P4 was a bad design. It was obviously built to chase the biggest clock speed number, because that is what Intel marketing wanted. The fact that it was lousy at running existing x86 code that had been optimized following Intel's own recommendations didn't matter to Intel. As long as consumers were buying the machine with the biggest GHz number, Intel was happy with the P4. Only when they ran into leakage and overheating problems, and discovered they wouldn't be able to scale to 10GHz as they had planned, did they throw the design away and start over from the Pentium-M/Pentium 3 design, creating the Core 2 by improving on the older design. Only the new instructions from the P4 were carried over; NetBurst was dead, and deservedly so. The Opteron/Athlon64 destroyed the P4 in performance on typical code at a much lower clock speed and power consumption. Intel has made a number of stupid blunders over the years, but they do eventually admit when things don't work, and they recover quite well by changing course, having the resources to pull it off. x86 will be around from Intel long after the Itanium is gone.
Also, MIPS never really tried: SGI bought into the Itanium idea and killed high-end MIPS development. MIPS still does well in embedded markets, where almost all wireless routers are MIPS based, although a few are ARM. Alpha (owned by Compaq, then HP, then sold to Intel) was killed off, and had failed because Digital priced it out of the market to protect the VAX line, which was eventually killed off by competition from everyone else instead. PowerPC hasn't failed; it does great in the markets that use it (lots of engine computers in cars are PowerPC, as are lots of other embedded systems, and IBM has rather nice servers). Itanium failed because it was slow and stupid.
Apple changed to x86 because no one was making powerpc chips that fit their needs. IBM was making high end server chips which used too much power for a desktop, and freescale was making embedded chips which were too slow for what the desktop needed. Nothing wrong with PowerPC itself, just the server and embedded market was vastly more interesting (and a vastly larger market) than tiny little Apple's measly desktop market. It is the same reason Apple moved from m68k to PowerPC in the first place. m68k wasn't getting faster any more.
ARM64 (well, AArch64) is very much a 64bit extension of the existing 32bit ARM design, a lot like AMD extended x86 to 64bit. All 64bit ARM chips are perfectly able to run existing 32bit ARM code, and a 64bit ARM Linux system will also run 32bit ARM applications with no changes or recompile needed. This is not a new thing. SPARC went from 32 to 64bit, PowerPC did it, MIPS did it, x86 did it, PA-RISC did it, and now ARM is doing it. Nothing complicated about extending an architecture from 32 to 64bit while maintaining backwards compatibility. Only a few architectures were 64bit from the start (Itanium and Alpha are the ones I can think of).
The Sun Niagara is a Sparc, not ARM. Sparc itself is fine, the Niagara design, not so much.
All the ARM chips so far have been perfectly sane. Itanium was not a sane design for a general purpose CPU. It assumed compilers would become able to do compile-time scheduling of parallel instructions, and that didn't happen. I vaguely recall seeing a paper a few years ago that actually proved it can't be done, so what Intel hoped for is, if I recall correctly, actually impossible. And if I recall incorrectly, it is still a very hard problem that has not been solved. So as it stands, the Itanium is a terrible CPU design and rightly deserved to die. It is an enormous shame that it caused MIPS to give up designing high-end chips, made the Alpha go away, and certainly hurt SPARC (PowerPC seems to be doing OK). I don't personally miss PA-RISC.
So no swearing, but killing is OK.
Well if Samsung sells half the worlds smartphones, then actually Samsung alone doing this would make a difference.
And yes the cell phone companies are vastly more evil than your average company. Especially in North America.
No kidding. The awful protocol, the peer-to-peer disaster, the ruining of company networks: it is just terrible. And there were plenty of standards-compliant systems out there before Skype showed up, yet Skype never provides a gateway to them, even though they have often promised to make one whenever the media remembers to ask why they are a closed environment.
Skype has always been about vendor lock-in and for the first while using the end users resources to run the system. What an evil company. I suppose Microsoft is a sensible owner of skype in the end.
Actually SunOS 4 was BSD based. Solaris (SunOS 5) was actually more system V ("real" Unix), although with some bits of the BSD code merged in.
OSF/1 became Digital Unix, NOT HP-UX.
Going with Linux (and GPL2) has the strategic advantage that you know anyone using Linux and making changes is required to release those changes, so it forces everyone to share. If IBM invests $1 billion in helping develop features in Linux, it doesn't make sense to do so if some other company could just take the result and make money without sharing any of their contributions. The BSD license assumes people are nice and will help out, but doesn't force or demand that they do. As an individual developer or a small team, perhaps that is OK and you are happy to see people making good use of your code. On the other hand, if you are putting thousands of people on something, you might want to make sure you aren't funding someone else's business for free.
I personally would use the BSD license if I came up with some small useful piece of code, because I like the being totally free thing, but I certainly see the benefit of making sure everyone plays fair.
Of course, these days Linux just makes sense since it supports more platforms than even NetBSD now, and it is what everyone supports. The BSDs are starting to look obsolete in comparison in terms of support for large systems, oddball systems, etc. Last I saw, FreeBSD had just added support for 64 CPUs at a time when Linux supported 4096. There just aren't as many people contributing to FreeBSD as there are to Linux any more. Of course, the BSD userspace being such awful obsolete stuff that drives you insane compared to any Linux system of the last decade probably isn't helping, although I suppose one could always use Debian GNU/kFreeBSD and get the FreeBSD kernel without the BSD userspace hell, using a nice Debian userspace instead.
If it is only free to use the binary version, then it isn't a free open source H.264, is it?
Rather misleading really.
I for one don't want any binary blobs on my nice open source system, and this doesn't change a thing. just Cisco trying to get some good PR while throwing around "open source" and "free license", except not at the same time.
So linux distributions can't include it, because Cisco pays the license for the binary downloads, which of course means they need to know how many downloads there are, and it only applies to the binaries they offer, not any others built from their source.
The issue here is that a femtocell is part of the cell network, but physical security of it is with some random person. This is a concern for cell phone users in the area should their phone happen to choose to connect to that femtocell.
I think by definition, only one brand can be giving you the most problems. That's pretty much what 'most' means.
On the other hand, it certainly doesn't match my experience, unless you don't actually deal with Seagate in the first place.
And of course if no month ever involves sending drives back, life must be great.
Our cheap time is from 19:00 to 07:00. I can easily handle running the dryer and dishwasher in the evening before going to bed. That is not a problem.
I have never noticed a manual for either a dryer or a dishwasher telling me not to run it without someone around, nor had I ever heard any such suggestion from the fire department or anyone else until I read your comment. You are the first I have heard suggest it.
Where I live (Ontario), we have smart meters for electricity (but not gas), and time of day pricing. At least for me, it has resulted in a lower bill than before the smart meter came in, given I do run the dryer off peak when it is cheapest, as well as the dishwasher and such. So works for me.
I don't have a display telling me my current usage in the house. I would have to walk outside to look at the meter's screen to see that.
Of course given heating the house and water is done with gas, it is only the air conditioning that uses a lot of power during the day time when prices are high. Having a high efficiency model and good insulation in the house helps with that though.
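As a rough sketch of why shifting the dryer and dishwasher off peak lowers the bill, here is the arithmetic with made-up rates and usage figures (these are NOT actual Ontario tariffs, just illustrative numbers):

```python
# Hypothetical time-of-use rates in cents per kWh -- NOT real tariffs
ON_PEAK = 13.5
OFF_PEAK = 6.5

dryer_kwh_per_month = 30.0  # assumed monthly dryer consumption

on_peak_cost = dryer_kwh_per_month * ON_PEAK / 100    # dollars
off_peak_cost = dryer_kwh_per_month * OFF_PEAK / 100  # dollars

print(f"on-peak:  ${on_peak_cost:.2f}")   # $4.05
print(f"off-peak: ${off_peak_cost:.2f}")  # $1.95
print(f"saved:    ${on_peak_cost - off_peak_cost:.2f}")  # $2.10
```

The exact numbers don't matter; the point is that for loads you can freely time-shift, the saving scales directly with the on-peak/off-peak rate gap.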
So you want something like what http://eltechs.com/ is doing (except they are doing it for ARM servers and almost certainly dealing with Linux, not Windows). I am sure there are others; that was just one of the first Google turned up.
I believe most of the SGI machines had all of the video card mapped in the CPU memory space so everything could access everything else.
Of course it used to be video cards had their memory mapped into the memory space of the PC, although there wasn't as much acceleration then, so allowing the CPU a fast way to write updates to the video card made sense. Once we got 3D chips with hundreds of MB of ram, the 32bit memory space started getting a bit tight and they stopped doing that for all the memory. No reason a 64bit machine couldn't allow everything to be mapped into one memory space though, unless you want to support running 32bit software still.
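A quick back-of-the-envelope on why mapping all of a modern card's VRAM squeezes a 32-bit address space (the 512 MB figure is just an illustrative card size):

```python
GiB = 2**30
MiB = 2**20

address_space = 2**32      # a 32-bit machine: 4 GiB total
vram_aperture = 512 * MiB  # e.g. a card with 512 MB fully mapped

fraction = vram_aperture / address_space
print(f"{fraction:.1%} of the 32-bit space")  # 12.5%

remaining = (address_space - vram_aperture) / GiB
print(f"{remaining:.1f} GiB left for RAM and everything else")  # 3.5 GiB
```

With a 64-bit address space (2^64 bytes) the same aperture is a rounding error, which is why fully mapping everything becomes practical again.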
Wine won't help. After all the fact it is NOT emulation means it won't do anything to help run x86 instructions on an arm. So unless crysis is recompiled for arm, you won't have any hope of running that.
So what ebay should have done (and obviously did not), is to simply hide the dormant accounts, then wait and see if anything broke, and if it did, unhide them again and fix the bug that made it select the wrong accounts. Simply deleting data is not a good idea.
Actually secureboot is all about virus protection (and probably a bit about Microsoft making it harder for other OSs than Windows 8 to run on a machine).
it is not at all about pirating and does nothing to prevent it.
How can you say that a company that scores 17 one year, but doesn't make the top 20 in the other six years, has an average of 17? It is clearly much higher than that, given that in six of the seven years it ranked worse than 20th.
By their averaging methods a company that got a 1 in one year and NR in the other 6 years would have an average of 1, while a company that got a 2 in all seven years has an average of 2. Clearly that's wrong.
I am not good at statistics, but I am not as bad as they are.
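The flaw is easy to demonstrate. If "NR" (not ranked) years are simply skipped, a company that scraped onto the list once beats one that ranked well every single year. A fairer sketch treats NR as worse than last place; the value 21 below is purely an assumption for a top-20 list.

```python
def naive_average(ranks):
    """Skip NR (None) years entirely -- the method being criticised."""
    scored = [r for r in ranks if r is not None]
    return sum(scored) / len(scored)

def penalised_average(ranks, nr_rank=21):
    """Treat NR (None) as worse than 20th place (assumed rank 21)."""
    return sum(nr_rank if r is None else r for r in ranks) / len(ranks)

one_hit = [17, None, None, None, None, None, None]  # 17th once, NR six times
steady = [2, 2, 2, 2, 2, 2, 2]                      # 2nd every year

print(naive_average(one_hit))      # 17.0 -- "beats" the steady company? Nonsense
print(naive_average(steady))       # 2.0
print(penalised_average(one_hit))  # about 20.4 -- now clearly worse
```

Any penalty value worse than 20 gives the sensible ordering; skipping NR years is what produces the absurd one.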
This sounds exactly like the PCS that my Prius V (which I got in the summer) already has, and the videos about the Prius V PCS on Toyota's YouTube channel from January certainly sound a lot like this. Did they mean to announce this last November, maybe?
So I am confused.
The ITS stuff sounds new (and clearly isn't available), but the rest doesn't.
Everything I have seen so far shows end users do NOT like Windows 8's new interface. It is confusing and unproductive and very unintuitive.
I think people were a bit too harsh on Vista, but Windows 8 is getting exactly what it deserves so far.
I hate Android. It ruined what was finally becoming an interesting cell phone market with openMoko and similar projects in the works. Fully open cell phones. Now they are all gone, because google came along claiming to be an open source phone system, while being no such thing.
Also I don't want anything to do with java, which seems to be about the only way to do applications on Android. iphone is much better there, but the locked down policies of Apple ruin that one.
No wonder I am sticking with a plain old feature phone for now. If I can't add applications to it myself, then I don't need a smart phone at all.
I think the new UI is ugly. It reminds me of lotus notes from many years ago (which also had giant solid colour boxes and was an atrocious user interface). And if you don't have a touch device (and really how many of those have you seen around on a desktop), then the interface is just plain clumsy. The simple test the register posted today quite accurately represents exactly what I would expect to see happen. It is a confusing mess of an interface and clearly not well thought out. The primary interface MUST be designed for the primary input devices of the majority of users, which is a mouse and keyboard. Microsoft can add new stuff to make touch interfaces easier to use, but they have to keep the majority of users happy, and windows 8 won't do that. The tabletpc features in vista (and to some extent XP tablet version) worked quite well and in no way interfered with using it without the stylus. That worked.
Oh you just drag the metro app off the bottom of the screen to close it. Obviously. :)
And that is part of why I don't want to deal with any friend or family member moving to Windows 8. It is so completely impossible to know what to do by looking at it.
Well no. There is a difference.
In the past every new user interface thing Microsoft did to windows was pretty much an improvement. Not sure about ribbons yet, but they are not awful.
The Windows 8 changes on the other hand are awful. It is unusable. Everything has become confusing, difficult to use, and much more effort to do what you are used to doing.
Windows Vista was: Nice interface, shame about the performance issues.
Windows 8 is: Nice performance, shame about the interface issues.
That's a huge problem. I hope Windows 7 stays for sale for a LONG time. I hope Windows 8 last no longer than Vista did when the replacement comes out.
So what? If you want single thread performance, a power7 is hard to beat. If you want that at the same time as power efficient, well guess what, that isn't going to happen any time soon. If you want a super computer cluster that is power efficient, a power4 is a much better choice. Actually the power A2 isn't bad either.
IBM seems to understand that it isn't one size fits all.
I can barely remember the last time an interesting new sparc came out. Hopefully some time soon a new one will (but it won't be from Oracle that's for sure). It's a lovely instruction set that just happens to be highly neglected by its makers.
Have you seen the clock speeds IBM runs cores at? Nothing wrong with having 8 cores with 4 threads each when you have CPUs clocked at 5GHz. Try getting intel to do that with their designs. Never mind Oracle.
So for single thread performance IBM is currently very hard to beat.
The P54C is a P5 core. The Pentium II was a P6 core. Those are very very different. The P6 had out of order execution, and was the first intel chip to translate x86 instructions into micro ops that were then executed on a more risc style core. The P54C is a plain old native x86 design where instructions are executed in order.
For building a chip with lots of cores that run predictable code, the P54C design is not a bad choice. The P6 core is much more complex and uses a lot more transistors, especially for the instruction translation system and handling out of order execution.
So yes the P54C is close in time to the Pentium II, but about as far apart in design as two intel cores could be.
You ruin bacon by putting parmesan on it? You are mad. Bacon tastes great. You don't ruin it by putting stale old cheese on it. I don't even want to think about what black pudding would do to it.
Of course the admins don't agree. They have to guess which version of java needs to be installed to run the piece of crap, and try to make sure it doesn't conflict with the version of java needed by another piece of crap they already had to deploy. After all java developers don't ever tell you what environment they developed it for.
The way python wants indentation done these days is just fine. The requirement in python is that your indentation is consistent. That is all. Originally python had a mandatory indentation style, but that is no longer true. And if you don't think consistent indentation of your code blocks should be mandatory, then you shouldn't be programming at all. Pick anything you want for indentation, but always use that within a given file.
If you want scary use of whitespace as syntax, have a look at perl. That's scary. <$foo> is not the same as <$foo >
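A quick sketch of what Python actually requires: the indentation style is up to you, but within a block it must be consistent, and the compiler itself rejects anything that isn't. `compile()` shows this without having to write files:

```python
consistent = (
    "if True:\n"
    "    x = 1\n"
    "    y = 2\n"
)
inconsistent = (
    "if True:\n"
    "    x = 1\n"
    "  y = 2\n"  # dedents to a level that never existed
)

compile(consistent, "<demo>", "exec")  # accepted without complaint

try:
    compile(inconsistent, "<demo>", "exec")
except IndentationError as e:
    print("rejected:", e.msg)
```

Whether you pick two spaces, four spaces, or tabs, the rule is only that a block's lines line up with each other.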
Hmm, so nic.ca and nic.mil don't exist, and nic.edu and nic.org are something else entirely (and not registrar info). So does anyone actually think nic.* is supposed to be a useful domain name?
I thought microsoft specifically has said they deleted the start menu code in windows 8 so there is no way of undoing their stupid UI change.
Everything other than the new UI is very nice, but the UI changes are just a deal breaker. Hidden things are bad UI design on a desktop, and those are now required knowledge to use Windows 8.
But the current compatibility pack only gives you office 2007 support, which would be Office Open XML as Microsoft originally intended it which is ECMA 376 version 1. The ISO standard is ECMA 376 version 2, which is what office 2013 will now finally support writing (2010 can read them, but not write them). The ISO standards process did manage to fix some of Microsoft's stupidities even while Microsoft was trying to ram it through, and of course this meant that there was no actual support for the ISO standard in any Microsoft product until office 2010 which could read them, and now office 2013 which can supposedly write them.
So unless they update the compatibility pack for office 2000 and 2003 (and office 2007 for that matter) to allow reading the ISO standard of OOXML, then you are simply lost once 2013 starts writing proper OOXML files (2010 users can still read those though, so that's OK). 2007 and older users will be conveniently left out.
Actually my understanding is that everything Microsoft ever does is little endian, so I don't think they will have any endian mess to deal with. Even Windows NT on powerpc and mips was little endian unlike most OSs run on those systems. Most ARM systems are little endian these days too so windows being little endian on ARM is nothing unusual.
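A quick way to see byte order for yourself; `struct` lets you force either ordering regardless of the host, and `sys.byteorder` reports what the machine you run this on actually uses (which on x86 and most ARM systems will be little):

```python
import struct
import sys

value = 0x01020304

little = struct.pack("<I", value)  # explicit little-endian 32-bit
big = struct.pack(">I", value)     # explicit big-endian 32-bit

print(little.hex())   # 04030201 -- least significant byte first
print(big.hex())      # 01020304 -- most significant byte first
print(sys.byteorder)  # the host's native order
```

Code that always packs and unpacks with an explicit `<` or `>` never has an endian mess to deal with in the first place, which is essentially the position Microsoft is in by being little-endian everywhere.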
Simple: Pixar sells Renderman. They make lots of money doing that. Since one of the features of Renderman that can give you really really nice output is subdivision surfaces and not very many tools support them, then giving away code and data structures and patents to allow other tools to add support for this, will make people better able to use Renderman which means people will be more likely to buy Renderman to take advantage of its ability to render subdivision surfaces rather than some other rendering tool.
After all if you can work faster and better and create better detail using tools that use subdivision surfaces, then you would want to do that right? Then after you choose to do so, Renderman becomes the obvious tool to use for actually rendering your output in the end, so you give your money to Pixar.
I never did consider plasma an option. Of course given I am only now upgrading from a CRT, I am personally skipping straight to front projection DLP. I will admit that is never going to be a huge share of the market for TV though. OLED would be nice if it gets cheaper, but LED backlit LCD is quite nice. Plasma can just go the way of laser disc as far as I am concerned. It won't be missed.
I looked at the GitHub for Windows tool. I was highly disappointed to find that it is ONLY a GitHub tool rather than a nice Git tool for Windows. What a sad waste of time. Why would you want to use a tool that only works with one site? Who wants that kind of vendor lock-in? OK, I suppose they are targeting Windows users, but still. Very disappointing.
Strangely I find bazaar to be the most incomprehensible command line I have seen in a long time. Git makes pretty good sense in most cases.
Bazaar also does far less than git. If you don't know that, then you clearly haven't used it.
Last I checked, June was the 6th month of the year, and the 3rd (and hence last) month of the second quarter, so wouldn't the next quarter be the one going from July to September? Clearly a quarter has not just begun if it is June. In fact a quarter is about to end. Someone needs to check their calendar, and it isn't Intel.
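The quarter arithmetic is simple enough to check:

```python
def quarter(month):
    """Calendar quarter (1-4) for a month number (1-12)."""
    return (month - 1) // 3 + 1

print(quarter(6))  # 2 -- June is in Q2
print(quarter(7))  # 3 -- the next quarter starts in July

# June is the 3rd and final month of its quarter: 6 is divisible by 3.
assert 6 % 3 == 0
```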
No, not really. Some users always whine about changes, but overall the more technical users have, in my experience, liked the improvements in new versions of Windows. Windows 8, on the other hand, seems almost universally disliked. It is the first time I just can't be bothered to play around with the new version. It is too slow and difficult to work with, and simply annoying. I even liked the interface changes in Vista (the search-to-filter-menus-just-by-typing in most windows was brilliant), even though Vista had other issues. So overall, new Windows versions have improved the UI. Windows 8 hasn't improved it; it has ruined it.
Of course I didn't miss Program Manager at all, since I never liked it in the first place. Norton Desktop for Windows was an excellent product that made Windows 3.1 actually usable in a way Program Manager never could. Windows 95 and NT4 onwards never had a need for such a thing. Windows 8, on the other hand, is going to need 3rd party work to make the basic task bar work again. The Metro start screen is simply unacceptable to a keyboard and mouse user.
Magic screen areas that pop stuff up when you go there are a bad idea. It is already hard enough to explain right-click menus to casual users. If you can't see it, then it does not exist and you can't use it. That means it is impossible for a casual user to find anything in Windows 8 because of its magic screen corners. It is simply a terrible idea and it will fail very badly.
I always wondered if linux stood a chance against windows on the desktop, given linux was always trying to catch up. Apparently the real threat to windows on the desktop is microsoft, through the fact they are actively trying to destroy it.
In the past when a new version of Windows was in development, you would see the press going "Look at this nifty new thing they are doing". With Windows 8, most press has been "The interface has been broken." You would think Microsoft could take a hint, but they have decided they are willing to destroy the desktop market to attempt to get into the phone and tablet market. I doubt it will work.
And why does metro have to be so bland and ugly looking compared to the pretty stuff done in the past?
Of course you would not actually ever use internaldomain.local but rather internaldomain.anythingelse would you? After all .local is reserved for use by zeroconf and you break all sorts of things if you use .local for your windows domain. Sure microsoft used to have an example in their documentation that used .local, but they changed that years ago and even wrote a domain rename tool to help repair the damage, not that anyone seems to ever get around to fixing this mistake. Instead the mac and linux users and anyone else that has a system that supports zeroconf just have to suffer.