Ugh!!! So much for new management
Surface is an awesome product. Xbox is interesting even when consoles are lame. Stephen Elop should never be in charge of anything. He's the biggest joke in the industry. How can you expect that guy to do anything cool? His version of cool is a bright yellow telephone which he presents while wearing a tie and saying "the poor people in the third world will love this"
Re: Hedge fund restructuring
Odd... As a Cisco guy (not employee), I've seen virtualization increase sales and profits per port substantially. I guess those virtual networks don't come with a replacement for virtual cabling.
Buy a foreign company... Progressively move the technology and products to your main location, then fire everyone who made it to begin with.
Are we forgetting the next step? Disgruntled ex-employees use this as an opportunity to develop a better product and a migration tool, and leave you supporting only your die-hard customers?
Even if I agree...
Are you suggesting that the developers are terrorists too or that they should be punished for guilt by proximity?
P.S. Not that I agree
Worked well for Rome
By giving patricians a much higher voting weight than plebeians, it worked great for Rome.
Oh... Maybe someone should look up Milo and Clodius; pretty sure that wouldn't happen again.
I am a little confused by your response... I was there with you until you made the comment about reading speed and then discounted the benefits of higher speed because of the 4k market.
First of all, at 270Mb/s, SD looks absolutely frigging awesome. Raw SDI signaling of SD video is truly amazing. I used to work a lot with film masters and often watched them in original quality, and it was insanely better. Higher frame rate is better. I recently saw raw 4k footage (which is heavily compressed, since no one records at 12Gb/s) and raw 2k footage, both on 65" screens capable of 4k, from 2 meters. Guess what? No difference... well, until I looked at test signals showing line art and hatch patterns.
4k is just not interesting outside of movie theaters.
So... the 10Gb/s business case. To start with... because we want to know if we can. For the moment, 4k is the only use case. At the rate technology is evolving and the way the world is using it, real-time broadcast becomes less interesting except for sports and emergency information. So we don't even need 10Gb/s for video.
Where can we use it?
- Offsite hard drive storage
Not so interesting, since Google is also building Chrome, which would ideally run the app in the cloud too, so you only need bandwidth for remote-desktop-type transfers.
- Remote gaming
Running high-frame-rate games remotely is laggy, so eliminating latency issues with brute-force bandwidth instead of QoS makes sense. After all, QoS only works if you extend the trust boundary to the client, which would be a nightmare.
- Video conferencing
Works pretty well at 384kb/s in most cases, and at 9Mb/s it's awesome quality, so even DSL should be fine. Most issues now are at the endpoint, not in the network.
Honestly, I have no idea what the business case for 10Gb/s is. I personally don't notice the difference between being on 1Gb/s at home or at a customer with 40Gb/s access. The issue tends to be that the servers I connect to aren't fast enough to handle the load.
P.S. Downloading a film on iTunes from a PC with a 10Gb/s NIC and a 40Gb/s Internet connection rarely gives me more than 3Mb/s.
Some companies can't move
I personally love Windows 8... I like 8.1 a little less, since they made the start screen more moron-friendly (for people hooked on Windows 7). I think Microsoft has developed a truly amazing new system, and for smarter people it's insanely fast and efficient. I've even seen many people switch back because of it.
That said, I was talking with some friends who are upset about the official death of Windows XP because of issues related to their inability to move. Consulting firms wrote applications running to millions of lines of code which are too big to rewrite and can't be recompiled (not sure why). There are also applications which shipped with anti-piracy dongles that can't be installed on anything newer than Windows XP. Some people might say "stupid asses shouldn't have bought software which uses dongles"; the alternative, of course, was not being able to do the job.
I don't see any good technical reasons that someone needs Windows 7 instead of 8. I haven't found any applications which ran on 7 but not 8, but there may be a few.
Oh... Since when has HP ever been shy about shipping 10 times as many products as a sane company could actually support? Look at their networking line-up now... Wow!!! Even their top networking guys don't have a clue what OS is running on their equipment.
Re: He previously complained bitterly about the lack of hand lotion.
All of the parents are Norwegian, but many are "footballers". I learned the hard way, during a father-son game of football when my son was 3, that Norwegians compartmentalize. When I was passed the ball, a normally very passive father smashed me in the chest with his shoulder, laid me out, took the ball and kept going. I was like "Isn't this supposed to be a friendly game?", and the other guys explained that football wasn't a game, it was a sport, and the kids should learn that. I later watched two boys, about eight years old, practicing a football move while their fathers offered tips. The move was how to smash into another player and steal the ball, hitting hard enough to knock the other kid down without getting penalized for it. The fathers were saying "Good job, now try this".
One of my neighbors worked as a prison guard where ABB was held in the lead-up to the trial. He told me the biggest problem was keeping him alive. They had to make sure he didn't kill himself and that no one got access to him.
Most Norwegians believe (rightfully so) that prison is about removing criminals from the public until such time as they have been rehabilitated. They then make an honest attempt to help rehabilitate them and ease them back into society. When they are released, they have programs such as student aid and others to help them reintegrate into the world.
All that being said, I don't expect ABB to make it more than 10 steps from the prison when he is released. One of those footballer fathers is going to be there waiting.
Re: sounds familiar
I can't speak from internal knowledge; however, I have developed software (large scale) on both platforms. I can honestly say from experience that Mac OS (pre-X) APIs were amazingly difficult to develop for in C and C++ compared to Windows (even 16-bit). This was generally because Apple never bothered to add APIs for many tasks, and also because when they did, you often had to spend ages searching through header files for functions, since documentation was terrible at best and missing in most cases.
An example would be the simple task of changing a window title. You'd write code in assembler to wait for the CRT refresh, then alter the memory location where the fixed-length window title was stored before the screen was redrawn. This was the officially unofficial way of doing it, because there was no API to change the title after the window was initially created.
Probably the biggest job involved in transforming NeXTSTEP into a Mac OS was development of the Carbon API, which finally provided full APIs for app development on OS 9 and later OS X. It was insanely difficult because the old Mac OS code was so littered with pre-object-oriented APIs and other legacy garbage. Even the simple concept of a message loop didn't exist in the old OS.
OS X was a nightmare for developers since the good APIs were off limits from a C application and nearly impossible to reach from a C++ application. ObjC could call C which could call C++, but it didn't work well in reverse. This was solved around 10.4 or 10.5, but until that time, companies like Microsoft had to write Carbon applications because otherwise they wouldn't be able to reuse code from their other platforms.
If I were to speculate, Microsoft would probably have Mac versions done before Windows versions because they had 1/10th the features. No COM/OLE, no scripting, no support for apps like Visio, no Publisher, etc... Office for Windows simply had many more features which had to work. Office for Mac was used mostly by individuals, whereas Office for Windows was a business application.
FCoE and Fibre Channel are both disgusting
As a long time operating system developer, protocol developer and most recently networking guy (pays a lot better and you don't have to think), I have to finally call bullshit on the whole iSCSI, FC, FCoE battle.
It's amazing how the networking world has forced us into buying overpriced junk to compensate for underlying issues caused by using a block communication protocol from the 1970s. We're buying all these fancy-schmancy systems for transmitting and receiving SCSI over faster media, forcing network MTUs to be increased, forcing single pathing, forcing insanely short latencies, all because SCSI is a piss-poor network protocol and should be abandoned.
This isn't 1978, it's 2014, and we should stop focusing on fixing this crap and instead design a block protocol which works great over normal networks and even better over reliable Ethernet.
A block protocol needs only a few basic functions:
Seek & Read block(s)
Seek & Write block(s)
In addition, it should be possible to queue reads and queue writes. Blocks shouldn't be fixed-size and shouldn't assume they need to map to physical hardware block sizes. Algorithms implemented by Doug Lea and optimized by others such as Lars Thomas Hansen are ideally suited for scalable block allocation and LUT virtualization.
As a massive bonus, to scale wider, the protocol should have a high-level block-device zoning system as well as an enumeration system, as sketched below.
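To make that concrete, here's a rough sketch of what the wire format could look like. Everything here is hypothetical: the opcodes, field names and layout are mine for illustration, not any existing standard.

```c
#include <stdint.h>

/* Hypothetical wire-level opcodes for a variable-block network
   storage protocol. A sketch only; no existing standard. */
enum blk_op {
    BLK_READ        = 1,  /* seek & read one or more blocks      */
    BLK_WRITE       = 2,  /* seek & write one or more blocks     */
    BLK_QUEUE_READ  = 3,  /* enqueue a read, completion later    */
    BLK_QUEUE_WRITE = 4,  /* enqueue a write, completion later   */
    BLK_ENUMERATE   = 5   /* list exported devices in a zone     */
};

/* One request header. Blocks are variable-length, so the client
   addresses byte extents and the server maps them onto whatever
   physical block size the backing store actually uses. */
struct blk_request {
    uint8_t  op;        /* one of enum blk_op                    */
    uint8_t  flags;
    uint16_t zone_id;   /* high-level zoning, as argued above    */
    uint32_t device_id; /* assigned during enumeration           */
    uint64_t offset;    /* byte offset, not a fixed block number */
    uint64_t length;    /* byte count                            */
    uint64_t tag;       /* matches queued ops to completions     */
} __attribute__((packed));
```

Addressing byte extents rather than block numbers is what frees the protocol from physical sector sizes; the server does the mapping.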
Oddly, the amount of work that's gone into half-assed solutions for hacking the SCSI square peg into the modern storage round hole has been a disaster. We are NOT at the mercy of OS vendors to support alternative boot protocols. We only need to implement remote block device support in the virtualization environment and on a server.
I have experimented with this using QEMU and VirtualBox and found it to be insanely simple to implement. My algorithms are not as well optimized as you would get from the Ph.D.s, but I was able to boot all operating systems with zoning, security, line encryption and more within less than a day of coding. In addition, I saw no reason to be forced into using "Big storage" from vendors like EMC and NetApp.
There needs to be a networking group made up of people who understand networks, how networking people think, and also protocol design and block device technology, in order to replace SCSI, since SCSI is an ancient dog with fleas.
Re: An ARM world would be nice, but...
Would be nice if someone implemented a "fat ELF" (DWARF is already taken) shared library/executable format that could store x86, x64, 32-bit ARM and 64-bit ARM all in a single file. It would actually be child's play. Then just package an ld which can link fat binaries generated by four cross compilers.
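Something along these lines could carry it. Purely illustrative: the magic value, struct names and layout here are made up, loosely in the spirit of Mach-O's fat header:

```c
#include <stdint.h>

/* Hypothetical "fat ELF" container: one header, then one complete
   ELF image per architecture. Names and layout are illustrative. */
#define FATELF_MAGIC 0xFA7E1F00u

struct fatelf_record {
    uint16_t machine;  /* ELF e_machine value: EM_386, EM_X86_64,
                          EM_ARM or EM_AARCH64                    */
    uint16_t reserved;
    uint64_t offset;   /* file offset of the embedded ELF image   */
    uint64_t size;     /* size of that image in bytes             */
};

struct fatelf_header {
    uint32_t magic;        /* FATELF_MAGIC                        */
    uint32_t num_records;  /* 4 for x86/x64/ARM32/ARM64           */
    struct fatelf_record records[]; /* one per architecture       */
};
```

The linker and loader side is as simple as it sounds: scan the records, pick the entry whose e_machine matches the running CPU, and map that image as normal.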
Icahn and corporate governance?
When has he ever done anything in the interest of a company or the people who work at them?
Every time I've ever seen him do anything, it seems his only interest is to make a quick kill no matter the consequences. I hate the Occupy Wall Street bullshit view about 1%ers, but Icahn is hell-bent on giving them grounds. He's a predator, a very, very successful one, but a predator all the same.
Hopefully he'll drive interest in the share, make his quick kill and sell out. But honestly, I can't imagine how his methods aren't illegal as insider trading. He invests in shares, then hypes the shit out of the share, making people want to rush and get in on the kill, and then makes his kill at the peak. This would be no huge deal except that he invests large enough amounts and has so much impact that he is personally timing when the share will rise.
Too bad it's Nokia :/
Honestly, I'd consider it if there was a decent Windows Phone out there.
I've bought 4 Windows phones so far. One for development (it was practically free), one for my son, one for my daughter and one for me.
A little more than a year later, my kids now have iPhone 5Cs and I'm watching a tracking number for my iPhone 5S which I ordered. It's not that I think iPhone is so great, in fact, I'm pretty damn tired of it. I think Windows Phone is a far better platform software wise. But...
1) Most of the apps for Windows Phone are the crap apps you get from competitions like this. It's by far the easiest phone to develop for and yet, no one seems to bother. And if they do, they don't bother doing it well.
2) The Windows Phone store has to be the only store app crappier than the Apple App Store when it was remodeled. It's so piled with crap it's horrible.
3) Microsoft seems to feel there's a good reason to not make apps available outside the U.S.... this has me totally baffled.
4) There are no good GPS programs for Windows Phone... I hear Nokia made one, but I wouldn't buy a phone designed by Nokia no matter how much you paid me. Wow! It comes in yellow and blue! And if you put a case on it... what color is it then? Was anyone thinking?
5) 3rd party phone vendors keep "value adding" the phones. Samsung, LG, HTC all seem to think they should spam the phone with their own branded apps. Just don't... it's cheeky and awful. It feels like buying a Core i7 desktop from Office Depot because you need at least that much power to still start your web browser after all the preinstalled apps start.
Microsoft bought Nokia.... the company that defined the term "What do you mean we don't get it?", and while all the other successful vendors tried to make their phones cool, Nokia put Stephen Elop and sweating Steve Ballmer on stage wearing suits saying "We made this phone for the poor people in India", followed 5 minutes later with "Isn't it awesome!!!!"
Microsoft needs to simply shut down what remains of Nokia, or turn the whole company into an app developer and then tell their Surface team to make something cool.
What are you going on about?
I have twice registered for MAC addresses in the past. Both times I was given 16.7 million addresses because that's simply how it works. Stop making mountains out of molehills.
MAC addresses are made of two parts, the OUI and the vendor-assigned portion. Each part is 24 bits. If you pay the registration fee to the IEEE, they provide one or more OUIs to the registrant based on need. A single OUI is 16.7 million addresses.
Is it inefficient... sure. I was given 33.4 million addresses in total, of which we used about a thousand altogether. But this is not like IP addresses; it's not so easy to run out of them.
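To make the split concrete, a tiny sketch (the example address is made up):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* A MAC address is 48 bits: the upper 24 are the IEEE-assigned
       OUI, the lower 24 are the vendor-assigned serial, hence
       2^24 = 16,777,216 addresses per OUI. */
    uint64_t mac = 0x001B63A1B2C3ULL;             /* made-up example */
    uint32_t oui    = (uint32_t)(mac >> 24);      /* upper 24 bits   */
    uint32_t serial = (uint32_t)(mac & 0xFFFFFF); /* lower 24 bits   */
    printf("OUI %06X, vendor part %06X (%u addresses per OUI)\n",
           oui, serial, 1u << 24);
    return 0;
}
```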
Why does it matter?
I always liked "Happy Holidays" since it covers Christmas and New Year's in one shot. Didn't realize logic and laziness are an issue for you guys. :)
Looks good, costs a fortune, requires specialists?
If I remember the VW Bug, it ran forever, anyone could fix it, it cost next to nothing to own, and with a Porsche engine it hauled ass.
The Bugatti, though I have no personal experience with one, costs a fortune, requires extremely expensive specialist mechanics, requires booking appointments weeks ahead for service, and most parts are not available after a few years and have to be special-ordered or machined.
Was this the point they were making?
Re: Now consider...
What percentage could actually read and understand a Donald Knuth book?
Legacy 16-bit support?
Uhhh... Since when did Intel put 16-bit support into the core again?
Also, ARM has its fair share of legacy crap as well.
Also, to be fair, ARM will work just fine in the server. Whether there is any power to be saved is a different question. Intel hasn't been sitting around just letting ARM catch up. While ARM has been getting faster, Intel has been lowering power consumption... Hell, my Surface Pro 2 gets 7 hours on a charge and it's fast enough to emulate 200 Cisco routers.
I think you'll find the greatest advantage of ARM is the ability to also run big-endian. There are billions of cycles to be saved by using big endian. Most compilers don't optimize endian translation. Most Internet apps perform a massive number of operations in big endian. AVX is a bit of a mess in endian-related tasks too.
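To put a shape on the translation cost: this is the swap a little-endian core pays on every wire field (typically one BSWAP or MOVBE per field on x86), and which a big-endian machine skips entirely. Standard POSIX htons/ntohs, nothing exotic:

```c
#include <arpa/inet.h>  /* htons/ntohs: host <-> big-endian wire order */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Network protocols are big-endian on the wire. On little-endian
       x86, every header field read costs a byte swap; a big-endian
       core just loads the bytes as-is and these calls are no-ops. */
    uint16_t wire_len = htons(1500);     /* field as it sits in a packet */
    uint16_t host_len = ntohs(wire_len); /* swap back for local use      */
    printf("total length: %u\n", host_len);
    return 0;
}
```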
Intel is also closer in reality to performing 16x16 matrix transpositions than ARM, which matters for large-scale video and image processing (a very, very common task for companies like Facebook and Google). If Intel implemented an AVX instruction set able to operate on columns as well as rows, ARM would have a really long way to go to catch up on power vs. performance.
I just checked their annual revenues and it seems that for the past 3 years at least, they have made a $20 billion net profit.
At which point was Microsoft in trouble and needing saving?
Oh... you're talking about the share price, which really has nothing to do with how well the company is doing?
The start menu is for loser babies. I like the new interface and would prefer that we don't get the damn thing back :(
Re: Uh ... computer says no.
Last time I programmed for CoreAudio on Mac (a long while back), the audio driver had a fixed sampling rate of 48kHz and the audio card's crystal had an interesting drift.
That would be 2.4 samples per bit. This is certainly suitable for sampling a sine wave, and the .4 makes it likely that you wouldn't even have to be in phase. Of course, you'd need to run some form of digital signal processing to reproduce the peaks. You'd need additional DSP high-pass filters to extract any actual data from the signal. We'd probably need some additional time to make it work so that a PLL could kick in. Of course, there are other modulation methods for transmitting data over sound waves, but they're going to be SLOW!!!!
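Spelling the arithmetic out (assuming a roughly 20kHz carrier, about the best a consumer card could hope to pass, which is where the 2.4 figure comes from):

```c
#include <stdio.h>

int main(void) {
    /* A 48 kHz converter represents nothing above the Nyquist
       limit of fs/2 = 24 kHz. A 20 kHz near-ultrasonic carrier
       leaves 48000/20000 = 2.4 samples per carrier cycle, so
       recovering bits takes DSP (filters, a PLL), not magic. */
    double fs = 48000.0;  /* card's fixed sampling rate */
    double fc = 20000.0;  /* assumed carrier frequency  */
    printf("Nyquist limit: %.0f Hz\n", fs / 2.0);
    printf("Samples per carrier cycle: %.1f\n", fs / fc);
    return 0;
}
```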
Now, in order to make this work, you'd have to have code running at all times on all audio cards and/or drivers and/or OS kernel audio implementations and/or BIOSes, running what I'd imagine would need to be a 20-point filter, for everything to work.
I just love this nonsense.
Oh.... but you said square waves... that makes it more realistic.
Can someone please get Zack Brown from the Linux kernel over here. He needs to make some comments to bitch slap some people around.
Re: Uh ... computer says no.
Well, to be fair, I do have a microphone connected to my video workstation which does actually have a 100-22,000Hz frequency response range. I've tested it on a scope as well.
What I love is the suggestion that there would be some special code containing a filter to extract ultrasonic data from an "ultrasonic signal". I'm pretty damn sure that Nyquist would have a blast with this. Next we'll hear there's a DSP PLL to compensate for sampling rate issues on these sound cards. :)
Re: Just too possible!
I heard from a respected homeopathic doctor that drinking water with the essence of gold will change your DNA to make you appear to be a direct descendant of King Midas himself.
Respected security researcher ... that's too damn good. :) I love this stuff.
William, do you drive around in a van with your name and photo on the side and a nifty slogan like "PC Problems? Call Dr. Data!"?
Thanks, I needed a great laugh.
Damn it, that was my line!
Re: I call bullshit
Come on now... next you'll tell me the tooth fairy, Santa Claus and intelligent business grads are all fake too.
Get real :)
No it's not
No, it's not technically plausible.
Next dumb comment please?
How about giving people an excuse to upgrade?
I know more than a few people who still use iPhone 4 because they're waiting for something new.
I bought two 128GB Surface Pros and a 64GB Surface. I didn't even notice the price; it wasn't important. I understood when buying them that if one broke, it was a goner.
I think you need to see who the target audience of a machine like this is. It's a well-engineered machine which looks awesome, weighs very little, has a replaceable keyboard, has an awesome screen, has good battery life relative to the specs and size, and is really versatile in general. It has made my life amazingly better. I tossed my MacBook Air 11", my 3rd-gen iPad and my Samsung Series 7 Slate, because now I have one machine which does what I needed three machines for earlier. If it cost $3000 a machine and wasn't repairable, who would care?
I already ordered a 512GB Surface Pro 2. Can't wait to get it. Better battery and more storage... it's like Christmas.... in fact, it'll probably be Christmas.
I guess some of us prefer to pay a bit extra for something that improves our lives.
An alternative hack... but in the spirit
If you take 10,000 files (or fewer; I'd need a proper sample set to work with) and make them sequential patterns, then, given that you hold the private key x and intend to recover y while in possession of the public values g^x and g^y, you encrypt the large data set using x and g^xy and factor characteristics of the common exponent of the logarithms... I'm not conveying this right. I see it mentally, but am not good at wording it. I read part of the paper you linked, which takes a similar approach and might actually even shorten the remaining brute-force attack.
Using my method, you construct a tree of common traits of possible key values, based on the fact that you're actually in possession of a single private key and both public keys. It's something I came up with when Diffie identified another weakness in the keys.
The main idea is that the Diffie-Hellman Problem is called a "hard problem", not an "impossible problem". We already have more information available, if we have the client's private key, than the algorithm accounts for. We also have the ability to encode known sequential or patterned data sets. This means we should be able to attack the algorithm by identifying common traits of the cipher when comparing the algorithm, the data sets and the outputs produced. This of course would be infeasible without the private key used for encoding.
I've always had issues coping with the DHP when the encrypting private key is included in the algorithm. It should be theoretically possible to reverse much of it: unless you specifically drop data, making it useless to begin with, you should be able to work backwards through it.
I'm guessing someone smarter than I am can probably attack more of it algorithmically; I have major limitations in that field, but I am pretty damn good at factoring based on producing tweaked data sets to build search trees or sets to brute-force.
Let's face it, there's a reason we cycle 3072-bit keys: they should be recoverable by someone somewhere as their sample sets grow. In fact, Diffie makes direct reference to this in the original paper and later articles. We're simply expanding the known sample set and exploiting the inherent weaknesses.
If they're using DH (likely) and they're using the same keypairs to encrypt and decrypt all the files: pause the machine, back up, copy a crapload of small Word files to the machine and let it run its course. Once you have enough sample data, with both source and scrambled versions, the local keypair and the remote public key, tree-search the key bits and factor down to a brute-forceable length. Then GPU-farm the remaining bits of the missing private key. Then decrypt.
What's the issue?
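For anyone who hasn't got the setup straight, here is textbook Diffie-Hellman with toy-sized numbers. It only shows which values the attacker in the scenario above actually holds (both public values plus one private exponent); it does not implement the speculative attack itself:

```c
#include <stdint.h>
#include <stdio.h>

/* Square-and-multiply modular exponentiation. Parameters below are
   toy-sized so the arithmetic fits in uint64_t; real DH uses
   2048-bit-plus groups. */
static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t r = 1;
    base %= mod;
    while (exp) {
        if (exp & 1) r = (r * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return r;
}

int main(void) {
    uint64_t p = 2147483647, g = 5; /* public group parameters   */
    uint64_t x = 123456;            /* one side's private key    */
    uint64_t y = 654321;            /* other side's private key  */
    uint64_t gx = modpow(g, x, p);  /* public value g^x mod p    */
    uint64_t gy = modpow(g, y, p);  /* public value g^y mod p    */
    /* Shared secret g^xy: recovering it from (g, p, g^x, g^y)
       alone is the "hard" Diffie-Hellman problem discussed above.
       With x in hand, it's a single modpow. */
    printf("shared: %llu == %llu\n",
           (unsigned long long)modpow(gy, x, p),
           (unsigned long long)modpow(gx, y, p));
    return 0;
}
```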
Uhhh... and even now he doesn't get it.
There is no room for a handset maker in the smart phone market. Never was.
Samsung is a strange bird because, so far as I know, they have absolute control over every component in the telephone. They make the flash, RAM, CPU, batteries, screens, etc... This means that, unlike other handset sellers, they can make a huge profit per handset. But even then, Samsung is pretty much THE Android phone. Even though Google bought out Motorola's phone division, it seems that Google is happy letting Samsung make the initial profit from the phone.
Apple sells their phone as an accessory to their more profitable streams of apps and media. They are making fantastic amounts of money off of add-ons. I'd love to find out what percentage of Mac and accessory sales comes just from selling PCs to developers of iPhone and iPad apps. What about the tons of money made from licensing chargers and accessories?
Google makes money from search, the store and more. Not the phone.
So, how does a handset maker with a handset maker mindset compete against companies who just don't care how many phones you buy as long as you buy from the stores?
The only place for a handset maker today is in selling feature phones to the non-smart phone people. Things like tossers and plan freebies.
Could Nokia have developed a competing platform to take on Apple and Google? I highly doubt it. They always come out strong and get cold feet if it doesn't show returns on day one. They have never had any proper long term foresight. They always made it about the phone, not the platform.
The Windows Phone cock-up is probably more about Nokia's inability to establish an image. Their "coolest" advertisements seem to focus on a demographic of people who watch stock tickers. Even their phone launches have the two fat Steves on stage wearing ties. Then, when they should be targeting the teenagers, young adults and American welfare/food-stamp recipients who have iPhones, they talk about selling the budget model to developing nations.
It's amazing how they never once understood that it simply wasn't about the phone anymore.
Re: Fashion company to fashion company... duh..
So, you're saying that :
The Burberry woman understands fashion, i.e. branding: make it look special, milk the customer on the accessories, get them to come back for more and more.
The Dixons guy doesn't. He figures that all that matters is what happens in his part of the store, because that's where his results are measured, so who cares if you buy all the add-ons; they don't show up on his quarterly reports.
Yeh... tech vs. fashion :)
Fashion company to fashion company... duh..
Dixons guy was crap because he was a tech guy. Apple is a fashion company first, tech second. She's probably better suited for the job than Tim Cook. She probably gets it.
Might have some valid points
I don't think the paper says straight out "open source is bad". It seems to focus on the idea that private companies may already be able to support open source better than running your own division to do so.
It is questionable whether the DoD can in fact run any large-scale software project successfully internally; I don't care whether it's based on open source or not. I am 100% sure they can do it better than Lockheed, Boeing, Honeywell or other contractors. Oracle may well be better suited to manage projects than the DoD. I think it's a toss-up, really.
Whatever the case, the DoD would at least be able to handle revision control and have a staff auditing incoming software changes, which up to this point they probably didn't. So even if it's a room full of DoD employees maintaining a distribution based on CentOS or Ubuntu and only letting in changes which are verified at a code level, it could be a good thing.
Now the question is, where would the military get the coders who are skilled enough to audit as such?
This is such a dumb ass method
I developed a watermarking algorithm two years ago which survived re-encoding, functioned over the broadcast network, and actually made it possible to geographically locate whoever leaked a stream, from broadcast down to the local exchange. Best thing is, the viewers in the test group we ran it against couldn't tell the difference between the original sources and the watermarked video. We re-encoded the files through up to 10 generations and could still always identify the markings. It even survived HD-to-SD conversion and back. Even better... it survived QCIF rescaling.
This design was stupid, since it's so easy to bypass. You don't need to compare against the original; what you need instead is to confuse the detection software. How? Get two copies of the same film from two different accounts, compare them, find the differences, and obfuscate them further.
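In its very simplest form, that comparison attack is just a diff-and-blur across two decoded copies. A sketch, assuming raw 8-bit greyscale frames of equal size; a real attempt would have to decode, align frames and do something less crude than averaging:

```c
#include <stdint.h>
#include <stdio.h>

/* Naive watermark scrub: where two copies of the "same" frame
   disagree, the difference is probably the per-account mark, so
   replace it with the average of the two values. */
static void scrub(const uint8_t *a, const uint8_t *b, uint8_t *out, int n) {
    for (int i = 0; i < n; i++)
        out[i] = (a[i] == b[i]) ? a[i] : (uint8_t)((a[i] + b[i]) / 2);
}

int main(void) {
    uint8_t copy1[] = {10, 20, 30, 40}; /* frame from account A  */
    uint8_t copy2[] = {10, 22, 30, 44}; /* same frame, account B */
    uint8_t clean[4];
    scrub(copy1, copy2, clean, 4);
    for (int i = 0; i < 4; i++) printf("%u ", clean[i]); /* 10 21 30 42 */
    printf("\n");
    return 0;
}
```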
This is not rocket science... It's just math. Could my system be bypassed? Sure... but the way I marked the video made it almost impossible to do so without understanding the pattern of picture alterations I made. So even if you started mixing from multiple different sources, each source it was mixed from would still be identifiable.
I am a huge fan of watermarking. I hate DRM. I like to be able to get content and use it however I'd like. I have no problem with people sharing videos with their friends and family. It's when people rent or borrow a movie and then put it on The Pirate Bay that I take issue.
That said... the Pirate Bay has evolved into something we all need to some extent. It's a video archive. It's a place where almost all video and music can be found. I love that. Too bad there's no good official source for the material.
Yes they did
I met many people who loved their BlackBerrys and wouldn't have anything else. They were sure that new BlackBerrys would catch on.
On the other hand, they also wear suits... and actually think that's cool
Re: I always viewed the stock market as "intelligent high stakes gambling"
I disagree... an intelligent stock market investor would identify means of altering the value of a share and not gamble at all. I know it's supposed to be illegal, but that's only an issue if you get caught doing it.
Sad to hear about your confidence problems
It seems strange that you wouldn't take a great deal more pride in being a female who "beats the odds", as it seems. My wife spent the past 15 years on the service desk at a major national newspaper here in Norway, and how she dressed never came into question. She was confident in what she did, she solved problems, and people treated her as well (or as poorly) as they treated her male peers.
You are obviously in a really bad place if your gender plays a role in your career. This should only be an issue where being able to urinate standing up or to give birth is relevant. A competent person is a competent person, and it has nothing to do with gender. When I was in my early 20s, I dressed in three-piece suits in order to convince people to take me seriously at work. That was really sad. Now, I wear jeans and a t-shirt to work every day. My words and actions speak for me. I could easily (as a man) show up for work dressed as a woman and people would depend on me precisely as they do now. In fact, if anything, it would make people see me as even more confident in my ability to accomplish tasks.
People who think the way they dress matters at work are precisely the people who aren't that good at what they do. If you're a problem solver, your value is based on your ability to solve problems. If you blow sunshine up your boss's ass, gender doesn't matter; you'll get those raises and gain their misguided respect. I hate that I spent years having to read about sports to convince idiots who rate your value based on your tie that I'm of any value.
Now, at 38, I've signed a contract with an IT firm which gives me 5% of the company, salary plus bonuses, and the money I need to do my work... without kissing ass or talking sports. It came from years of 18-hour work days, constant studying, a huge number of certifications and more. My work paid off. Gender had absolutely nothing to do with it. My knowledge, connections, skills and friendliness were all that mattered.
Want to know something scary? If you showed up for an IT-related job dressed like a man, I'd see you as lacking confidence in yourself. I'd see you as a person who puts more value on appearance than on productivity.
Wow!! Norton! There's a name from the past!
First... Facebook is pretty much evil. We all use it because the only real alternative is Google+, which is from the company that brings us 8.8.8.8 to track all our DNS queries. So, when you basically give 99% of your life to a company which is competing with Google to see who can be Orwell's Big Brother first, you can't really bitch when they try to get that last 1%.
Second... Norton! Wow! I remember those guys. They're the ones who took over from Microsoft for a while with regards to making computers slower so you'd have to upgrade, right? I mean, when MS started doing stupid things like making Windows use less RAM and CPU with each version, Norton Antivirus used more and more so Intel would sell more chips :)
Windows 8 is only for efficient people
Let's be honest, Windows 8 is only for intelligent and efficient people. Windows 8 is perfect for people who prefer to press 4 keys on a keyboard to start an application, as opposed to people who want to move a mouse all over the damn place. It's kinda like how some people will install emacs or vim on Windows to have editors which are really efficient, as opposed to ones which are dog slow and sloppy. Windows 8 made it possible to move at the speed of light compared to just about any other UI. By integrating with a search engine which works very well, documents and applications are far more readily accessible than before.
I understand why many of you are confused, though. It's a slow-witted thing. You need to go click, click, click, etc... to do anything. It's OK, Microsoft has learned their lesson; they'll dumb down Windows 8.1 for you.
Good!!! Nokia is killing Windows Phone.
Let's start with Nokia. THEY ARE A PHONE COMPANY. They make their money selling phone hardware. Once the phone is sold, if they want to make more money from a customer, they have to sell another phone. Nokia also never understood smart phones. Buying Qt was the smartest move they ever made, and then they screwed it up. Nokia has a long reputation of selling phones and saying screw you to their customers. For some stupid reason, they think making bunches of different phones is a good idea. That worked before. Now, we want phones that get love from their makers afterwards. People complain about iPhone software updates, but in reality, people know they're getting love when they get updates.
Microsoft needs to make their own damn phone and do it right. Spend a year and focus on industrial design. Then, whatever you do, don't let some loser like Stephen Elop, looking fat and sweaty in a suit and tie, get caught calling it cool. Steve Ballmer should never be seen in public with one. Give him a Nokia. Hire someone to make it cool. Not a pop star, but an artist with a gift for industrial design. People don't realize that Jony Ive is far more important than most think. He brings fashion to the devices.
I love it when a company tries to make a new version of an obsolete technology. Last night I was helping to spec out a new FCoE network with four 40GbE connections to each blade. Seems kinda funny that anyone would want to invest heavily in FC when it's so insanely expensive and incredibly slow.
Good to hear
I was burned more than a few times using AMD chips over the past 25 years. I have to admit, I haven't bothered with them in a while. This is mainly because they're great at consumer and great at data center, but somewhere in between they're lacking when it comes to supporting small-scale developers. Their development tools have always been lacking, and their pathetic support for GPU code developers really chased me away. That said, I am happy to hear that a real CPU vendor is taking on ARM.
There are a few major problems I see here :
- 10GbE support on die. This is just amazing, but to be honest, I don't see any mention of FCoE/DCB here. To make the 10GbE controller useful, it needs to be more than an Ethernet controller, or it's just a waste of die space. A modern 10GbE controller needs to support priority flow control and enhanced transmission selection; this is a minimum requirement for supporting FCoE. In addition, support for RDMA over Ethernet is a major requirement in today's environment. Add to that VNTag support and you have a controller worth using. Yes, I know only a handful of vendors have those features today: Broadcom, Cisco and, by the looks of it, soon Intel. But two 10GbE controllers with those features are a minimum requirement in a data center network adapter for 2014.
- Virtualization support. ARM has it, but it's borderline crap at the moment. This is really not ARM's fault; they're just noobs in this category of computing and it'll take some time to get it worked out. They should be actively working with VMware, Microsoft, Red Hat and Citrix to make this happen. They should get silicon out there as soon as possible so those companies can get bare-metal hypervisors ready for the ARM processor in the real world.
- Compiler support. ARM/AMD need to stop screwing around with the ARM compiler, which has always been a pain in the ass except when developing boot loaders or code close to the hardware. They need to put together a team of real engineers and take LLVM seriously. Thankfully, the guys over at Apple take ARM seriously, but they're interested in the 32-bit core from ARM. I haven't heard a single rumor of a chip from Apple which will employ 64-bit instructions. In fact, in a smart phone, it's probably almost a disadvantage to waste space on wider word width. What's the point of adding a huge amount of CPU power using 64-bit in a phone when 99% of what you want to accelerate would profit more from custom cores and better GPUs? In server land, it's all about CPU, and therefore neither ARM nor AMD can bank on someone like Apple taking on the optimization of back-end code generation for 64-bit ARM, as there's just no profit in it for them. AMD needs to invest to make this happen.
- Wide buses. Most people simply believe that PCIe lanes come for free. The fact, however, is that the PCIe controller of the CPU takes a tremendous amount of bandwidth and requires direct memory access within the CPU cache as well as system memory, causing major issues. This increases the amount of multiplexing that has to occur within the CPU cache, especially when trying to support cache coherency. This is generally accomplished by placing a much larger burden on the cache logic itself, the effect of which is either to slow down the cache and the speed at which the CPU cores can access it, OR to increase transistor count and power consumption. Want to see all those great benefits of ARM technology go bye-bye? Add more PCIe lanes.
- Microsoft support. Yeh... I know... we all hate Microsoft, but to be fair, unless Microsoft throws some server and data center love at ARM, there's little hope of this being a really useful technology outside of corner cases. Sure, you can do more web serving. You can maybe run some monster apps like Hadoop, but in the end, companies run on Windows Server, Exchange and others. I doubt there's a whole lot of processor-specific code in Exchange, but there's bound to be some. In addition, for developer workstations, there should be a version of Windows which runs on ARM with desktop support and Visual Studio, for example. Most developers really don't enjoy debugging their apps remotely; there's something more natural about debugging on the machine you're coding on.
Guys, it could happen... I hope it does, but unless ARM and AMD take the real-world problems seriously, I don't see this being more than a niche market item. And worse, if AMD comes to market with a half-assed solution, it'll become a problem like Surface vs. Surface Pro. People still go to the store and buy computers with either Intel or AMD chips in them. If they get a machine with an AMD chip which turns out to be an ARM one, and they end up running Windows RT or, worse, an Android variant, people will stop looking for AMD systems because it's too confusing.
x86 is definitely not the ideal instruction set, but RISC vs. CISC or VLIW was never really what it was made out to be. There were just as many disadvantages to RISC as there were to CISC. Code size on RISC was huge. Then we ended up with the bastard stepchild of RISC, Thumb, which was somewhere in between.
These days, the instruction set means nothing in reality. It's all about efficiency in the processing itself. It's about things like how the CPU handles cache coherency, how the CPU manages passing code between cores, how to handle multiple ring-0 contexts (effectively making ring 0 the new ring 0.5). It's about handling SLATs. These are all things which matter. Then, of course, what matters is the ability to power down major parts of the chip. This is something which doesn't work well in a single-die environment where 99% of the chip is synthesized from a common VHDL/Verilog code base which doesn't allow for the analog nature needed between units.
Intel's chips use the x86 and x64 instruction sets, but no decent processor today will use them directly when executing code. Now the next generation of Atom is also doing away with x86 and x64 in the core, replacing them with an instruction-set-agnostic architecture. The CPU will instead attempt to recompile the code as it receives it in order to handle tasks out of order. In fact, to a certain extent, the nature of x86 and x64 lends itself better to this design, since RISC groups everything into a single instruction wherever possible. Intel's nature is relatively granular and will be easier to recompile on the way in and manage dependencies for. I can very easily design algorithms in my head for out-of-order execution of x86 instructions, whereas ARM instructions require a second phase altogether to manage the instruction dependencies... though that's not particularly difficult either, it just takes more transistors.
If you also give me a chip with AVX2 instructions, then I'm really happy. AVX2 is just damn sexy in everything regarding mobile phones. It would allow me to vectorize my code and make use of two-in, one-out instructions. If they made a new set which allowed a single instruction for a 16x16 16-bit hardware transpose operation, or an extra flag to access registers vertically instead of horizontally, I'd be in love. At the moment, a 16x16 transpose is the last missing instruction in AVX2, in my opinion.
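For reference, here's the operation being asked for, in plain scalar form; a dedicated instruction or a column-access flag would replace the whole loop nest:

```c
#include <stdint.h>
#include <stdio.h>

/* Scalar 16x16 transpose of 16-bit elements. Vectorized AVX2 versions
   today need long unpack/shuffle sequences to do the same thing. */
static void transpose16x16(const uint16_t in[16][16], uint16_t out[16][16]) {
    for (int r = 0; r < 16; r++)
        for (int c = 0; c < 16; c++)
            out[c][r] = in[r][c];
}

int main(void) {
    uint16_t a[16][16], b[16][16];
    for (int r = 0; r < 16; r++)
        for (int c = 0; c < 16; c++)
            a[r][c] = (uint16_t)(r * 16 + c);
    transpose16x16(a, b);
    printf("b[3][0] = %u (was a[0][3] = %u)\n", b[3][0], a[0][3]);
    return 0;
}
```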
I agree, he was a bit trollish, but HH has been far more active in things like genetic research during most of that period than in Internet matters. This isn't to discredit HH, and frankly his predictions are probably about as sane as anyone else's... well, maybe not John C. Dvorak, who has successfully predicted the exact opposite of everything in the industry for nearly 30 years.
But to be honest, there are some issues here. For example, you can't help but feel that, as what could be considered one of the fathers of ARM, he might be a tiny bit biased. Let's not ignore that all of his computer companies got their asses whipped by companies like Apple, Intel and Microsoft in the long run. ARM is really his only computing legacy I could Google which has survived, and impressively so. So, discounting all the places where his ventures fell on their rears, he did an amazing job in the case of ARM.
I can't help but personally dislike ARM, and it comes from trying to write compilers and assemblers for the platform. I actually found it to be the only platform ever made that I considered less elegant than PIC. It was aggravating as hell, and I wished they could just pick a damn instruction set and stick to it. That said, if Intel loses its crown, I sure as hell hope it's not to ARM, but instead to a company which actually cares about developers and isn't so hackish.
For a sixth generation of computers, I really hope someone creates something new. I felt a great deal of hope for XMOS for a while, but they've pretty much stagnated into boring crap now.
My two cents
Larry should revive the Sun name. Oracle has a worldwide reputation for things that are VERY EXPENSIVE. Sun wasn't cheap, but Sun's name was more closely tied to innovative and powerful. I know that since the buyout, almost none of my customers (the biggest IT companies in Norway) have looked twice at what was once Sun.
Also, Sun did a much much better job on keeping OS documentation available to all users. People even bought Sun machines because the documentation wasn't buried like it is now.
Oh. Solaris? Who's going to use an operating system if they're not even sure what it's called anymore?
Re: With twice the users...
Strange, I bought a Surface and two Surface Pros. My wife and I haven't even charged our iPads in weeks.
Sure... There are a handful of bugs, but less than on my iPad 3.
Only true bummer is that I have to use the stylus from my Samsung Series 7 Slate, because the stylus from Microsoft doesn't work well when held at a slant. But the Samsung pen is absolutely amazing on it.
Almost exactly the reason I won't buy Nokia, HTC or Samsung
It appears that other people may be happy with a $400-$1000 investment in a phone which no one at the company that makes it will care about a few months later. I personally demand that whoever I buy a phone from makes only one or two models per year. That's why I have an iPhone 4 and am anxiously awaiting a Surface Phone.
Re: actually no
I would say this card is an excellent example of way too little, too late. Besides the points you made, let's focus on the fact that this card is of minimal advantage without fabric upgrades for 8Gb/s FC. After all, you buy switches and directors when you build the cluster, so you don't generally have a pile of unused 8Gb/s ports laying around. When you upgrade the HBAs, you will also upgrade the fabric. There's no real advantage in replacing HBAs unless you're upgrading hosts, and maybe directors, as well.
So, if you're going to upgrade, who the heck would even care about pure FC when CNAs are far more cost-effective at this point? I can't even begin to think who the target market for these is.
First... YES, FCoE DOES require special hardware to function properly. This is why we have converged network adapters.
So if we leave aside the fact that the author basically has less than the slightest clue about modern network environments, maybe it's a good article for selling $500 network cards to other people who lack a clue.