Is it better in Chinese?
Their English site is so bad I wouldn't buy something for a buck from them. I'm scared to type my credit card number in.
Snore... the evil empire has fallen. Ballmer (that fool) killed it.
Microsoft is one of the nicer companies out there now. This deal actually sounds like it could be good for Getty. What confuses me is that I couldn't find anything on their site without Google.
Does the underlying OS matter at this time?
It seems more important to build up the tech itself.
Of course, it sounds like major parts of the Apple or Google app stores could be up and running pretty quickly
1) They have awesome head tracking
2) They have awesome 3d scene mapping
3) They have functional development tools
4) Unity is being integrated
5) They have managed to get semi-transparent LCD working
6) They have head-mounted the PC and I didn't read complaints about heat
We can assume
1) Microsoft didn't bet the farm unless their LCD team made guarantees about shaped, full FOV panels
2) Microsoft looks like they're prepared to invest 10-digit sums into it
3) Microsoft is trying to be cool... and people are going for it.
4) Microsoft is one of the biggest 3d game companies in the world... they probably could find someone to make apps
5) It'll be SOOOOOOO weird if a Bill Gates company makes cooler stuff than John Carmack :)
Thumbs up from me on that one BTW :)
I have to admit that you're suggesting we have the same problem elsewhere, but I can't tell whether it's a means of justifying the behavior, as in "We do it with so many other things, why not this too?", or an invitation for me to denounce those things too.
I don't do it with cars, motorbikes, bicycles or generally with technology either. I'm kind of utilitarian. Modes of transportation are for transporting me from point A to point B... if a specific model or mode of transportation provides me greater utility, I'll use that if I can financially justify it... hence my crummy little Piaggio scooter which allows me to avoid traffic.
I use an iPhone 5S because I found the thumbprint reader to provide me utility. In addition, the quality of the apps I use is higher than on other devices. I tried others but found them less practical and of less utility.
I have a bicycle, I don't know if it's a good one, but if I push a pedal, it moves.
I have a Microsoft Surface Pro because I needed a tablet and a PC and it was the right weight, size, performance and screen quality to meet my needs for my job.
I think you made a greater point, which is simply that I don't have one, or maybe I don't understand other people's. My perception of what I personally consider a human weakness is something you seem to understand better than me, or accept better than I do. I see these vain attempts by people to try to be more than they think they are as a sad emptiness which must be compensated for.
This morning I talked with a guy who has a BMW Z4 which I know is causing him physical pain due to its poor ergonomic design (in favor of aesthetics), and I love and respect him in most cases. But he drives that car because it's pretty. I suggested that he should consider keeping that car in the garage as a toy to play with occasionally because it's fun once in a while, and that he should have something more suited for utility and daily use instead.
BTW... I lost a great deal of respect for Audi because the CEO said a few years back that he has no intention of wasting time on things like fuel economy, electric cars or fuel cells, since his cars are about status and power, not about utility. I wish I could find that article; I think it was on The Register.
I don't understand all the hairdresser references. I thought Audis in general were just to try and convince people that you're special. Buying such a vehicle has absolutely no practical benefits whatsoever, unless you're at a race track. They are purely for entertainment and status. I guess they could be fashion as well... a fashionable buggy and station wagon maker :) I'm always tempted to spam those things with wood panel shelf paper.
Point is, it's really a brand of cars that screams "I have nothing special about me, so I pissed away gobs of money on a toy and am hoping to impress people." Don't feel bad, Porsche is the same... but to be fair, Porsche drivers are usually assholes without the car too.
Tesla drivers can swing either way
ARM is already an alternative architecture to x86, and 10 years after first hearing about ARM in the server space, enough companies have invested that something looks like it might start happening. GPU acceleration of most code sounds really nifty, but unless compilers come out which can handle accelerating code as part of the normal compiler chain, its benefits will remain untapped by most. I think database developers more than most are generally quite happy just adding more capacity to compensate for performance lost in favor of massive amounts of clean code.
Sure, I can get PPC Linux and OpenStack... Great! But I can also get that on ARM... So why would I consider adding PPC blades to my infrastructure when, in reality, I can't justify the cost difference to begin with? ARM and x86 are CHEAP and power efficient. I don't even care what CPU I'm using so long as the applications just work and the power bill is small. POWER almost certainly is not cheap. The CPU they're discussing is a massive powerhouse, but I would be absolutely shocked if it ends up with an equal or lower TCO after 3 years.
What do I do when people grow bored of this in a year? What do I do when I can no longer buy new versions of the chip?
UPS is irrelevant... 1.5TB of RAM on a Xeon is a great idea, but here's the problem: with 8x40GbE (which is what I use for my Windows Storage Spaces servers), an aggregate bandwidth of 320Gb/s per server, 1.5TB of RAM can effectively fill at 32GB/s, reaching full capacity in around 50 seconds. With an aggregate write bandwidth to disk of about 2GB/s, in case of a power outage I can't possibly flush in time to survive it.
We need battery-backed RAM with flash stacked on the RAM so it can flush even during an OS failure.
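To put numbers on that worry, here's a quick back-of-envelope check using the figures above (1.5TB of RAM, ~32GB/s effective ingest over the 8x40GbE links, ~2GB/s aggregate flush to disk):

```python
# Back-of-envelope check of the fill-vs-flush numbers above.
ram_bytes = 1.5e12            # 1.5 TB of RAM used as write cache
ingest_bps = 32e9             # ~32 GB/s effective ingest over 8x40GbE
flush_bps = 2e9               # ~2 GB/s aggregate write bandwidth to disk

fill_seconds = ram_bytes / ingest_bps    # time to fill the cache (~47 s)
flush_seconds = ram_bytes / flush_bps    # time to drain it (~750 s)

print(round(fill_seconds), round(flush_seconds))
```

So a full cache takes roughly 12 minutes to drain at disk speed, which is why a UPS alone doesn't save you.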
Their system is crap
I can honestly say that, because a proper deployment of vSphere 6 requires rethinking the entire storage system and arming the blades with hard drives. In addition, it would require using the barely integrated NSX, which their competition sees as a base component, as opposed to VMware, who thinks you should first pay for Enterprise Plus and then pay for basic modern networking features on top of that... pretty much charging you twice (or is it three times) for networking.
If I want the hack and slash VMware way of doing things, I'll probably use KVM as it's pretty much a VMware clone which I can automate using some really great third party tools.
If I want to properly plan an upgrade and have a proper solution, I'll use Microsoft Cloud OS, which is SUBSTANTIALLY cheaper and is a complete IaaS and PaaS solution out of the box. And best of all, it's like $2000 a socket vs. $7500 a socket for VMware... and its management system is MUCH better.
To properly alter a server, you need to :
a) Backup/Snapshot the configuration
b) Write a verification test
c) Write the change script
d) Execute the change
e) Execute the test
f) Execute all the previous tests to ensure that not only does the new change work, but it hasn't broken how the other features of the system work.
g) Roll back the change if any tests fail
h) Store documentation of the change and its results in a change management system (like Git)
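For what it's worth, the workflow above can be sketched as a small automation harness. The snapshot/apply/rollback/test hooks here are hypothetical stand-ins for whatever your actual tooling provides:

```python
# Minimal sketch of the change workflow above. The snapshot, apply,
# rollback and test callables are hypothetical hooks into your tooling.

def run_change(snapshot, apply_change, rollback, tests, log):
    """Apply a change, run the full test suite, roll back on any failure."""
    state = snapshot()                      # a) backup/snapshot the config
    try:
        apply_change()                      # d) execute the change
        # e/f) run the new test plus all previous tests
        failures = [t.__name__ for t in tests if not t()]
        if failures:
            rollback(state)                 # g) roll back on any failure
            log({"status": "rolled_back", "failed": failures})
            return False
        log({"status": "applied", "failed": []})  # h) record the result
        return True
    except Exception as exc:
        rollback(state)                     # roll back on errors too
        log({"status": "rolled_back", "error": str(exc)})
        return False
```

The `log` hook is where step h) would hand the record off to Git or whatever change-management system you use.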
So... if you think this is better handled by a human, you're simply doing it wrong and should in fact be fired. Your company REALLY doesn't need you and you certainly should not be called a Senior.
I don't want to buy switches from a company that has a complex. They always talk about how they're better than vendor A or B... But what's important is that I want to see why they're good... quit the "compensating" thing. Instead of trying to convince me your dick is bigger ... Just whip it out and be done with it.
Juniper is still selling product A and product B... Where are their solutions? How about validated designs? In the data center, if they don't support Cisco blades, and if HP ships their yucky VirtualConnect and Dell has Force10 (good shtuff)... And Cisco makes FEXes for chassis from HP, Lenovo and Dell... Does Juniper even matter in the data center? Why would someone buy their stuff... If you insist on using weird 3rd party networking... Alcatel is actually probably a better choice at 1/4 the price.
Windows Scale Out File Server in the same price category with Storage QoS enabled?
How does it hold up against an OpenStack Swift solution?
I guess if you're stuck using VMware, it's probably a good solution... maybe.
These guys are so fixed on comparing the slow to the slow.
Hey!!! We implemented a storage technology from 1978 to be able to emulate an LSI SAS controller or BusLogic controller with no acceleration, across a network topology we specifically designed to handle an archaic, nearly 40-year-old protocol which was broken... but less broken than other things were then, and which hasn't had its underlying problems fixed up until now.
So what they're saying is: "Now that we're supporting the most legacy crap protocol on the market, which will be a core business for us, you can use us to move away from legacy!"
Are they seriously trying to make a really fast 40-year-old storage system and call it modern?
1) SAN BAD!!! First of all, you're talking about the storage array, not the SAN. The SAN is the network carrying the data to and from storage targets and initiators. SAN is old crappy technology which is far older and crappier than most people realize. Even with VAAI extensions, SCSI is one of the worst protocols ever considered to work over a cable longer than a meter.
If you're even considering half assed SCSI transfer methods for anything other than booting, you simply did it wrong.
2) NFSv4 and pNFS are pretty nifty, but for the most part we're still talking about a protocol that just doesn't have in-band support for features other than strict file I/O. It's a protocol that has had very little progress in the past 30 years. The VAAI NFS extensions are fairly crap, since VMware decided, in their total lack of wisdom, that in the days of RESTful APIs they would leave it up to every storage vendor to first spend $5000 on a license for a 13-page PDF containing the API reference and then actually implement a full out-of-band protocol for offloading storage. This is so impressively stupid that they shouldn't be allowed near storage. Did you know that after thorough analysis of 5 different VAAI NFS drivers, they turned out to be not only insecure but easily rooted?
SCSI is a protocol where, using Shugart's original documentation from the late 70s, you can write a parser for modern SCSI with some intuition. It has so many problems that a bunch of bumbling idiots implemented FCoE, which might be the worst design ever to grace our networks, to try and carry SCSI packets without the protocol completely crashing.
SCSI is possibly the best local block device protocol ever to grace us, but should never have been used as a network protocol and now, I calculated that 2% of my data center power budget is wasted in SCSI overhead for encapsulating, translating and re-encapsulating SCSI up to 7 times between initiator and target. The massive extra cost of sector translation in a dense blade environment with 160Gb storage per blade, costs over 10% in wasted CPU consumption and heat production because we use SCSI.
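To give a feel for the encapsulation tax, here's a rough per-block framing estimate for plain iSCSI over standard Ethernet. The header sizes are the textbook ones; real deployments (jumbo frames, digests, offload engines) will differ, and this ignores the repeated re-encapsulation between initiator and target described above:

```python
# Rough framing overhead for shipping one 4 KiB block over iSCSI/TCP/IP
# on standard 1500-byte-MTU Ethernet. Illustrative only.
BLOCK = 4096                    # one 4 KiB data block
MTU = 1500                      # Ethernet payload per frame
ETH = 38                        # preamble + header + FCS + interframe gap
IP, TCP, ISCSI_BHS = 20, 20, 48 # IPv4, TCP, iSCSI basic header segment

tcp_payload_per_frame = MTU - IP - TCP
frames = -(-(BLOCK + ISCSI_BHS) // tcp_payload_per_frame)  # ceiling division
wire_bytes = BLOCK + ISCSI_BHS + frames * (ETH + IP + TCP)
overhead = wire_bytes / BLOCK - 1
print(frames, f"{overhead:.1%}")   # frames on the wire, overhead fraction
```

Roughly 7% overhead per hop before you even count sector translation or the multiple re-encapsulations, which is how the waste compounds.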
Add to this that Linux currently has 3 different iSCSI and FC target modules that just don't work in any Linux distro I've tried. When they do work, their stability is questionable... their performance is awesome, but they're all half-baked and their configuration is junk.
SMBv2 and later SMBv3 were completely redesigned and rewritten recently. The protocol is graceful, and its impressively tight integration with Windows Storage Spaces is elegant. Using it alone can have very measurable impacts on data center power and cooling. It can also decrease data center fail-over time thanks to more intelligent tiering and replication support. The protocol overhead is low, the internal security is high and its ability to be accelerated is amazing.
3) Modern isn't a moving target when done wisely. For example, investment in a Windows Storage Spaces or even OpenStack Cinder + Swift solution can actually evolve pretty well over time. Intelligent design of your LVMs based on 10+ year old technology accelerated by modern hardware can show massive performance benefits. Using something as bad as a SAN, you are doomed to fail. SAN was one of the worst ideas ever introduced to the data center (I could easily prove every bit of this in an hour), as SAN has cost us environmentally as well as wasting insane amounts of budget on what is little more than an extremely dumb filer.
Modern storage is easy. Go with what works. Avoid wasting money on VMware solutions. If you really must support VMware, at least use DataCore on Windows, as the massive market fragmentation has left all the other storage vendors sucking. Nexenta's latest version should have been called "Curse!" since they added more incomplete, unbaked and undocumented features than ever. Linux's storage stack is so far from VMware-friendly you should just run. It's like VMware is actually trying to ambush Linux storage. QLogic and Emulex Linux target driver support is so bad that I can't see how NetApp can use their trash instead of just sitting down to make their own. FC and FCoE and Ethernet are not the most difficult technologies to implement. And I have implemented the firmware of a 10GbE controller, so I'm not just talking smack.
It's a sad sad sad sad world when the best storage solution is a SoFS Windows Server based SMB solution. I am actually investing very heavily in Hyper-V now simply because Azure combined with Windows Storage Spaces and Microsoft's SDN solution on top of all Cisco hardware is easy, clean, stable and cheap. I'll leave my HP blades running VMware and other crapware on a NetApp storage solution in case I just need to run OS/2 1.2 or SCO UNIX in a virtual machine at some point.
To be fair, Nexenta is probably the best solution right now for storage if you want something that's really third party. But...
1) Its fibre channel support is horrible
2) Its FCoE support is utterly awful
3) It has great NFS support, but its SMB support is kinda half-assed
4) Its user interface is archaic and barely usable
5) Its options for backup and management just aren't very good
6) Its support for things like VAAI NFS is present... kinda sorta... it's just not there
7) Support for OpenStack storage is OK, but it simply doesn't offer anything you can't just get with Ubuntu or Red Hat running Swift and Cinder
8) Its iSCSI target isn't great. It's better than Microsoft's, but that's not saying much
9) It's yet another system to manage and just doesn't have the integration that modern data centers should expect
10) It doesn't have PowerShell or PowerCLI support. It doesn't have full vCloud Orchestrator, System Center or UCS Director automation support
11) Its REST API looks like it was documented by a 4-year-old with chocolate all over his fingers
12) Calling or mailing support at Nexenta is a gamble. You might as well google the Oracle docs for Solaris, since Nexenta is simply not likely to respond with anything more than an offer to sell you something
13) At some point, Oracle will probably sue the hell out of Nexenta for license violation, and it won't matter if Nexenta is in the right or not. They'll be lucky to keep their heads above water throughout the lawsuit
Don't get me wrong, I LOVE ZFS and use it in my production environment since it's by far the most incredible solution out there, but I just don't see what NexentaStor has to offer other than a REALLY expensive storage stack with tons of half implemented features.
If you actually think you need NexentaStor, you haven't done your research... while I think they'd make a good $99 storage system, I couldn't imagine paying more than that.
Sadly, while there are hundreds of cheap and crappy Linux, BSD and Solaris storage solutions out there right now, Nexenta might be the best of the cheap and crappy. But for OpenStack, the stock Swift and Cinder solution is best. For Windows, Storage Spaces is truly the best thing ever made. For VMware... there is no good storage solution for VMware at the moment. For some reason, VMware decided they would focus on another generation or two of legacy crap packaged as a $2500-per-CPU add-on. You almost have to use a SAN appliance from EMC, NetApp or Hitachi to even keep your head above water there. Nexenta could fit that hole, but unless you want to use some fairly odd SuperMicro twin servers for running Nexenta, there aren't many good hardware solutions for hosting it out there.
Really? I thought we far outgrew that a few years ago. I could see 4x10GbE or a 10GBase-T connection, but 4x1GbE is just sad.
Large amounts of flash are only necessary in a poorly designed data center. All flash is for when you don't properly tier your storage. For so many reasons, storage tiering is a necessity. We need it because CPU and blade memory capacity have now far outstripped our ability to move the data to and from the blades effectively. With a current theoretical maximum bandwidth of 960Gb/s to and from a single rack and a theoretical maximum bandwidth of 160Gb/s to the blade, it is necessary to use more intelligent storage systems than traditional SANs can manage. This comes down to storage tiering and more intelligent storage systems like Windows Storage Spaces or OpenStack Swift. If you really must use nasty old block-based SANs (meaning you're actually still using VMware... yuck), Cisco Invicta isn't a terrible idea.
So, a central storage system running all flash is lame. A tiered storage system made up of 90% 7200rpm spindles and 10% high-performance flash, spread across more servers and more drives, is optimal.
The only case where all flash makes any sense is when mining a single massive data set.
And kids, this is why we don't buy traditional SANs anymore. They're slow!
VMware is claiming to be able to hit a peak of 7 million IOPS with their virtual SAN solution, and with proper systems like Windows Storage Spaces or OpenStack Swift, you can go way, way past that.
The bottleneck in storage performance is centralized controllers like those found in SANs. They just aren't fast. Storage performance is bottlenecked by where deduplication hashes are calculated. A SAN always does it at the storage controllers. VMware Virtual SAN is just a distributed block-based dedup file system. It's also a major hazard during storage failure. Windows Storage Spaces and OpenStack Swift spread the load much more broadly and as a result are much, much faster.
Using Storage Spaces or Swift, it should be cost effective to enable a storage tier at top of rack using two mostly flash-based servers, then a mostly spindle-based tier made up of near-line storage for the entire data center. This means that where VMware caps at about 90,000 IOPS per node, a Storage Spaces or Swift system can do 500,000+ IOPS per rack and carry dedup across the wire, where VMware doesn't carry it to the next tier. VMware's performance is a dog with fleas because of silly cluster limits imposed on ESXi and Virtual SAN. Azure and OpenStack don't have those silly limits tied to storage. As a result, they scale much further.
Even with the stupid design of VMware vSphere 6, it can still scale to about 6,000,000 IOPS (or 7,000,000 if the moon is just right) per 64 blades. Storage Spaces and Swift will scale far past that.
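The cluster-wide figure, for what it's worth, is just the per-node cap multiplied by the 64-node cluster limit:

```python
# Where the ~6 million IOPS ceiling comes from: per-node cap times
# the 64-node cluster limit quoted above.
iops_per_node = 90_000
nodes = 64
cluster_iops = iops_per_node * nodes
print(f"{cluster_iops:,}")   # in the ballpark of the ~6M quoted
```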
So, using a SAN from Hitachi, NetApp, EMC, etc... is just a waste of money.
Let's be honest, the year is 2015... NETCONF is a dog with fleas at the best of times. YANG is utterly a waste of time for anyone who actually has a job to do.
The only reason NETCONF is moderately interesting is that it is supported by OpenDaylight... which rocks. But let's be honest: in the enterprise network (where this product is even slightly useful) we don't need SDN. Enterprise networking is about 90% unused bandwidth anyway. We need SDN in the data center and at the service provider, where there is no such thing as enough bandwidth, and IOS XR and NX-OS support these features without this product.
The version released by Cisco doesn't contain the REST API, which makes it impressively useless. What we actually need is the REST API for legacy IOS devices, which only the expensive version of this product supplies. This released version is just useless.
I ended up having to write an alternative to this from the ground up because NETCONF isn't reliable on IOS 15.
I recently bought a REALLY low-end NetApp 2552. It's extremely heavy and extremely pretty. It's also EXTREMELY slow and EXTREMELY clunky.
I am absolutely amazed at how almost every other solution I've evaluated (other than EMC and IBM) has much better configuration options. NetApp's documentation and naming is so poor, it's just not even worth talking about except as a joke.
The command line and web management tools for NetApp are just not very good. Simple tasks aren't simple. Keeping volumes organized is a mess. Fibre Channel configuration is bad. iSCSI configuration is worse. FCoE configuration is so fantastically ugly it should be taken out and shot.
They have no real tools for troubleshooting.
I find it completely humorous that NetApp is so bad that you actually need to take a class or two before it starts getting untangled. There are 10,000 ways to do things wrong but, especially since Cluster Mode came around, it seems like there's no real way to do things right.
Their hardware architecture is so completely old news that, in a hardware market where they should have taken development of their own converged adapters seriously years ago, they are just a PC in a different box running generic QLogic stuff.
If you need performance for price and you want manageability, use software-based storage solutions. But don't just slap the junk together. Get proper servers with proper power redundancy. Use Cisco C3160s for near-line and C240s for online storage. Use multiple 40Gb/s VIC adapters and manage them using System Center or another orchestrator.
If you REALLY need to use something else, good luck with that. HP's automation tools are a bit sad right now. Dell's are um... we'll not talk about that. Lenovo might not be too bad in time once the documentation is handled properly. It might take a generation or two to get there.
I think that NetApp's biggest problem is that they've grown so big that they no longer understand what data centers need from a storage provider.
So, EMC and VMware released a whole new version of vSphere which was simply a whole lot of nothing new. It's like "We'll charge you more than you've ever heard of for bug fixes and features everyone else gives away for free." They brag about awesome new performance maximums that their competitors consider a baseline configuration. They still have across-the-board product fragmentation.
Best of all, they think that storage and networking are premium features! I'm still laughing my ass off at that one.
So, instead of re-architecting VMware to work like a super beast with EMC and implementing something intelligent like a proper storage system, they made the internal storage system of VMware look more like EMC's legacy junk.
Cisco on the other hand has built their own end-to-end solution which completely eliminates any need for VMware. They built monster redundant storage servers for software-defined storage. They implemented full support for Microsoft NVGRE across their entire Nexus line, including their full internal SDN solution. They even implemented NVGRE acceleration on their VIC cards.
The only thing I can say nice about VMware and EMC anymore is that at least it's ok for running legacy operating systems... really slowly... with no real management.... with half-assed integration.
Oh... should I start on OpenStack? Cisco with OpenStack and pure Cisco servers for storage can deliver massive performance which VMware can't even dream of with EMC.
Either solution costs much less, and both solutions are far better integrated with Cisco UCS and hardware than VMware is. VMware sees automation as a half-assed afterthought. Don't believe me? Try configuring a virtual machine with a serial port and serial port virtualization from PowerCLI; it seems that VMware and EMC don't consider things like full APIs worthwhile.
Let's talk about things like VAAI NFS. The closest thing VMware has to a proper storage solution is NFS. Unlike SCSI based protocols (such as those across the entire new VMware storage solution), NFS is highly flexible and actually can be extended properly to add additional functionality through the NFS RPC mechanism.
Well, here's the deal about that: VMware seems to think that before you can have that, you need to pay $5000 for an API reference before implementing the features. VMware didn't even bother implementing the NFS calls as RPCs, which is what Microsoft and OpenStack did with their solutions. They instead made it so each vendor would have to implement them out-of-band and secure them as well.
Guess what, VMware and EMC: you can keep your clunky junk. I'll give my business and my customers' business to vendors which actually have a clear path forward. I honestly couldn't believe that after all that time, vSphere 6 was the best you could come up with.
This is a clear case of ARM no longer thinking intelligently and now making mistakes they've watched Intel make. They should have learned from those mistakes, but instead they just copied them.
TLS inside the CPU is OK if we limit ourselves to clearly verifiable code. For example, an AES block cipher is easily verified as the algorithm is fixed. You can compare it to software.
MD5 is also pretty easy to verify.
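That's the point: a fixed algorithm can be checked with known-answer tests. Here's a minimal sketch in Python using the MD5 test vectors published in RFC 1321; the `verify` helper is just an illustration, and a hardware engine's output would be fed through the same check:

```python
import hashlib

# Known-answer test: a fixed algorithm like MD5 is verified by comparing
# an implementation's output against published test vectors (RFC 1321).
VECTORS = {
    b"": "d41d8cd98f00b204e9800998ecf8427e",
    b"abc": "900150983cd24fb0d6963f7d28e17f72",
}

def verify(digest_fn):
    """Return True if digest_fn matches every reference vector."""
    return all(digest_fn(msg) == ref for msg, ref in VECTORS.items())

# Software reference; a hardware engine would be checked the same way.
software_md5 = lambda m: hashlib.md5(m).hexdigest()
print(verify(software_md5))   # True
```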
Here's where the problem is: security code should never, ever be static within a chip. As soon as the slightest exploit is found (and it will be), the system running on the chip is trash. It takes A LONG time to harden a security stack, as OpenSSL shows.
There is a far smarter way to handle this, but it will hurt performance per watt, which is critical in IoT devices. Most ASICs tend to include at least some FPGA area to allow patching the chip after release. This is how Intel occasionally makes CPU fixes... by releasing chip firmware updates. The concept is more complex than that (it has to do with instruction interception and such), but to stay on topic: security features belong in the FPGA areas of the chip.
You may not know how AES or other block ciphers and stream hashes work, but they aren't particularly difficult to implement in hardware. In fact, it would be quite easy to implement an FPGA area capable of hardware-accelerated stream and fixed-size block ciphers. It would just be a large number of relatively small ALUs, shift registers and mapped swap functions.
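As a toy illustration of those building blocks (XOR, rotations as you'd get from shift registers, and a fixed substitution table as the mapped swap), here's a miniature Feistel-style round structure in Python. This is NOT a real cipher, just a sketch of the hardware-friendly shape:

```python
# Toy Feistel-style structure from the building blocks above: XOR,
# 8-bit rotation (shift register) and a fixed S-box (mapped swap).
# Deliberately tiny and NOT cryptographically secure.
SBOX = [(i * 7 + 3) % 256 for i in range(256)]   # arbitrary fixed S-box

def rotl8(x, n):
    """Rotate an 8-bit value left by n bits."""
    return ((x << n) | (x >> (8 - n))) & 0xFF

def round_fn(half, key):
    """One round function: XOR with key, rotate, substitute."""
    return SBOX[rotl8(half ^ key, 3)]

def encrypt_block(left, right, keys):
    for k in keys:
        left, right = right, left ^ round_fn(right, k)
    return left, right

def decrypt_block(left, right, keys):
    # The Feistel structure inverts itself with the keys reversed.
    for k in reversed(keys):
        left, right = right ^ round_fn(left, k), left
    return left, right
```

The point is that every operation here maps directly to cheap hardware: lookup tables, wiring permutations and XOR gates.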
When you add things like key exchange, that's where things get hairy. Accelerating key production and verification can be extremely valuable, but there never has been and never will be a time when this should be implemented in ASIC. Here's the reality: you'll use it, it'll work great, someone will find a loophole in your implementation, and now 1 billion+ IoT devices are hackable.
So you send out a library update which moves the security into software... and now 500 million+ devices lack the performance to run it.
Bad form ARM.
These robots were absolutely amazing... the algorithm they used for rebalancing was naturally reactive and just all-around well done.
There was a single thing I noticed which was a bit rough around the edges: the moment when the sensors detected the robot was about to come into range of an object which would require adjusting balance to climb or descend.
I have never played it, but I've watched my wife messing with a game called Clumsy Ninja on the iPad. It's the first and only time I've seen any automated system which actually has a natural flow for approaching objects which need to be climbed. What was most interesting is that the code didn't calculate surfaces based on the perimeter of the rigging during the approach; instead it clearly calculated based on the actual wireframe.
I believe this is what seems off to me about the dogs. I think that the software is calculating for the distance from the point of balance, instead of the point of contact and then compensating reactively.
These guys should really check out Clumsy Ninja; its collision and climbing physics are the most natural I've ever seen. While CN is for bipeds, the proactive collision management system should be adaptable for quadrupeds.
Just use Windows Storage Spaces, problem solved!
VMware, there are limits!
The more I add up the TCO of VMware vs. every other solution, the more I realize that as long as VMware keeps perpetuating their ancient design, there will just be limits which everyone else is blowing past.
A simple 100% fact... everyone else considers storage and networking to be base components of the base package. VMware believes they're add-ons. While NSX is nifty, it just doesn't hold a candle to Microsoft SDN.
Want to talk storage? VMware is bragging about an uber-fast 7 million IOPS storage system with a whopping 90,000 IOPS per node. Storage Spaces scales WAY past that using SOFS.
I was SOOOO looking forward to vSphere 6 and when it hit, all I could think is "Where's the upgrade?"
Goody... yet another guy with a tie who is probably hopeless.
Last week alone I met 50 CIOs of companies ranging from baby-sized to multinational conglomerates. One thing they all had in common: they were CxO ready! This week, I had lunch with the CIO of one of the biggest, best-funded government entities in one of the world's richest countries.
Want to know what CIOs have in common? They generally don't know anything about technology. If you doubt me, take a look at vSphere 6. It's an excellent example of how VMware doesn't even have a sound understanding of what needs to be done in a data center. It's like they read in the news that this was hot and that was hot, and they just slapped together a bunch of products to sell for as much as they could as fast as they could. They don't look like they spent any time at all asking the one simple question: "How can we decrease the total cost of ownership of a data center and improve business agility for our customers?" Instead, they just slapped out a crapload of messy products, called them a suite, and are hoping no one notices.
If the CIO of VMware didn't even take the time to get people to plan a strategy to optimize VMware's business, who the hell would want him in the White House?
He's been at VMware long enough that he could have taken the time to properly plan and deploy a VMware based cloud and build a series of massive data centers based on it, then he could have worked with the development side to make the VMware offering all it needs to be for themselves as the largest consumer of the tech. Then they could productise an actual proven solution.
Oh... wait, why do that.. Microsoft is doing that and look how badly they failed.... as they're chewing away at VMware's profits. haha
RadioShack is still one of the strongest brands on the planet even if they have done everything possible to ruin it by making it a mobile phone store.
The entire world is going entirely nuts about IoT. Cisco just announced a competition with nearly $100,000 in prizes for teenage girls to get involved. And RadioShack, already one of the biggest distributors of IoT technology, can't figure out how to make a profit?
Let me help!
1) Fix the damn website! Shopping on the RadioShack website is painful at best. It feels like a relic from 1998 and is littered with too much crap. Dump it and put something decent there.
2) Internet of Things! RadioShack has been the starting point for everything electronics and computers for millions of people for decades. I can't even guess how many copies of Forrest Mims III's Getting Started in Electronics books and Engineer's Mini-Notebooks they sold. As a child in New York, I spent hours every week with my friends scanning Radio Shack catalogs and store shelves, figuring out how many resistors and capacitors we could buy. If we saved up our candy money together for two weeks, we could get a little breadboard and really make stuff happen.
RadioShack is the place to make that happen. There is probably no company better positioned in the world to spark imagination among children and adults for making IoT happen.
Stock up the shops and get people out there to teach. Sponsor mini-maker fairs at RadioShack stores and sell parts, kits, components, books, etc... Get a 3D printer into each store or make an agreement with the nearest OfficeDepot or UPS store to get robotics up and running in the shops.
3) Sell virtual!
RadioShack used to have a great group of people building kits and tools for people who wanted to learn. Virtual assets are the way to make BIG BUCKS!!! Sell 3D designs that people can print using printers they bought at RadioShack, or pay to have higher-quality versions printed on professional 3D printers. Get the assets going!
4) Time to revive the Tandy way of thinking!
IoT, 3D printing, education, tools, intellectual property... the list goes on and on. Tandy/RadioShack was one of the greatest enterprises ever. What it took was innovation and education. Even to this day, many people remember the Tandy TRS-80 Model 100, which might have been the first portable computer anyone had ever heard of. Bill Gates wrote much of the code on that computer himself. RadioShack has the one and only thing they need to make this all happen again... they have the name. It's time to start innovating.
Lastly, you have the final option...
You can try to stay ahead of the game selling chargers and cases and phones, and just see how long you manage to maintain a huge business and supply chain against smaller, agile players who run on shoestring budgets and zero marketing costs.
I've been making the full shift to Hyper-V and Azure as well. I work on a much larger scale than what you're mentioning, but VMware is dancing around talking about how great their 7 million IOPS storage is, and I haven't seen anything that slow since moving to Windows Storage Spaces.
It's just a shame VMware is still screwing around making 30 different products they can sell one by one, instead of focusing on a solution, which is what MS did.
Last I checked, Windows 10, which is the core of the new Windows Phone OS, will also be running... Windows 10... on ARM.
I wonder if anyone notices that Windows has run (in one flavor or another) on ARM for over 10 years.
Windows 10 on ARM will be completely supported, and the Raspberry Pi 2 port is nothing more than a fun toy which happens to be a BSP port to the Pi 2. It's a great idea if for no other reason than that people will be able to do Pi projects using Visual Studio. You can also do Edison, Galileo and Curie projects the same way, and that's a headless x86 version.
Too bad people write about things without first researching them.
How do you figure? Last I checked, Windows 10, which uses the same kernel across all devices, will run that exact same kernel on Windows Phone. The Raspberry Pi 2 port is nothing more than a fun project which was easy enough to do simply by coding a BSP.
Let's not forget that Windows CE is also a real-time operating system... which in many cases is why it failed so badly as a user platform, as did Symbian when they went that route.
Real-time operating systems are awesome for things which need predictable timing. From a user perspective, they are not nearly as responsive, since the scheduler simply does not prioritize user experience. They're excellent for things like real-time communications but almost always fall flat when running applications.
Of course, in the case of QNX, using a more advanced scheduler which offers real-time guarantees to real-time tasks and prioritized scheduling to user tasks works out well, since the real-time functions tend to use minimal amounts of time when running. Real-time preemption is deadly to the user experience for large tasks like running an EcmaScript thread in a web page.
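To make the scheduling point concrete, here's a toy simulation (Python, purely illustrative — real schedulers are far more sophisticated, and the task names and burst times are made up). It contrasts a strict priority-preemptive "real-time" policy with a fair round-robin one, and shows how a user-facing task gets starved under strict preemption:

```python
from collections import deque

def simulate(policy, tasks):
    """Tick-based toy scheduler (1 tick = 1 ms of CPU).
    tasks: list of (name, priority, burst_ms); lower priority number = more urgent.
    'rt' = strict priority preemption (real-time style);
    'rr' = round-robin time slicing (desktop style).
    Returns {name: completion_time_ms}."""
    remaining = {n: b for n, _, b in tasks}
    prio = {n: p for n, p, _ in tasks}
    queue = deque(n for n, _, _ in tasks)
    done, t = {}, 0
    while remaining:
        t += 1
        if policy == "rt":
            # The highest-priority runnable task wins every single tick.
            name = min(remaining, key=lambda n: prio[n])
        else:
            # Fair slicing: each runnable task gets a tick in turn.
            name = queue[0]
            queue.rotate(-1)
        remaining[name] -= 1
        if remaining[name] == 0:
            del remaining[name]
            if policy == "rr":
                queue.remove(name)
            done[name] = t
    return done

# A hypothetical 30 ms high-priority burst vs a 10 ms UI task:
tasks = [("modem_irq", 0, 30), ("ui_thread", 1, 10)]
rt = simulate("rt", tasks)  # UI waits for the entire burst to finish
rr = simulate("rr", tasks)  # UI interleaves and finishes much sooner
```

Under the "rt" policy the UI task completes only after the full 30 ms burst (at 40 ms), while round-robin lets it finish at 20 ms — which is the sense in which real-time preemption hurts perceived responsiveness.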
I spent A LOT of time porting the Opera Web Browser to QNX back in the day and while QNX rocked for machine control and such, it was a pathetic OS at the time for running a browser. These days, I'm only speculating on the new scheduler behavior, but I do assume that for the work they did for Blackberry, they probably made it a hybrid which makes sense.
As for your comparison of a minimal embedded OS like FreeRTOS with a full-featured OS like QNX, which happens to have a real-time microkernel: other than time-slice management, I just don't see anything else they have in common.
i860 - The original platform for the NT kernel
... In the next 3 years.
Those same consumers may purchase a butt plug... realistically, they probably won't, though... Maybe some of them will.
Let's assume 10 percent of consumers will buy their first TV. Unless 4K is cheap enough that it trumps paying $500 for a 48" 1080p... Not gonna happen for them.
Let's assume 20% of consumers upgrade or replace sets. Unless those 4K sets offer the same value for the money as 1080p, it won't happen for 90% of them.
In short... did CEA mean to say that 30% of consumers actually purchasing a new TV will likely buy 4K? On top of that, have they figured out whether 4K would be why they buy it, or whether 4K is just likely to be available on the nice thin model which looks pretty on the wall?
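Running the rough figures from the comment above (all of these percentages are the comment's own assumptions, not CEA data) shows how far apart "30% buy a TV" and "30% buy 4K" really are:

```python
# Toy arithmetic using the figures assumed above (not CEA's numbers).
first_time = 0.10   # consumers buying their first TV; assumed ~0% pick 4K
upgraders = 0.20    # consumers replacing a set; assumed 90% skip 4K

tv_buyers = first_time + upgraders          # ~30% buy *some* new TV
four_k_buyers = upgraders * 0.10            # only ~2% actually buy 4K
```

Under those assumptions, roughly 30% of consumers buy a TV but only about 2% buy a 4K one — a big gap from the headline figure.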
What is this Blu-Ray thing you speak of? Is it similar to 8-track?
Whitebox works for Google and Facebook, and could work for Amazon and Microsoft, but what those companies have in common is that they can afford their own development teams to support their whitebox designs.
Whitebox isn't for small data centers with only a few thousand hosts. Software-defined networking simply can't work without a stable underlying network. It can really improve performance and scalability, but it can't compensate for a poorly designed and poorly manufactured underlying network.
I was training a tier-1 ISP a few weeks ago on how to successfully deploy SDN as a backbone tech. A month back, I was training an owner of multiple 30 MW data centers on SDN in the DC. What is absolutely clear is that whitebox is for Fortune 100 tech firms only. It's not for the little guys. Whitebox = build your own brand of switch. I say this because I feel entirely confident that I could design the forwarding logic and algorithms in VHDL for a modern SDN-enabled switch, and my friends could develop and maintain the transceivers and analog. I have real-world experience developing multi-gigabit transmission equipment over Ethernet. I would need a staff of at least 2 HDL developers, 1 analog engineer, 1 signals engineer and 3 system-level coders to maintain whitebox switches in a network. The cost of employees, management, tools and software would be a minimum of $3 million per year. Add another million if you want the team to have the skills and resources to actually improve the tech.
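A back-of-envelope check on that staffing estimate (the salary figures and the overhead multiplier here are placeholder assumptions of mine, not quotes from the comment — only the headcounts come from it):

```python
# Hypothetical annual salaries per role; headcounts match the comment above.
team = {
    "HDL developer":      (2, 180_000),
    "analog engineer":    (1, 170_000),
    "signals engineer":   (1, 170_000),
    "system-level coder": (3, 160_000),
}
salaries = sum(count * salary for count, salary in team.values())

# Assumed fully-loaded multiplier covering management, benefits,
# EDA tool licenses, lab equipment and software.
total = salaries * 2.5
```

With those assumed numbers, base salaries land around $1.18M and the fully loaded total near $3M per year — in the same ballpark as the figure above.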
Hmm... Now that I think of it... I know who to hire, and I'm sure they'd come. Maybe I need to find some investors.
It's been a while since I've seen a Blackberry product. I have almost no knowledge of them as they stand now. I know that there's iPhone and Samsung right now. There are a ton of ankle biters like Microsoft and HTC as well.
Consider that the entire ages 13+ market in the western world is already saturated and everyone who will own a phone does own a phone. That means that companies like Microsoft, HTC and Blackberry have to either convince people to switch or need to make their devices so cool and cost effective that kids will want them.
Also, almost all kids who will have budgets for smartphones when they reach 13+ will probably have been using mom and dad's last-generation phone for a few years already. So they will already have most of their stuff on either iPhone or Android.
What exactly is it about the Blackberry that would make people consider switching from either iPhone or Android to their platform? So far as I know, the only differentiating factor they've ever claimed to have is security. I would never, ever buy a device which claims to be secure. That just means it hasn't been targeted enough yet... it also means to me that if security is their central sales pitch, they will be better at hiding it when they do get hacked, to avoid damaging their reputation for as long as they can.
Does Blackberry offer anything that:
a) Will make me look cooler to my friends?
b) Will make my life so much easier that I can't live without it?
c) Provide at least on-par media, navigation, etc... services with what either Apple, Google, Amazon or even Microsoft are already offering internationally?
d) Doesn't make me feel as if I lost something by moving?
e) Have they caught up on services like Siri style voice systems?
f) Can they do things like real time voice language translation services like Google can yet?
g) Can it run Office?
h) Is it compatible with major services like OneDrive, DropBox, etc...
This is why Nokia lost their asses. They sold phones while Apple sold fashion and Google sold services.
Agreed. When I heard the 6 was coming, I decided to see what it would be like to have one. So, I walked around for two hours a day with a toaster held to the side of my head to try it.
I came to the realization that not only did it look almost as stupid as using a tablet computer as a telephone (think jumbo Galaxy thing), it was just plain uncomfortable.
I decided to stay with the phone which actually fits in my pocket.
Am I the only one who wonders why EMC even bothers anymore? Their storage solution has become so unusable that most of the largest customers I know of in a few countries have left them. Their virtualization solution has become such a messy, gigantic, unnavigable kludge of modules that it makes sense to look elsewhere for something more cost-effective, even if it's not as complete.
I thank EMC for having been one of the great founders of many technologies. They were an amazing innovator. Maybe it's time they just rolled aside and let the new, more agile guys take over.
I've been calling myself Lord Darren for several years now, and I have documents from respectable standardization committees validating my title as such. I also have students who refer to me as Lord Darren. My children, when they want something, refer to me as "Lord Master Commander of the Universe".
As such, it becomes much easier for people to properly identify me as opposed to others.
Guys like Icahn have become rich by investing in companies they don't understand. They follow trends and depend on hype alone to alter the value of a company. Today it was announced that the Norwegian company Funcom has made a partnership with Intel for one of their games. The stock is going wild: up 23.5% since the announcement, and based on the normal Norwegian dumb-ass investor trend it will probably go another 10 on top of that before leveling.
First of all, the shareholders have no idea what Funcom does or how their business works.
Second, the investors have far less of an idea what Intel does or how their business works.
Third, investors go nuts over partnerships which exist for no other purpose than making announcements that sound positive to drive the share value up.
I genuinely believe that there should be laws regulating and controlling what people are allowed to invest in. Icahn should probably be blocked altogether. He is a predator and does more harm than good.
People complaining about performance issues should do as some of the others have mentioned. Spend an hour backing up your phone to a PC. Then wipe the entire device to factory defaults. Then reinstall the entire OS. Then pray your backup actually worked, since it doesn't always.
It's an Apple product. It's not supposed to be reliable, dependable or fast. It's supposed to be fashionable. If your data is that important, then you aren't cool enough to use Apple.
I installed this encryption on a phone and took 50 pictures, opened the phone, desoldered the flash, JTAGed a few others and backed up the data, socketed the flash, remounted it, and got through the PIN in an average of 300 tries.
It takes 15 minutes in a lab environment. I plan on making some dough on personal data recovery from broken iPhones.
I just don't see this as effective. It's hardly even an inconvenience. I think I could build a briefcase sized robot to do the whole job.
I almost got my ass kicked by a guy at Oktoberfest this weekend for playing that exact same type of word game.
Microsoft's NT was both of these things. It wasn't that MS turned OS/2 into NT; it was that IBM believed NT was supposed to be the next generation of OS/2, until such time as MS said "we don't need to sell this to IBM". This is almost proven by the fact that early versions of NT actually shipped with an OS/2 subsystem and, if I recall correctly, HPFS support. At that time, OS/2's desktop manager (Presentation Manager, if I remember the name right) was almost exactly the same between OS/2 1.2 and Windows 3. This is why it was so easy to port code.
IBM got really burned on that deal because, as you mentioned, the VMS-style kernel was substantially better than the half-baked solution from IBM in OS/2 v2. The bad part for MS was that it took 12 years before x86 could handle such advanced technology gracefully. Now I have an Arduino-style board from Intel which is running the NT kernel... supported... by Microsoft. Who'd have ever imagined I'd be running Windows NT to change the color of an LED inside a cat toy?
I used to work for one of the patent holders on H.263 and H.264, and they actually used the possibility of patent payments as a means of lying their asses off to shareholders and hyping the stock.
Oddly, I don't see why this is a big deal for Google. It's a valuable decoder and avoiding paying the $25 million will probably cost them $50 million.
This is a tool which is designed to provide DSP processing resources, shipping with development tools and support under one OS. Most of the features involved include shared memory access and synchronization between an ARM processor and 8 DSP cores.
I just can't imagine why the OS would even matter for you. I guess you want to buy a really expensive DSP box and redesign it from scratch?
BTW... do you actually have any idea what a DSP is or how it works and where you would use it?
TI's DSP toolchain has sucked fantastically for years. Their GCC tools for DSP are horrible. Their C++ compiler is trash. The debuggers are even worse in most cases. They keep building their DSPs as if they are trying to make them general-purpose-computing friendly. It attracts shitty developers who can't write pipelined code and then complain about how poorly the DSP performs.
The TI DSP just doesn't offer good performance per watt per dollar spent developing anymore.
These days, it is 100 times smarter to implement a system based, for example, on Altera's tools, which start by simply providing a general-purpose processor in a relatively small footprint. Then you can profile your code and develop VHDL, Verilog or SystemC pipelines to improve performance. This allows multi-stage, single-clock pipelines to be implemented directly in logic, which will provide substantially more performance per watt. Even better, you don't feel like you've been robbed because the compiler sucked.
It's a shame that HP did this... but I guess some dumb ass will buy a pile of these and write code that will never run on anything new. It seems like a stupid investment to use this technology, and a very unlikely one to pay off.
Emma showed promise back when she was on her way to getting an education and doing something useful.
Now she's just a Hollywood airhead whining about being threatened with naked pictures. If she hadn't lost gray matter and just become a Hollywood useless person, it might matter.
She is such a huge disappointment, and I hope my daughter is never, ever like her. My daughter will grow up to have a brain and use it. Emma grew up with a brain and let it rot. I'd be so disappointed if my little girl ever grew up to be a victim like her.
I love her as an actress but I have absolutely no use for her as a person.
Why would Cisco want that flea-ridden dog? They finally dumped them... Too bad it was for NetApp, but at least Invicta is coming along well now.
Cisco just made a huge product refresh and it's all about Hyper-V, System Center and OpenStack.
Cisco doesn't even make new driver builds for UCS VMFEX on VMware anymore.
I don't think we'll see any love there. Besides, EMC doesn't do anything with storage that Cisco can't do better.