> People will probably still buy old-style up-market mechanical wrist watches for about GBP2k.
Cheapskate. A decent Rolex, depending on functionality and metal will be 3 to 20x that price.
> first time a hardware neural network has been put into a consumer product.
My Tesla would like to say Hi.
So they could go back to the good old days and say 'nothing over 56 bit' or some random number above that.
Except - AWS. In ye olde days it would be troublesome to decrypt something unless you had lots of computers, something governments had but the unwashed didn't.
Cores are so cheap to rent now by the thousand. Weak crypto won't work.
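To put rough numbers on that, here's a sketch with illustrative, assumed throughput figures (not benchmarks) of why rented cores make short keys worthless but barely dent modern key lengths:

```python
# Illustrative figures only: assume each rented core tries ~1e9 keys/sec.
def years_to_brute_force(key_bits, cores, keys_per_sec_per_core=1e9):
    """Expected years to search half the keyspace (the average case)."""
    tries = 2 ** (key_bits - 1)
    seconds = tries / (cores * keys_per_sec_per_core)
    return seconds / (365 * 24 * 3600)

# 56-bit key vs. 100,000 rented cores: minutes, not years.
print(years_to_brute_force(56, 100_000))
# 128-bit key, same fleet: on the order of 1e16 years.
print(years_to_brute_force(128, 100_000))
```

The point of the sketch: adding cores buys you a linear speed-up, while each extra key bit doubles the work, so weak crypto falls to a rented fleet but modern key lengths don't.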
Really they can play whack-a-mole and ask / tell each, and, every, single, developer, and, tech, company to give them the private keys.
Excluding China/Russia (oops), that'll work for big companies (in western countries) that provide SSL keys, and large app vendors such as Google, Microsoft etc.
Those pesky criminals, however, will use something else... since 'crypto' worked well before computers. Mine's a copy of 'The Catcher in the Rye'.
Not sure what analysis you were hoping for. I'll have a crack...
*** Analysis ***
Samsung have a lot of these units, which contain last year's hardware and will likely only get 1 year of updates*. Technically a large proportion of the units will be refurbished so, at least in countries like the UK, couldn't be sold as 'new'. Which is one reason they won't be sold here, at least by Samsung.
The price of them is high when you consider the stock of units was already produced, so the only cost was for refurbishment (new, smaller battery). In all likelihood Samsung are trying to 'pull a fast one' and recoup a significant portion of the revenue lost from the original sale.
* Based on the experience of older Samsung devices like the S6, which got 2 years.
== End of Analysis
Side note: As an ex-Note 7 owner who was stiffed by Samsung through this debacle, my boycott of all things Samsung is still going strong. Well, all things apart from posts of course ;-)
> (Instagram) He said he likes that privacy, but would open up the account if compelled to do so under the law.
Except, what would happen is that Instagram would get the secret order demanding access, and he (as a citizen) wouldn't know because Instagram would be compelled to not tell him.
OTOH, if the court did actually compel *him* to provide access, well that seems reasonable.
And, not forgetting that all the TLA/FLA agencies around the world already have access via their dragnets.
Samsung tried stock for a while with the Google Play Edition, but even when they weren't adding TouchWiz they still took forever to release updates. Example: http://www.gsmarena.com/samsung_galaxy_s4_google_play_edition_finally_receives_android_51-news-13959.php
S7 only got recently updated because there isn't a newer flagship; I had the Note7 (RIP), and I suspect the second the S8 ships the S7 will start to get the same treatment the S6 gets currently.
Samsung are crazy; not only do they have TouchWiz, they also have Good Lock, which is actually more useful IMO than TouchWiz but... wow... they must really have nothing better to do than maintain (or not) all these different versions of stuff, an entire Store. Oh, and the Game Launcher stuff, GearVR, etc. etc. etc.
Unfortunately, you've not got many choices.
iOS. Where software guys have to program specifically for the background API: https://developer.apple.com/library/content/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/BackgroundExecution/BackgroundExecution.html
Fail to do it and your app gets paused (and killed, to free RAM)
Android: As per the link you posted, follow the process properly or your app gets paused (eventually) and killed (to free RAM)
BlackBerry: Now Android
Any other contenders?
And yet, I click on the assistant button on the Pixel and have this conversation:
me> what are you
pixel< I'm your Google Assistant
me> what can you do
pixel< Here are some things you can ask for:
For non-Pixel phones you can *also* get it. All you have to do is install a copy of Google Allo from the Play store.
I launch that on my S6 Edge, click on 'Google Assistant', and...
me> what is the point
s6 edge< i think the point is to leave the world better than you found it
So, you *can* get it on any modern phone that runs Android (and has the Play store), though it's not directly linked to a button like, for example, the trash that is S-Voice.
You mean, unlike the customers who are stranded on old versions?
My S6 Edge says 'Android 6.0.1', 'Android security patch level 1 October 2016'.
What happened to 7.0? LG managed to ship that back in August.
Or even, just more security updates since Google keep providing Samsung (and everyone) with patches.
Funnily enough, my (HTC) Pixel says '7.1.1', and is currently updating to the February 2017 patches.
The whole point is that Google are showing that it's entirely possible to bring updates out, consistently, and frequently. That's probably because they don't distract themselves by releasing 1000 variants yearly (SM-G920 / SM-G925 / SM-G928, in F and I variants - and that's just the S6 brand), or some of the recent junk such as the Galaxy J1 mini prime, J3 Emerge, C7 Pro, etc....
> Or an app on your laptop with the built in mic and camera.
Installing an app (on a desktop) for a single use camera isn't ideal either.
And, once you've installed that app, it's got complete access to local filesystems, bluetooth, local network devices (e.g. can start to sweep your local subnet), etc.
Don't get me wrong, I want this new functionality disabled/optional as well, but installing software is much worse for security.
Why do you care about the stats for build-server7? Or, any build-server, for that matter.
Would it not be better for the system to monitor and alert you if something was unusual (job complete/whatever you're looking for), rather than you having to spend time constantly asking so that you can then do the smarts yourself?
Or is there something specific in the response that you couldn't teach the computer to look for and monitor on your behalf?
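As a minimal sketch of the "alert me, don't make me ask" idea (the `get_status` and `notify` callables are hypothetical placeholders, and a real version would sleep between polls):

```python
# Poll a status source and fire a callback only when something changes,
# so a human never has to ask "how is build-server7 doing?" themselves.
def watch(get_status, notify, polls):
    """Call notify(old, new) whenever successive polls of get_status() differ."""
    last = get_status()
    for _ in range(polls):
        current = get_status()
        if current != last:
            notify(last, current)
            last = current

# Hypothetical usage: watch(build_server7_status, page_the_admin, polls=120)
```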
Back in the world of reality.
"Hi Alexa, clone me 500 VMs'"
"Clone me 500 VMs"
"Sure, cloning 500 VMs"
"Which VM did you clone?"
"I don't understand the question."
"Where are the clones"
"I don't understand the question."
"What naming convention did you use?"
"I don't understand the question."
"Clone 500 VMs"
"Happy to. Which VM would you like to use as the source?"
"What VMs are available?"
"I currently see 2327 VMs in Vcenter. Here are their names: VM-Win-A0001100, VM-Win-A0001101, VM-H-Win-B0001100, [...]"
Would love to see how this sort of tech could work in the real world, with real environments.
Integrated systems by most definitions (a.k.a reference architectures, or pre-built collections of servers/storage/switches) aren't considered hyperconverged since there are still separate boxes for those functions.
Hyperconverged, again by most definitions, collapses together at a bare minimum the server/storage pieces, typically with easy management. Some go much further than that, others don't.
Software-defined, not software-only.
Your definition of software-defined might mean no hardware acceleration, but that's just you ;-)
You haven't got to the end of the book then. Typically there are a few pages of another book there, along with a 'go to our website to find out more about <tor publishing>' et al.
> I went to the library
I remember those. I can't wait to tell my grandkids about them, along with 'I remember the days before the internet' and 'we only had 3 channels, one with a girl losing at tic-tac-toe to a clown...' etc.
Two 'key' differences (sorry):
1) How many keys (and doors) do you realistically have? Car, house, err... car, house. Even with 10 keys, it's a problem of scale.
2) > 'users would use the same key everywhere if they could configure the locks.' I'm guessing they likely would.
Absolutely they would! So do jailers, janitors, etc. But that's again a scale issue: even if a key was stolen and not recovered, how many locks would you need to change?
On the internet though, changing hundreds of passwords isn't going to be practical.
OpenID tried to have a go at this but my bank won't accept it, neither will El-Reg, so it's dead to me.
> This will guarantee that it is easy to memorise
I currently have 775 sites, with 869 passwords total. 67 entries are duplicates. I'll fix those soon(tm).
Even if that was only 100 passwords, it'd still be impossible to remember anything but a few of the most recent ones.
Especially when some sites, internal systems etc demand frequent password changes, $tup1d p4$$w0rd$ L1K3 it'll make any differenc3. And then there are sites where you have multiple accounts.
Password management in general is broken. Tools like 1password/Lastpass help, but overall the whole thing is just utterly tosh. I don't have a better alternative, but you can't blame the users for using the same password everywhere...
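To illustrate why unique-per-site only works with tooling, here's a minimal sketch using Python's `secrets` module (the site names are made up):

```python
# A machine has no trouble with one strong, random password per site;
# a human memory does - which is the whole case for a password manager.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!#%^*-_"

def new_password(length=20):
    """Cryptographically random password drawn from the allowed alphabet."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique credential per site - trivial to generate, hopeless to memorise.
vault = {site: new_password() for site in ("bank.example", "forum.example")}
```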
Being a proud owner of Eric's first smartwatch (Allerta) and the non-waterproof kickstarter version of the 1st gen Pebble, it's sad to see the end of it. But as my daily driver is a Huawei, which generally just looks like a watch, it's not really surprising.
Now if only Samsung would answer the phone, or answer emails, or turn up when they said a week ago to collect the replacement unit that is now sitting in my garage.
And, refund me, of course. And, even, maybe refund the accessories that I've been unable to return. And, perhaps, maybe some compensation too for a couple of months of hassle, recalls, battery limitations, airlines telling me to turn it off, notifications telling me that it's faulty, data loss each time I switched, etc...
The XDA Developers Note 7 UK thread is full of others also waiting, also not getting any response... Way to go Samsung at killing your brand.
Graduates seem to leave Uni with media degrees and then start work at a company in the sales department as graduate sales / inside sales.
Amazingly... the companies then train those staff by having them mentored by more senior sales people, and they can typically spend a few years in that pool before migrating into a 'field' sales person.
Yet, these same companies seem uninterested in doing the same with developers?
> “I think that is what sysadmins really want to know: how many IOPS do I lose by running the NVMe devices over a network?”
Or, maybe they really want to know: What business problems does running NVMe over blah blah solve?
I guess the whole 'Software Defined' (storage, networking) isn't happening then?
Why bother with custom ASICs when you can just use off-the-shelf hardware that is plenty fast enough for the job?
Hololens as an example vs. Atom? Really? A Casio watch is more powerful than an Atom. But, also, for single task workloads a custom processor can be useful since it can reduce power and cost by only having the components needed.
> Apple’s new A10 chip, powering iPhone 7, is as one of the fastest CPUs ever.
Also, how inaccurate. Let's compare an A10 vs. a modern Intel CPU.
http://browser.primatelabs.com/processor-benchmarks#2 vs. http://browser.primatelabs.com/ios-benchmarks
The A10 is slower on single core workloads by a large margin, and the multicore result from the A10 is around the same result as the single core result on the Intel. Now turn on multicore on the Intel...
To get the (around) 6x performance increase on the A10 you'd need to bolt another 6 of them together, consuming considerably more space.
Don't mistake me. The A10 is good for a mobile CPU, but nowhere near the 'fastest CPU'.
> They are advised to call Samsung's customer service team on 0330 7261000.
Advised by whom? Advised where?
There are at least 1,000 units in the UK already since the pre-orders from Samsung shipped on Monday. Plus units shipped via CPW and the carriers.
Income != profit. So the ROI will be substantially longer.
Nope. Can't decrypt without the key. Splitting the key, sending the bits via different routes etc - won't make a difference. At the point where you decrypt you need the key. That's the point of attack.
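A quick sketch of that point: XOR-splitting a key into shares protects it in transit (each share alone is uniform noise), but the whole key still has to be reassembled wherever decryption happens, so that remains the point of attack.

```python
# XOR secret sharing: n-1 random shares plus one share that XORs back to the
# key. Any n-1 shares reveal nothing; all n together rebuild the full key.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list:
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor(last, s)   # final share = key XOR all random shares
    return shares + [last]

def recombine(shares) -> bytes:
    key = shares[0]
    for s in shares[1:]:
        key = xor(key, s)
    return key
```

Whatever route the shares take, `recombine` has to run somewhere before decryption - and that somewhere is where an attacker waits.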
From the blog:
“Instead, we want to use the keys to decrypt the data inside a multiparty computation,” says paper co-author Kim Laine [...]. Doing so unencrypts the data for a computation “without actually revealing anything to anyone except the result” of the computation.
And the key. Which can then be conveniently stored somewhere, because.
If the data is properly encrypted, it's pure random noise and no insights into it should be possible (other than it exists and is of size X). If it is decrypted anywhere outside of the organisational boundaries then that means keys have to be sent... at which point all that data outside the organisation sharing that key has the potential to be exposed.
From the whitepaper itself:
> In short, the protocol is secure as long as the cloud is semi-honest and no evaluator cooperates with the cloud. This holds even if the parties are otherwise malicious (simultaneous with the cloud being semi-honest).
However, in this case, if the malicious actor sits in *both* 'cloud' and 'evaluator', as agencies and organised criminals tend to for extended periods, then the protocol is not secure.
So, if you have no adversaries (of size and technical capability) then the cloud is safe anyway. If you do have them, then no amount of 'cleverness' like this is going to make any difference.
Also when you have 24 drives, you (effectively) have 24x the endurance since you can spread writes across all drives.
If a single drive is rated for 150TB of writes during its life, then having a shelf of 24 disks effectively gives you 3.6PB of writes, or around 2TB/day over 5 years.
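That arithmetic as a sketch:

```python
# Spreading writes across a shelf multiplies total endurance.
def shelf_endurance_tb(per_drive_tbw, drives):
    return per_drive_tbw * drives

total_tbw = shelf_endurance_tb(150, 24)   # 3600 TB == 3.6 PB
per_day = total_tbw / (5 * 365)           # sustained rate over 5 years
print(total_tbw, round(per_day, 2))       # → 3600 1.97
```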
Now if only the same could happen for HTML5 videos...
> that way at a later date we can purge all "retain_24m" files at two years and a day.
You'd need to change every app for it to be meaningful, since it's at data creation that you need to set it. E.g. start working on a word doc and put the tag there. In an email, in Excel, in Pages etc etc etc.
And then you'd need to encourage users to use it properly. Which they won't, because they can't. Is someone going to train all the users that a doc they create where they paste in an email address can't have a date of more than X, but if it's a different type of doc then it can have Y?
And each country's DP rules are different, so a multinational would need to figure out how to train users about the country the data belongs to.
Or just store everything forever. I know which offers more 'visible' value to the business...
Pros and cons.
Pro - your single update theory.
Cons (see 'DLL hell' on Wikipedia):
1) Difficult to test your software since you'd need to test against every version of every library that exists, with potential interactions between all of them, since a client machine could have anything installed.
2) if your code has workarounds for buggy implementations they may fail when the bug in the upstream implementation is fixed. Or an incompatible upgrade where going from 'v1' to 'v2' removed some API or added something that you didn't implement because it wasn't needed in 'v1'?
3) Who is responsible for doing the updating? Your app? Every app? Relying on the OS vendor to own and update? If not the OS vendor, then what stops some naughty app adding a bad version that then affects all other users of that particular set of libraries?
4) What happens when an update breaks apps? How is it reverted? An update to 'Pokémon' breaks your banking app. Who's responsible?
> ...to upsticks and the rest will follow like sheep.
Why bother though? What are the advantages to doing so? The multinationals all have offices in most major countries anyway.
> London (and mostly the city) pays a very high percentage of all Tax to the Government
Could that be related to the fact that a large amount of employment and wealth is concentrated there?
Which might explain why the BREXITeers secured the votes they needed outside of London.
> This indirectly subsidises the rest of the country.
Citation needed? It might feel anecdotally true, but would love to see evidence, outside of newspapers and blogs.
> [Fairsail] He wants to hire more EU nationals as developers, partly due to their high quality, all done partly because “there are nowhere near enough British skilled people”.
Total number of employees 51-200.
Let's look to see what type of employees they want: http://www.fairsail.com/working-at-fairsail/
4 entire roles available... (2 developer, 2 tester)
Nowhere near enough British skilled people? I suspect there are more developers than the entirety of his company currently looking for work.
Here's a Senior Developer role: https://fairsailrecruit.secure.force.com/hr/fRecruit__ApplyJob?vacancyNo=VN092
"You’ll be an integral part of the agile[...] embracing the force.com (SFDC) platform and play a key role in the development of one of the UK’s leading force.com solutions. If you don’t possess force.com experience you’ll need to have an appetite for it. "
Hmm... so the experts they need don't need experience in the primary product they develop with.
This has nothing to do with quality or skill levels.
A couple of years ago when I looked at Hyper-Convergence I didn't understand what its purpose was. Just put Edge on 3rd party hardware, and you've got nearly the same thing? Cheap hardware, power of ONTAP, what's not to like? If only customers would have bought it...
Or the Frankenstein that was ONTAP + EVO:RAIL. That's HCI, right? Did anyone buy it?
Now I've spent a bunch of time with actual customers who either have, or want, HCI. They don't have armies of SAN admins, they have generalists who just want things to work. Create a lun? Set up a mount point? Configure complex settings? Troubleshoot is it this device, or that device, or this config, or that? No thank you.
“We've announced the FlexPod lifecycle automation solution[...] that allows you to shrink the time from receiving equipment to serving data to less than an hour. This is competitive with the hyper-converged solutions that are shipping in the market.”
'time to install' isn't the problem hyper-converged solves. Simplicity of operation, removal of complex steps needed to do basic stuff, getting rid of everything below the server, that is where the value is.
However, this isn't true.
If I'm selling two very similar sausages, one I sell for 5p and one I sell for 50p, but I buy the first for 2p and the second for 3p, I'm making 3p profit on each of the cheap sausages and 47p on the second.
The quality of the sausages might be identical. The manufacturing plant, type of meat used, way the baby sausages are grown into big sausages, and everything other than the packaging might all be the same.
Is my 'artisan hand cranked' £10 sausage better than the 5p one? Possibly... but, like wine, are you buying the bottle and the label, or are you buying the stuff inside?
Real world example: Go look at the price for fruit'n'veg in Aldi/Lidl, and then in Waitrose. Probably came from the same farm, same earth... one has nice packaging.
> How about having 2 nodes fail in a Simplivity federation? Also down.
Not a federation. Federations are made up of multiple highly available clusters.
Losing 2+ nodes in a cluster will make the VMs on those two nodes whose data isn't also stored on other nodes become unavailable.
> Then a 2 node Federation with Simplivity also has a 50 % storage overhead since everything is mirrored between 2 nodes
Technically each VM is mirrored to two nodes, so in a 3+ node cluster VMs will be spread across multiple nodes with multiple copies of data across multiple nodes.
All data, regardless, is compressed. Most customers see at least 1.5:1 compression (50%) so actually mirroring is 'free'. Sure, there will be workloads where storage can't compress as well (e.g. video archives) but all systems that erasure code will be less efficient than a shared-storage array such as an E-Series or VNX.
> but Springpath could leverage Erasure coding in the future and bring the RF overhead way down, just like Nutanix can.
Absolutely. And SimpliVity could too (instead of replicating to two nodes, split data over multiple nodes). But 'in the future' everything will do everything, so doesn't really matter right now.
> Then the overhead. Nice that Simplivity only uses 4vcpu's (due to its special dedup card) how about memory overhead?
Typically 2 vCPU; only 4 vCPU when under maximum storage load while also replicating, doing garbage collection etc etc. But regardless, I agree, CPU is mostly less important than memory/storage.
> Simplivity memory overhead is huge and memory is way more important to VM density than CPU
All systems are sized with memory overheads built into the configuration.
Large systems can take up to 1.5TB of RAM and even on the largest system we're only reserving about 101GB of RAM (and average 2 vCPU), so that's still nearly 1.4TB of free RAM for VMs. Should be enough for most workloads.
BTW - all in-line dedupe systems need RAM. This is because you have to store the hash metadata table somewhere, and if it's on SSD you'd incur read I/O for each hash lookup for each (potential) write.
Example: https://wiki.freebsd.org/ZFSTuningGuide#Deduplication - for 20TB of disk (about the same as a large SimpliVity box), will need around 100GB of RAM.
Other systems may reduce RAM requirements by only deduping portions of data E.g. Nutanix: http://nutanixbible.com/ says 'As of 4.5 this has increased to 24GB due to higher metadata efficiencies.'
- so might be great for VDI where you may have lots of the same Windows boot disk of 24GB or so, but probably less useful for other workloads.
If you can think of a way to solve the in-line metadata RAM requirement while still maintaining performance for hash table lookups, patent it quickly and start discussions with all the storage vendors since it'll be quite valuable tech.
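As a back-of-envelope sketch of why the table gets so big (block size and bytes-per-entry are assumptions; real systems vary):

```python
# Every unique block needs an in-memory metadata entry for the hash lookup.
def dedupe_table_ram_gb(capacity_tb, block_kb=8, bytes_per_entry=40):
    blocks = capacity_tb * 1024**3 / block_kb   # capacity in KB / block size
    return blocks * bytes_per_entry / 1024**3   # metadata bytes -> GB

# 20 TB of 8 KB blocks at ~40 bytes of metadata each:
print(dedupe_table_ram_gb(20))  # → 100.0
```

Which lines up with the ZFS guide's ~100GB of RAM for 20TB of deduped disk; halve the block size and the requirement doubles.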
Customer puts a bunch of data in Glacier. Restores more than 5% of it. Then reads its pricing mechanism?
And here I was thinking this cloud stuff was all just free!
It's not even small print. It's disclosed on the pricing page with examples in the FAQ: https://aws.amazon.com/glacier/faqs/#How_much_data_can_I_retrieve_for_free
> EMC, VMware and VCE will soon...
EMC, EMC and EMC will soon...
“For the last several months EMC and EMC partnered very closely to develop a new next-generation hyper-converged appliance family that uniquely leverages technology from EMC, EMC and EMC..."
Imagine a similar press release from, say HDS:
Hitachi and HDS partnered very closely to develop a new.... that uses unique technology from Hitachi and HDS...
It'd just look weird.
Also what technology does VCE own? Some 'engineers' who tune? (source: https://www.vce.com/asset/documents/vce-vs-ref-arch-topline-strategy.pdf)
APRA? Addiction Prevention and Recovery Administration (APRA) or American Professional Rodeo Association?
Using music licensing as an analogy probably isn't going to work.
In the UK we have the PRS (and also the PPL). The PRS definitely charges depending on square footage, number of classes taken, types of equipment, radio vs. jukebox vs. video player (with or without screen) etc.. Enjoy: http://www.prsformusic.com/SiteCollectionDocuments/PPS%20Tariffs/j-tariff.pdf
And a more complicated example from the PRS, that has elements of distance of hearing (so volume) for outside, as well as overlapping music played from multiple locations: http://www.prsformusic.com/SiteCollectionDocuments/PPS%20Tariffs/HR-future-Tariff.pdf
Hi Dan, miss you (and the rest of the gang) too :-)
http://www.netapp.com/us/technology/storage-efficiency/feature-story-compression.aspx - most workloads benefit from dedupe. Not everything, of course, but a significant portion.
Any time you can offload from the back end or save space is a benefit.
I agree, most implementations of dedupe are pants. If a feature is worth using then it should always be on. Being under load isn't an excuse - since systems properly purchased should be as busy as possible.
In an ideal world, everything would be in-line and the idea of a 'post process' for anything would be eliminated since doing work post process just means you're doing the same work twice, along with all the other implementation joys such as having blocks locked in place by other features.
As you rightly note, memory can be a limiting factor if all metadata for hashes is in main memory. Fortunately, that is solvable as well, either using SSDs to hold that metadata, or using scale-out memory solutions. I can understand why not SSDs if you're on a race towards 0, but most workloads don't need it...
If encrypted or pre-compressed (and unique) data hits a dedupe device of any type, it'll not dedupe.
Backups of said data will still dedupe well.
If it's properly in-line, no thrashing, they would (sensibly) generate a hash of the data and go 'new' (store), 'new' (store), 'new' (store) for each block of data.
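That in-line flow, sketched as a toy model (not any vendor's implementation):

```python
# Hash each incoming block once; store only unseen blocks, otherwise
# just add a reference to the existing physical copy.
import hashlib

store = {}   # hash -> block data (the one "physical" copy)
refs = {}    # hash -> reference count

def write_block(data: bytes) -> str:
    h = hashlib.sha256(data).hexdigest()
    if h not in store:
        store[h] = data               # 'new' -> store
    refs[h] = refs.get(h, 0) + 1      # duplicate -> reference only
    return h

for block in (b"aaaa", b"bbbb", b"aaaa"):
    write_block(block)
print(len(store))  # → 2  (three logical writes, two physical blocks)
```

Encrypted or unique pre-compressed data simply takes the 'new' branch every time: it doesn't dedupe, but it doesn't thrash either.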
But, reality check. You encrypt the (physical) disks, not the apps. I've yet to come across any customer who runs server VMs with guest-level encryption. I'm sure there must be some people out there who run per-VM encryption, and they'll already have the restricted product lists that they're used to working with.
Listening to the Tech ONTAP podcast it's something that can still be turned off, so it's still an afterthought.
I'll bet the release notes / documentation for it give a whole long list of reasons to stay away.
Done properly, dedupe speeds things up, not slows things down since you get more out of each deduped cache, and you also avoid I/Os since you can discard duplicates.
If only networked worked that way.
Those people wanting to snoop traffic will do so in ways you (as the subscriber) can never see. For example, port mirroring on switches, or using WCCP redirects on the ports that they are interested in. So ICMP requests will happily continue to wend their merry way, but you aren't using ICMP when you read email...
> So how about a return to the apprentice/journeyman model? The government pays for your degree and you in turn agree to work for X years for the government (or until you pay off the balance of the debt).
Here's a better one. How about, regardless of whether the government pays for your degree or not, you spend the rest of your life working for them?
That's called income tax. About 40% of my time is spent working directly for the government.
The primary reason education should be free is that it pushes the potential income of an individual up for life, and hence the government is repaid via increased tax revenue. This assumes that the quality of degrees being offered are marketable and valuable and that having achieved a degree there is suitable employment related to that degree.
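A worked sketch of that repayment argument, with made-up illustrative numbers (not statistics):

```python
# A state-funded degree repaid through the extra income tax collected
# on a graduate earnings premium.
def years_to_repay(degree_cost, annual_premium, marginal_tax_rate=0.40):
    extra_tax_per_year = annual_premium * marginal_tax_rate
    return degree_cost / extra_tax_per_year

# GBP27k of funding, GBP10k/yr earnings premium, 40% marginal take:
print(years_to_repay(27_000, 10_000))  # → 6.75 (years)
```

The sketch also makes the caveat concrete: if a degree produces no earnings premium, `annual_premium` is zero and the cost is never repaid.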
So remove funding and loans for all the non-valuable degrees e.g. history, literature, 'american studies' (wtf?), languages in general, ethics, films, religion, 'gender analytics in economics' (wtf?) etc, basically anything that either doesn't have an actual paid outcome or where 'on the job' would have been a better use of time.
Make sure the number of places offered for a course resemble the market for a skill. Pretty pointless training 100,000 people in Forensic Science if there are only 4,000 jobs and those are mostly filled by stable candidates.
And when people want to study 'american studies', more power to them. Go fund your own way or persuade an employer to fund it. No need for a loan from any government.
Biting the hand that feeds IT © 1998–2017