no mention of chargeback?
How can anyone allow self provision without constraints or at least charge back for budgeting?
Or risk losing out to someone like Lenovo or maybe HDS by doing so. Who knows, maybe someone else will make a bigger bid. Back when 3PAR was bought, Dell thought it had the deal; my rep came to tell me they were going to Dell, and bam, a day or so later HP came out with a hostile bid.
Haven't used any HCI myself, but the CPU overhead for storage just seems insane; I've heard or read that some systems want 4 cores per node just to do that. Having an ASIC sounds great on paper, though as another poster said perhaps it is slow; I don't know, of course, never having used it. My latest boxes already use 2x 22-core Xeons, and I wouldn't want to give up 4 cores from even those.
Beefing up the Xeons is difficult because, well, of course they have to wait for Intel to do that.
Easy for me, having worked for 2 different social media companies that imploded. Bullshit numbers are the name of the game. At the last social media company I was at, their web traffic numbers were going down at a 45 degree angle on the graphs, month after month.
In my exit interview they had the balls to say I was wrong and their traffic wasn't going down the tubes. I only ran the servers that ran the site; what could I possibly know about website traffic compared to HR.
At the first social media startup we delivered tons of emails. Companies like Yahoo and Google started banning us because of too many bounces. We asked to remove the users from the system so there would be fewer bounces and more email would get through. The answer was "no, that would hurt our user numbers".
That company was basically in the same market as LinkedIn. They would have contests to see how many users employees could get to sign up (same at the 2nd social media company too). I never participated in that shit. My LinkedIn network (1st to 3rd degree?) alone literally exceeded the number of users at my employer's social media company, which at the time was around 50k I think. Clients would pay big bucks to use the site to recruit people. The software was clunky and buggy. We had to fly people on site to hold their hands to use the software. That was expensive. They stopped that practice and usage dropped like a rock overnight.
Oh oh!! I forgot this bit too. That company was at one point going to be THE partner for social hiring on Facebook. They had an agreement and everything. Then literally at the last minute (a few days from launch) Facebook opened up a bunch of APIs and killed the agreement. They gave us some free advertising credits or something as compensation. I laughed so much. Such clueless folk. The CEO has started several companies, and last I heard every single one has gone down in flames. Why the hell do people keep giving that guy money? One of his more recent companies lost several hundred million or something. He seems to be always looking for the next SHINY.
The only social media I use is LinkedIn. Very light usage of that even. But I do find it useful for career shit. I am terrible at keeping in touch with folks.
I don't think they know. Verizon wants one thing (get out, or get a much cheaper price), Yahoo wants another (stick to the deal). Sounds like the contract to buy didn't have enough legalese to settle the situation.
I wouldn't be surprised if it ended up in court. Kind of surprised it hasn't already. I assume because both companies fear losing control over the situation by handing a decision to a 3rd party.
My Lenovo P50 has dual 512GB Samsung 950 Pros (PCIe), and a SATA Samsung 850 Pro 1TB.
These are so small that Intel is really having to scrape the bottom of the ocean to find a use case.
Most orgs will want hot swap high availability storage for their databases.
Amazon? The company that built themselves on the back of oracle databases and hitachi storage?
Won't touch Supermicro with a 10-foot pole for anything that has even a scent of production workload about it.
So there's one reason.
Been using splunk for 11 years, I think I know how it works.
Unless you need to search over all 70 billion at once.
Company I am at has 30 billion events in splunk without a special setup.
No takeaway here everyone knows NVMe is fast.
if AZ is so friendly maybe Uber should take their employees there too.
Glad that they were held to the rules in one case though. What I'll perhaps never understand is why they didn't just register. I mean, with the amount of cash they are burning they don't need the revenue from those self-driving cars picking up people. Just drive them around the city and simulate the activity; they could even launch wads of (1? 5? 10? more?) dollar bills out the window every now and then to attract attention.
Maybe Uber will finally implode some day; that would be a good day for me.
I don't ride taxis often, a few times a year generally (in the U.S. I either drive to where I am going or rent a car at the destination), and to date I do not recall a negative taxi experience (typically I book them by phone at least 6 hours in advance, so maybe that is part of it).
Debian user for 18 years (for personal stuff; work stuff is mostly Ubuntu, and before that CentOS or RHEL), though I still do not trust unattended updates, even on stable (haven't run testing since 2003 I think, never ran unstable). I don't recall issues off the top of my head, but I still prefer the peace of mind of knowing what change is going through.
To my knowledge none of my personal systems have ever been compromised (I have run internet-connected Debian systems since 1998, and Slackware before that; Debian powers my personal email server, DNS, etc). On the systems of my employers, the only compromises I was involved with were on systems that I was not responsible for (on that note, the count is 3 or 4 compromises over the past 16 years).
Perhaps too paranoid, or not paranoid enough not sure.
is probably using the wrong product.
However much you try to get the most efficient and automated system possible, there will always be gaps, and sometimes big ones that people don't expect.
If you want a "real" backup then the backup cannot be connected to the primary (e.g. real time replication or clustering). A tightly integrated backup protects against many failure scenarios but obviously cannot protect from all.
I endured a similar event on a 3PAR system close to 7 years ago now, and I learned a lot during the process. The support at the time was outstanding (since HP took over it has been closer to adequate than outstanding), and made me a more loyal customer as a result. 3PAR determined that case was a one-off as well (at least at the time). The backups I had at the company were limited to small-scope tape backups due to limited budget. Fortunately I was able to pull some miracles out of my ass and bring everything back online in a few days (the storage array itself was back online in a few hours). After all of that, the company axed the disaster recovery budget I had worked on for a month in order to give the funds to another project that they had massively under-budgeted. I left a couple of weeks after that.
I was part of another full-array-failure data loss event more than a decade ago on an EMC system; that was an interesting experience as well, though I wasn't responsible for that system at the time (I supported the front end apps). Maybe 35 hours of downtime, and we were recovering from the occasional corrupted-data issue in Oracle for the next year or two that I was at the company.
The key is of course to realize no system is invincible. There are bugs, there are edge cases, and in highly complex environments those can be nasty. It's certainly very unfortunate that this customer got hit by one of those, but it wasn't the first, and it won't be the last.
The biggest outages I have been a part of have been application-stack related.
Some of the more recent management I work with freak out when shit is down for an hour or two; oh my, they have no idea how bad things can get.
This kind of thing has also kept me more in HP/3PAR's court (a customer now for almost 11 years), because if this kind of thing can happen to a storage platform that is roughly 10 years old, I can only imagine the issues that can happen with the startups. These big 3PAR boxes get a lot more testing, far more deployments, etc.
But it's also probably an indication that HP won't ditch Hitachi for the ultra high end just yet (where they offer 100% guarantees).
In general perhaps I am lucky, or maybe just lazy, in that I don't encounter more issues because I tend not to leverage much of the functionality of the systems I use. Take 3PAR for example: some people are surprised that I haven't used the majority of the software available for the system (e.g. never used replication). Part of that is budget; part of it is that I know there are more bugs in the more complex features (on any platform).
Same with VMware: I have filed on average 1 ticket with HP/VMware support per year over the past 4 years, currently running almost 1,000 VMs. Runs smooth as hell, very few issues, and again much of the more advanced stuff goes unused (even though we have Enterprise Plus, we only really use the distributed virtual switches and host profiles that come with it). I have seen lots of complaints over the years about VMware bugs that I honestly have never hit, I guess because I just don't have a need for those features. The only crashes I have had have been because of hardware failures (maybe 6 in the past 5 years, and none in the 6 years before that, at least while I was at those companies). And no, no plans for vSphere 6 anytime soon.
Same goes for my ethernet switches, the feature set I need on those hasn't changed in a decade. List goes on...
At the end of the day you have to realize what you are protecting against. Right now I am trying to get a tape system approved (with LTFS over NFS) for offline backups. What I am protecting against there is someone breaking into our systems and deleting our data AND our backups. Having offline tape (stored off site) is a good, tried-and-true method of protecting data. I don't expect to ever use it; we use HP StoreOnce for backups and off-site backups, but still, someone could delete data from those just as they could delete data from an API-based cloud system.
Coordinating the return and deletion of all of our tapes is a far bigger task.
Dealing with tape directly isn't fun; I am hoping that LTFS over NFS will make it pretty easy, since all of our backups write to NFS as-is (on StoreOnce), so adapting them to LTFS should not be difficult. I am certainly aiming to avoid working directly with fancy tape backup software, at least.
It would be really cool if StoreOnce could automatically integrate with tape, so I could write over NFS to StoreOnce and then have it write to tape on the backend. It would remove some steps I will otherwise have to do myself. I know there is 3PAR-to-tape automation, but that is too low level and relies on use cases that mostly don't cover what I do.
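Since LTFS presents the cartridge as an ordinary filesystem, the adaptation step above can be sketched with plain tools. This is a minimal sketch under assumptions: the mount points, and the idea of copying straight from the StoreOnce NFS share to an LTFS mount, are illustrations, not a tested workflow.

```shell
# Hedged sketch: push backup sets from the NFS share onto an LTFS mount.
# tape_sync <src> <dst> -- both paths are placeholders (e.g. the StoreOnce
# NFS mount and the LTFS mount point for the loaded cartridge).
tape_sync() {
  src=$1; dst=$2
  cp -a "$src/." "$dst/"   # plain sequential copy; LTFS handles tape layout
  sync                     # flush buffers before unloading the cartridge
}
# e.g. tape_sync /mnt/storeonce/backups /mnt/ltfs
```

The appeal of LTFS here is exactly that no tape-specific tooling appears anywhere in the copy step.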
(Note 3 person, not Note 7.)
Just curious: e.g. I have been blocking AT&T's attempts to install Android 5.0 on my Note 3 for probably 18 months now, just by keeping wifi off 99% of the time, since the update requires wifi to be enabled.
(I have another AT&T Note 3 on the latest Android 5 it supports, and I still much prefer 4.4.4.)
So I am just curious whether, if people wanted to block the update, it might be as simple as doing what I am doing.
Just saying hi to Lior (one of the founders; I know him and a couple of other Excelero folks from the Exanet days). I assume he is reading this article.
On the topic of host vs controller caching, one important aspect is whether you are caching reads or writes. Caching writes at the host layer is much more complicated, of course. QLogic wanted to try this with their Mt Rainier tech (and I was excited about it at the time), but from what I heard that aspect of the tech never got close to making it.
Now if you are hyperconverged then caching writes at the host is more feasible, I imagine.
For me, the bulk of my org's workloads are in excess of 90% write. Most of the read caching is handled in memory in the app layers (it wasn't designed with that purpose in mind, it just ended up that way).
but don't remove the option.
At my org, where we use Duo, approx 18% of the users (according to the monthly Duo report) use SMS or voice, and about 65% use the Duo Push app (most of the rest use a Duo-generated passcode). These numbers haven't changed over the past 6 months. At one point I noticed that, for whatever reason, people located in what might be considered non-1st-world countries (not knowing off the top of my head what constitutes 1st world) seemed more likely to use voice as the 2nd factor, at least in my org.
I don't expect Duo to remove the option, though I suspect their admin UI has the ability to turn off various forms of 2-factor if companies wish (I haven't checked).
If only every social media type thing that allows you to "like" something would allow you to "dislike" as well, then maybe we could make some more progress.
judging a company on one quarter's numbers? The graph is right there in the article. It reminds me a lot of Violin's graph, here is an article from 2013 about them
The heat is on to keep the growth going.
I think if anyone is buying into VDI right now it is probably safe to assume Nutanix will be around in 3-4 years, even if they are in the toilet at that point (see how Violin continues to hang on by a thread).
Yeah, for sure it's an overreaction; the Internet Archive is already subject to existing laws, whether for copyright, porn or other things. Whether or not anyone has ever gone after them for such things I have no idea.
If they are going to move somewhere, there have got to be better places to go than Canada, which is, what, the 52nd or 53rd state or something?
"The idea behind E8 is that not 100 per cent of what an AFA does inside the array must be done in the AFA itself, and since NVMf requires a high-bandwidth low-latency network anyway, there will be no performance hit if those things are done outside the array"
What are those things? It sounds like this thing has absolutely no data services, and that is how they get high performance (which is how Violin started??). How can things like replication or snapshots be done outside the array on a shared volume?
It will probably be a few years until controllers can catch up to NVMe, just like it took several years for them to catch up to regular old SSDs. Maybe by 2020?
Until then people will have to make do with compromises on features if they need raw performance.
Fortunately for most customers this is a non-issue since, as the article says, regular old SAS SSDs are plenty fast already, and will be fast enough for a long time to come.
Mission critical data (and "mission critical" is really in the eye of the beholder) already sits on centralized storage at probably 98% of organizations out there. You simply cannot get that reliability and stability (and those data services) with internal storage on any platform. Even the biggest names in cloud and social media make very large scale use (relative to your typical customer, anyway) of enterprise-class storage systems internally.
Certainly you can put "critical" data on internal drives, though it's highly unlikely that truly mission critical stuff (typically databases and the like, which may be responsible for millions or more in revenue) would sit on anything other than an external storage array (likely fibre channel). Ten or so years ago VMware brought a whole new life to centralized storage, simply because of vMotion.
If you don't understand that then I don't have time to discuss it further.
Though the idea the person in the article is touting sounds neat, getting that kind of thing done right is far easier said than done, and I am not sure when it may happen (certainly none of the solutions on the market are even close). Some solutions do file well, others do block well, others do object well. Nobody comes close to being able to do it all well on a single platform. Maybe it will be another decade or so before we get to that point, if we ever do.
I think at this point the speed of flash is really not important anymore (outside of edge cases). What is far more important is simply cost. Cost is improving but obviously has quite a ways to go still. Many data sets do not dedupe, and lots of data sets come compressed already (e.g. media files). We need to wait for the cost of the raw bits to keep coming down.
SAS-based SSD systems will be plenty fast for a long time to come for most workloads.
I have some mission critical systems that do not use our SANs, though they are generally stateless (web or app servers); there is no mission critical data on them.
For my vSphere 5.5 clusters (no plans to upgrade yet, maybe in a year)
As a Debian user since 2.0 in 1998 (whose personal servers still split /usr, /var, /home etc onto different logical volumes), my hope is that they keep the system rescue utilities on /.
But it sounds like they won't. Stupid, stupid. Sad to see.
Reminds me of one time I tried to rescue a Solaris system many years ago; /usr was not accessible for whatever reason (I forget why now), and of course I couldn't even run 'ls'. I had to rely on 'echo *' to see what files were on the system, to try to find the command to mount or fsck or whatever.
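For anyone who hasn't been in that spot: when the external binaries are unreachable, shell builtins still work, because the shell itself expands the glob. A tiny demo (the directory and file names here are made up for illustration):

```shell
# 'ls' lives on disk (often under /usr), but 'echo' is a shell builtin,
# so 'echo *' can list a directory with no external binaries at all.
mkdir -p /tmp/rescue_demo
touch /tmp/rescue_demo/mount.static /tmp/rescue_demo/vmunix
cd /tmp/rescue_demo
echo *      # prints: mount.static vmunix
echo .* *   # include dotfiles as well
```

The same trick works in any Bourne-style shell, which is exactly what you are stuck with on a half-mounted box.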
Oh well; like systemd, which I haven't had the misfortune of using yet, I guess I'll have to get used to this move too eventually. (*trying not to think about it*)
How many SSDs are in the system? For Pure it says 40 (estimated) for each test. I tried looking for a link to the results in the article but did not see one.
As a long-time HP customer I agree (I tried going the white-box route for a couple of years, only to learn the hard way and come back to HP); they do a lot of stupid things. Though it seems like many (most?) companies are doing stupid things these days in the name of showing some kind of growth. If you are not showing growth then you might as well be dead in the eyes of many stockholders, even if it might take you a decade or two to actually "die".
Which is of course one of the big reasons Dell went private.
The next full recession, whenever it hits (I thought it would have really hit 2-3 years ago; some people argue we have been in one the whole time, and I partially agree, but I think there is significant pain on the horizon), will be interesting to watch across the various industries.
Someone at Cisco is throwing chairs around right about now.
I know these don't use the fancy flash, but I was just curious what a typical HP DL360 Gen9 (1U) can do, and HP says it can go up to 3TB/server with 128GB DIMMs.
They can do NVDIMMs too, though 128GB max?
So other than being cheaper, what is special about this Supermicro thing?
(Enterprise Dyn customer for 7 years now across 3 companies, have never used their free services)
I bet the attack put Dyn on Oracle's radar, showing just how important they (Dyn) are to many companies out there in the cloud. Oracle has lots of cash to blow, and Dyn is likely to be dirt cheap compared to a lot of their purchases.
As a Dyn customer I like the idea of Dyn staying independent, and I certainly do not believe the attack hurt their business all that much (all of their paying customers, especially the big ones, are well aware of the DDoS risks).
I just think some big exec had an "aha" moment (again, after the attack surfaced how critical Dyn is) and decided to make Dyn an offer they felt Dyn wouldn't be able to refuse.
Not sure what you're drinking but I want some.
Not an Apple customer either. With my recent laptop upgrade I too don't expect to upgrade storage for a while. 512GB may be pushing it for now, which is why my Lenovo P50 has 2TB of SSD (2x 512GB Samsung 950 Pro and 1x 1TB Samsung 850 Pro). My previous laptop (2011) had a 512GB Samsung 850 Pro as well (that laptop went through 3 drive upgrades over its life).
For a 'tech pro', 512GB is likely the bare minimum these days.
And how is 3GB per second enough for swapping in place of RAM? I looked up the specs of my i7-6820HQ CPU and it has 34 gigabytes per second of memory bandwidth. I assume that is with all 4 memory slots filled; I am currently using 2 (16GB, upgradable to 64GB).
My 6-year-old Toshiba with its i7-620M CPU apparently has 17 gigabytes per second of memory bandwidth.
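Back-of-envelope on those figures (the numbers are just the ones quoted above, not measurements, and this ignores latency, which is the real killer for swap):

```shell
# DRAM vs NVMe bandwidth ratio, using the figures quoted in this thread:
# 34 GB/s (current box), 17 GB/s (old box), ~3 GB/s for a fast NVMe drive.
awk 'BEGIN {
  printf "new DRAM is %.0fx NVMe bandwidth\n", 34 / 3
  printf "old DRAM is %.0fx NVMe bandwidth\n", 17 / 3
}'
```

So even a 6-year-old laptop's memory bus is roughly 6x faster than the flash, before you even count access latency.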
Apple has the money and skill to make machines for that market, probably their longest most loyal userbase. I bet most of them would be perfectly happy with the older form factors just with updated components.
Sad that they just don't care.
You must be new to this then; what is amazing to me is that it would be amazing to someone else that there weren't such protections in place. It's not as if botnets are new. What was it, a decade or so ago, people clocked unpatched XP systems as being hit by botnets within seconds too?
Oh sorry, 20 minutes; I guess bandwidth was tighter back then. Though the brief outbreak of SQL Slammer had MSSQL systems infected in a shorter time. If people were running (and continue to run) SQL Server (and Mongo etc) exposed to the internet on server-class platforms, I'm not sure how anyone can expect non-technical folks to run their cameras in any more secure a manner.
(Not to knock XP specifically, it just sticks out in my head.)
And as for government intervention, wasn't it someone in the U.S. government saying recently that they are not equipped to handle this? They need a new department, etc. That will take time to spin up, fund, and staff. Enforcement will be difficult as well; look how well copyright enforcement goes, and they are equipped to handle that.
I don't know what the solution is myself; whatever it is, it will likely take a bunch of time and resources and will probably be full of holes anyway. In the meantime, do your best to protect yourself.
Any more than I'd want a Perl shell.
I'm sure it makes a fine interpreter, but as a shell it has always sounded pretty terrible. I say sounded because I've never spent more than a minute or two at a time in PowerShell.
Windows does have the disadvantage of having to deal with objects like the registry instead of text files. Some may like that approach more, but of course it isn't for everyone.
My favorite native shell for Windows dates back maybe 20 years: 4NT. And for DOS, 4DOS. Fond memories of both. I use Cygwin these days, though my usage is pretty limited to a dozen or two commands. 98% of my computer time is in Linux.
Most people writing apps haven't either.
Good luck running an RDBMS etc on object stores?
than my Lenovo P50 laptop, which has a Skylake i7 (Xeon-capable), 16GB RAM (64GB capable), 2TB of SSD (1TB SATA + 2x 512GB NVMe), and an Nvidia M2000M 4GB GPU, running Linux Mint 17 MATE.
My laptop is a bit bigger (14.86" x 9.93" x 0.96-1.16"), probably because it has to include a full-sized keyboard and a 15.6" LCD.
Though I'm assuming it's cheaper than my $3400 laptop (it probably doesn't include a 4k screen by default, even though I run mine at 1080p). I don't see pricing for this new HP thing anywhere.
I love HP ProLiant and 3PAR, though this workstation doesn't seem like much of a workstation if a basic laptop easily outclasses it. (I have an HP xw9400 workstation next to me here, though it is about a decade old; by no means is it small though.)
Per the above poster, my laptop is pretty quiet and comes with a 170W external AC adapter.
From what I have read in user comments over recent years, it seems the bulk of what people call net neutrality could be re-worded as "unlimited Netflix streaming".
It also seems as if the people who want this unlimited Netflix streaming really don't care how it's done; they just pile on to net neutrality as if it will save them.
Has been out there for over a decade. Nothing new. Same with 3rd-party optics, which is why many first-tier vendors flag 3rd-party optics and won't support them (or will only for a premium).
I wouldn't be caught dead deploying a Supermicro ethernet switch though. Plenty of low-cost options out there with real support and better software.
It seems most NAS systems are limited to read-only snapshots, useful for data recovery but not as useful for testing with stuff that wants to write. Not sure why it seems so hard to do for NAS when SAN folks have had read-write snapshots for a very long time now.
Dell OEM deal to fight off... Dell... hmmm
Try getting a /24; it is pretty painful. But /27s are still not hard to get.
Does anyone have an example of new protocols or ideas that this might impact? Just curious; I can't think of any new protocol I have heard of that would have been useful to me in the past decade.
Or can someone just name some useful protocols that have come out in the past decade?
I have been doing networking for the past 16 years or so, though generally basic stuff. There is a bunch of fancy shit out there, I know, that has never had any value to me (e.g. TRILL, but that is a layer 2 thing, totally independent of layer 3 IP of course).
Would HTTP/2 count as such a protocol? I suppose it would, but again I'm perfectly happy with HTTP 1.1.
They are applying kernel updates to 200 systems and rebooting them within 5 minutes without significant impact to availability (otherwise they would be talking about much more than 200 systems).
After all this time, that seems really low for them.
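The mechanics of that kind of rolling update aren't magic; the core is just bounded parallelism, so only a slice of the fleet is down at any moment. A hedged sketch (the host list, the ssh invocation, and the batch size are all placeholders, not whatever tooling they actually use):

```shell
# run_batched <batch_size> <command...>: read hostnames on stdin and run
# the command against each, at most <batch_size> hosts in flight at once.
run_batched() {
  batch=$1; shift
  i=0
  while read -r host; do
    "$@" "$host" &               # e.g. ssh "$host" 'apt-get -y upgrade && reboot'
    i=$((i + 1))
    if [ "$i" -ge "$batch" ]; then
      wait                       # let this batch come back before the next
      i=0
    fi
  done
  wait
}
# e.g. run_batched 10 some_upgrade_command < hosts.txt
```

In practice you would also health-check each batch before moving on, which is where the real engineering effort goes.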
A quick search shows an article saying they expect to hit $100M in annual revenue in the next couple of years. Quite a tough market for storage "startups" these days.
I no longer actively blog, but that is where my WP admin is too. Someone cracked into it earlier in the year somehow (I've never run the vulnerable plugins; maybe they compromised one of the admin accounts).
I ended up locking it down by limiting access to the admin interface to my private VPN. I also deleted all the other accounts (it's been 3 years since anyone other than me wrote articles); no issues since, fortunately.
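For what it's worth, that kind of lockdown can be expressed in a few lines of web server config. This is a generic nginx sketch under assumptions: the VPN subnet and the PHP-FPM socket path are made up for illustration, not a description of my actual setup.

```nginx
# Restrict the WordPress admin surface to VPN clients only.
# 10.8.0.0/24 is a hypothetical VPN subnet; adjust to your own.
location ~ ^/(wp-admin|wp-login\.php) {
    allow 10.8.0.0/24;   # private VPN range
    deny  all;           # everyone else gets 403
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;   # assumed PHP-FPM socket
}
```

That cuts off password guessing and most plugin-exploit entry points without touching WordPress itself.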
Which is one reason why the deadline keeps being extended.
At my org we've been unable to go beyond TLS 1.0 due to a blocking Citrix NetScaler bug I have been working with them on for 18 months (unrelated to TLS, but it blocked upgrading). Now they have a fix due in early December for public use, which means immediate deployment to test environments, then production hopefully in late January, assuming no further blocking bugs are discovered.
We are not vulnerable to the major TLS attacks according to SSL Labs, though (Qualys does regular scans to validate we are PCI compliant).
Hit submit by accident and can't edit posts on mobile.
Wanted to say my only complaint about the camera is low light, though I generally work around that by recording a video in low light and then taking a screenshot of the video.
Long live Android 4.4.
Reading your comment on my Note 3, still my main phone (with a 2nd as backup). Haven't found something good enough to entice me to switch yet. 160GB of total flash storage; finally enough that I don't feel constrained.
I've never recorded 4k video, and the camera has never been set beyond the 8MP setting (14MP max, I think?).
Is to destroy Twitter, Facebook, and other sites like them.
I think LinkedIn is somewhat helpful (to me anyway), though the stupid feed where people post pictures and write comments is dumb.
I figured the Nest smoke detector would be able to text you that your home is burning down, in that case :)
But I do feel for those pros getting burned. What's worse is that Apple literally has more money in the bank than any other tech company, so it has the resources to build a product that fills the power-user niche, probably their longest, most loyal userbase. I didn't pay attention to the event; seeing the MacBook Air get canned in this article is the first I've heard of it.
For whatever reason I was bored last night and spent some time reading comments on Ars Technica about the new Pros. Lotsa upset people who waited years for a lackluster update.
I recently upgraded my laptop after 5 years. It was a massive upgrade for me, whether CPU, RAM, GPU, storage or screen.
I went from a 2011-era Toshiba Tecra A11 (dual-core i7, 8GB RAM, 1600x900, Nvidia video with 512MB, Samsung 850 Pro 512GB, 5.6 pounds) all the way to a Lenovo P50: quad-core i7, 16GB RAM (upgradable to 64GB, though I don't see going past 16GB for a long time), Nvidia video with 8GB, a 1TB Samsung 850 Pro *and* DUAL Samsung 950 Pro 512GB (3 SSDs, 2TB total). Oh, and a 4k display (but because the Windows 7 remote desktop and vSphere clients are unusable at 4k, I run at 1080p, which works fine for me).
I fussed about with 4k for a week or two but don't see much benefit, even if all my apps worked right. It just made everything smaller, forcing me to increase things like font sizes to compensate. I'm sure it is great for media, but for the sysadmin stuff I do it has little value (I do use 16 virtual desktops with edge flipping, which provides far more value). I do not use external monitors either (never have).
But hey, 4k is there waiting for me if I ever change my mind, same with the 4 memory slots that can take me to 64GB of RAM.
The new laptop weighs about the same and is more or less the same physical size. The old laptop was $2300; the new one, all in, about $3500 (the company paid for everything but the dual 950 Pros).
Both laptops stay in Linux Mint 17 99% of the time and dual boot Windows 7 if needed for games; I also run Windows 7 in VMware for some work-related tasks. (You can probably guess how I feel about Windows 10, despite knowing it surely handles 4k better than Win7.)
I'd guesstimate battery life in Linux at around 3 hours, which is fine as it stays plugged in 99.876% of the time. My only complaint for travel is that the power brick is more than double the size of the Toshiba's. I think the P50 tops out at 200W, though with normal usage a Kill A Watt meter showed about 55W total.
Spin out the IP business and call it something like Foundry Networks.
I was thinking there would be trouble for Broadcom if they bought Brocade, given how many companies OEM Broadcom silicon for their switches etc. Good to see they realized that too.