* Posts by Probie

79 publicly visible posts • joined 8 May 2014


Government embarks on futile mission to censor teen music vid viewing

Probie

Re: Teens and music videos?

Age of consent in the UK is not 18; it is lower.

Probie

Re: Won't somebody think of the parents???

Realistically, scanning or searching the app stores for "parental control apps" is stupidly easy, and that alone removes 95% of the nanny-state/Daily Mail arguments.

I have no expectation of someone else being responsible for my children, none, nada, zip, bupkiss, sweet FA. In the end they are MY kids, so I accept responsibility for them, and that includes EVERYTHING they do.

So unfair! Teachers know what’s happening on students' fondleslabs

Probie

Oh no, I can blame the teachers. My daughter's school photocopies all the homework and gives it to the children on a FRIDAY of all times, for handing in on Tuesday; as a consequence I get to review the homework. Frequently I have to correct the photocopy. The spelling is atrocious, the maths problems are frequently incorrectly stated, and for the pity of all the deities do not get me started on grammar.

I know I did piss-poor in my English, but I scraped a pass. But these "teachers", also known as "morons", the exception to Darwin's "survival of the fittest" and a whole host of other unrepeatable colloquialisms, make me look like Jesus walking on water. To top it off, they acknowledge the mistakes, doff said cap and then repeat the mistakes a week later. This from a well-performing, Ofsted-inspected school. So if the teachers cannot produce mistake-free homework for the pupils, how much faith do you think I have in them actually teaching anything apart from how to be a victim of society?

On the other hand, this might give them a fighting chance to do something correctly, but my breath is not being held and I believe there are no rashers of bacon at 30,000 ft. The teachers are already illiterate.

Back to your point: yes, I can blame the teachers. I can blame them very much indeed, as they attempt to fuck up the pupils' education, or more to the point my kids' education.

Sales up, profit up, but no champagne corks popping at Rackspace

Probie

Re: Why?

No, they are not ALL US-based; they have separate corporate entities across the world, such as Rackspace Ltd in the UK. All that said, I am not sure what happens legally with respect to handing over data, especially on (from a USA perspective) foreign sovereign land.

Rackspace have already said they will not compete with AWS on price, and I think in the earnings call there was mention of reselling AWS, or at least getting the staff AWS-trained.

"We have deployed a team that is building the market-leading offering for customers who want specialized expertise and Fanatical Support on the AWS cloud"

- From the mouth of Mr Rhodes, taken from the transcript of the RAX 2015 earnings call posted on seekingalpha.com

But now that makes things really confusing. Personally (and as much as I root for the techies) my view is that they just flat-out lose to AWS, Azure, GCE, Digital Ocean etc. on any measurable or meaningful statistic. RS do not innovate fast enough or wide enough, and execution of an idea is like an octogenarian running a marathon when compared to any other large public cloud vendor. The vision also seems to be as flaccid as a punctured member. In the space of three years: "Largest Open Source Public Cloud provider", to "Someone please buy me as a part of my strategy", to "Leverage other public clouds and put an IT support desk on it". Not something to grab your attention in an excitable, must-"play with the stock" manner, but certainly enough to make me think four to five times before I host anything of worth with them. I mean, really, the "vision" seems to be "survive at any cost" and not "create and sell value".

I think the stock market has had it in for them, as if holding some personal grudge, but on this occasion Rackspace got hammered bang to rights.

Now do not get me wrong, they do try for the customer and will go way beyond the corporate line to make sure a customer is happy. All laudable efforts, but it counts for shit if the upper management do not have a handle on the costs and controls and the sales. The leaders ask the employees to trust them to lead. It seems a shame (and the earnings call and the analysts all seem to line up on this) that the leaders cannot return that epic amount of faith by actually leading.

I should mention many years ago I used to work for Rackspace.

So, in short, to your "Why?": I wouldn't; they do not engender enough... well, anything really.

German spooks want to charge journalists with TREASON for publishing spy plans

Probie

This is not the same

The response here is worse than with Snowden. Snowden was about active, ongoing intelligence and capability; this is about proposed capability (not that I am saying the fuckers are not doing that right now, just that the documents themselves were about proposed capability, according to El Reg).

I say roll on the fat lady, or in this case the Brünnhilde of Streisand.

Let the intelligence folks swing from the gallows (metaphorically) for piss-poor performance. Seriously, I would be more inclined to JUST look inwards for the bunch of dicks who let the information get out in the first place, rather than add an attack on the one type of organisation guaran-fucking-teed to make sure everyone knows about it.

Missing in action: The OpenStackers lost from Gartner's quadrant

Probie

Need Glassholes !!!

Perhaps Gartner could benefit from some new Google Glass clip-ons!

It seems a bit (or massively) crazy that they leave out Mirantis - they must have over 100 organizations using a generally available product that runs on x86_64 and does virtualization, plus management!!!

This one seems impossible to defend.

You did forget to mention that VMware and Cisco are in fact OpenStack contributors and have distributions; seems crazy that they would miss those as well.

For those that care check here https://www.openstack.org/marketplace/distros/

Now listen, Gartner – virtualisation and containers ARE different

Probie

I am with Nate on this, but it does depend on your view.

If the goal is to run multiple application services (be they a micro service or a monolithic service or anywhere in between) on a piece of tin (aka a server), then I really do not see the difference. The hypervisor is a thicker wedge and overhead on a server to compartmentalize workload (hopefully in a secure/isolated sandbox); the container is a thinner wedge with a smaller amount of overhead to compartmentalize workload (hopefully in a secure/isolated sandbox). The principles do not change, just the amount of overhead.

I fail to see the fuss here. Gartner's magic quadrants are about markets, not the method of technology deployment, and without doubt containers and virtualization will compete in some of the same markets.

Apple and Samsung are plotting to KILL OFF the SIM CARD - report

Probie

Re: Wow

I love this... Just out of curiosity, who is the gateway to the customer? Because it sure ain't the device maker. This will end up with the device maker being the network provider's bitch.

It's Nokia from the 1990s all over again.

I am pessimistic because however much of an illusion it might be, I like a degree of control over MY things. This concept appears to remove that control.

Oh, and think about this: you would now have a device that you could NEVER remove from a network and NEVER remove power from at will. TINFOIL hat brigade..... CHARGE....... Oh, the potential for abuse.....

Probie

Re: Cynics

You can hit more than one person at a time with brass knuckles. Have them up in a line and swing really hard; physics and the conservation of momentum do the rest.

But you have to swing REALLY, REALLY HARD. M'okay?

You care about TIN? Why the Open Compute Project is irrelevant

Probie

Misses the point

So of course running a data centre efficiently is hard, no one said it was easy, and this:

“The type of work and focus needed to run a data center effectively is very different than running a short-term project. A data centre requires day in and day out focus on being perfect and making marginal improvements, while avoiding risk to production operations.”

is stating the bleeding obvious and treating most people like idiots.

Most people who look at ARM and exotic projects (for tin) already have a sizeable investment in the data centre (one way or another). So really, stop treating them like children.

Also, you forgot the distinction between pay-for-provision in the cloud (run 24x7, like a day-to-day data centre) and pay-per-drink (only turn it on when you need it).

OCP supporters hit back over testing claims – but there's dissent in the ranks

Probie

Re: Cole is Delusional

Hi Trevor,

I do not have the spare time to do all that research; I am, like you, on fire, but for different reasons, though thankfully I have managed to quit the flying. You make really good points, some of which I hope I answered in my reply to jaybeez.

I think OCP stays relevant for the moment at least, but maybe not visible, especially if it is only for large, hyper-scale deployments.

Black boxes will have a place, such as hyper-converged systems, but in truth they only scale out so far. They are great for the SMB and the smallish enterprise that wants ease of use etc.

A main reason I walked away from OCP was because in the end I could not see a way of making it viable for the small companies. I hope that someone else can prove me wrong.

Probie

Re: Cole is Delusional

Hi jaybeez,

Yes, I was around at the start, from the golden years up to the Santa Clara Open Compute summit, so until 2013 officially in the project, but I worked with an SI (AVNET) for another year or so. I also know YF Juan, the ex-director of ITRI you are referring to, and was in Asia for long enough (about 6 months) when the chapters were being formed for Taiwan, and yes, I know Paul Rad as well (UTSA).

The Centres were an answer to a problem, though the problem is not the one that most people are commenting on. First of all you will have to suspend disbelief for a second and realize that open source software is nothing like open source hardware, and second, stay with me through a rambling explanation.

For a start (and as the main driver for the bitching point here), the licenses used to govern contributions and manufacture are taken from the open source world, where software can be 100% open source, or, if not, can be compartmented up enough that a license granting use or non-assert clauses does not overreach into proprietary code, e.g. the Apache License (ASF). There is no corollary to that that I know of in the hardware world. So Open Compute uses the OWFa 1.0 license, which grants non-assert rights but not transfer of ownership. E.g. "Bob" makes hardware under OWFa 1.0; I can make the hardware exactly the same as "Bob" defined, but I am not allowed any deviation from it because I do not own the IP, and as long as I make the hardware the way "Bob" defined it then he cannot bring a lawsuit against me. At least that is how I understand it to be. Apply that logic to a project, say a motherboard, and see who owns what. Add to that a fear (reasonable in some cases) that to publish any technical details would be to open up Pandora's box on your IP, and you can see a) why OWF was adopted and b) why published detailed technical information on a component is scarce. Rather, it is easier to publish specifications that force a particular way of doing things, generally only ONE way of doing things.

So in a way we have an open-ish thing with a black box core as a result, because to get to a meaningful state where you can understand this thing you have had to sign NDAs with ODMs and other manufacturers. Remember, the specification does not say HOW you do things, just what it has to be at the end.

Now jump back to the start of Open Compute and remember that this is a project for hyperscale deployments. ASDF has a couple of things wrong, in my opinion: a) nearly all public clouds of note, certainly the large ones, run on some sort of "bespoke" hardware; b) most users do not give a crap about that, they care about their workload running, not what makes it run, and as long as there is an SLA they feel fine. The same can probably be said of large big data farms as well; I mean, do you think AWS run around changing failed drives every second? So, having open sourced a specification that works for hyperscale deployments, where substantial amounts of money can be thrown at hardware by in-house testing teams or contracted testing teams, what do we do to say ODM A produces something specification-equivalent to ODM B? Or that servers from ODM A work with Knox JBODs from ODM B? We cannot open source testing for components because we most likely have an NDA against them, and we cannot open source the IP; the only thing we can do is provide tools to test against the specification, or at least that is all we can do from inside Open Compute as a foundation.

Independent testing labs can/could go a lot further, but, and here is the catch, it still requires the IP holders' consent, and then we also get into that cost exercise I described in a post above.

So, considering that the use case is hyperscale, and considering the tools at our disposal to help the community, we have two wildly diverging points of view.

Then overlay how ODMs make a run of motherboards or backplanes: say 20,000 in a month, because you need that volume to swallow the tooling costs for the production line (including any revenue potentially lost to the ODM by holding up other production if you need it in a rush). No ODM wants to make 20,000 motherboards just for them to sit in a warehouse; you need to take delivery of the 20,000 motherboards. This tends to put the non-hyperscale guy out of the purchasing equation.

So I go back to my two diverging points of view and only one becomes relevant.

Hopefully you can see now a) where OCP is applicable, b) why the certification is what it is, and c) why this is not prime time for non-hyperscale.

That's not an end though, because OCP is also supposed to foster innovation, and innovation can trickle down to the little person or the non-hyperscale person. I have seen precious little of that myself in the last year or so, but then I have not tried to find it either; it seems happy to just "bimble along", and in that sense ASDF is right. What I can say, though, is that ODMs have seen a path to take more of the "value" out of the supply chain, for example Quanta with QCT, and OEMs have responded, with HP whiteboxes by Foxconn and Dell with DCS (although they were around before OCP). We have seen innovation from storage companies as well, think closely coupled compute and storage; Seagate Kinetic was back-ported to Knox, I think.

So, all in all, I have to say OCP has been somewhat successful at what IT said it was going to deliver and not what people HOPE it is going to deliver. So Cole's main points stand, although I may not agree with the method of elucidation or the self-flagellation around how the points were conveyed. Also, I think there are things OCP could do a lot better: it could be more communicative, and it could take the internal and external feedback on board better. I am sad that OCP is still at that juncture - something that I saw in 2012-13 - but it is what it is. The problems are complex (perhaps needlessly), and what I have described above is like peeling off a layer of paint; there is still more underneath. So most of the OCP folks make what I consider an admirable effort in trying to keep all appeased, though I agree the outliers sometimes need a taser up the backside.

As for the external testing, well the answer may well be wrapped up in the explanation above. Proof of life (if you can call this proof of life) is using the taser and seeing if the C and I chair squeaks or smokes.

Now to keep all the lawyers happy, this is my personal subjective view, treat it as a hypothetical conversation if you like.

Probie

Re: Cole is Delusional

Me, write a piece? Have you seen my English? It's like I was taught by a weasel!! Holy crap, if you saw my OCP charter you would not ask!!

Honestly, I am not sure this is technical in nature. I think it's more about listening to the community around them, and not just the signed-up members but also the nascent rumble from everyone else.

As I have hinted, right now I think OCP is in mass-deployment-only mode. That is a hard view to change. Also, I have no context for what really made the anonymous test engineer break cover. One thing I am sure of, though: you are not going to break how IT procurement is done. The SIs such as Penguin, Quanta etc. see the opportunity. It's about customer momentum and how to make that happen.

That said, 'mouth and money': I can see what's involved in writing something, if you are game...

Probie

Re: Cole is Delusional

It's hard not to agree with you on the quality of a major design element such as a server motherboard or storage backplane, and honestly, if Open Compute "could" publish everything then it would make life a lot easier, but not being able to publish proprietary stuff makes that impossible. This is why I think Barbara's point resonates so much.

When I suggested the SIs in the supply chain look at taking on that burden, it was because I know that is a viable alternative. Honestly, if the SIs got their shit together they could then pool the results to the community at large. That would solve both problems. Of course there would be someone footing the bill, but likely that would be a large anchor customer, for example Facebook, for a specific run of motherboards or storage etc., but it's an enabler, a way over the hump, and it's an embryo of an idea. Why not run it by the OCP board, either Frank, Mark or Andy? From the sounds of it Barbara did not have much luck with the C and I project lead, and for that reason I would leapfrog the innovation committee.

And when I say "why not run it by", I mean could El Reg run an opinion piece? That would be a way of putting two pennies in for all the little guys.

Probie

Re: Cole is Delusional

Trevor, it's rare I agree with Cole enough to write about things, but I have to break a long-standing tradition of saying sweet FA, and now I have to publicly agree with Cole. At least on this one.

My name is James Hesketh and I was there at Rackspace as part of the founding member team; I am the guy who chaired the Virtual IO project, and I left a while ago. The great opportunity I personally saw in Open Compute is the one Trevor champions, the one where the little guy gets to use Open Compute; sadly for me the project is not there now and I doubt it ever will be. But that is not Cole's or the Open Compute Project's problem. There are some points from the founding that probably need to be revisited:

1) Driving down cost is an important aspect, but WHERE do you drive down the cost? Open Compute was about reduced cost through efficiency, not primarily about reduced cost at point of purchase. This leads to a long debate on how IT equipment is produced; talk to the ODMs about how stuff is produced.

2) How do you do hardware and testing on open hardware - and just for shits and giggles, think about the testing argument you waded in on with Nutanix and VMware. Open servers have proprietary shit in them too. Do you think OEMs place all the testing out there in the open, or shut the doors on a whole bunch o' crap? When you do certification, what do you do it on: different vendors' hard disks, different RAID controllers, NICs, memory? Just where do you draw the line? And do not get me started about firmware updates and the whole cycle of testing. ANY ODM could make the motherboard, so to make the testing comprehensive and not biased, how many configurations do you test? (See the quick sketch after this list for how fast that matrix blows up.)

3) i) Costs. I mention costs because with certification there comes LIABILITY. You said "this" will work with "that": who has liability for what? Open Compute is an overarching body, it's a council, it does not fulfil purchase orders. ii) Not to mention, who the hell pays for a full Tier 1 testing regime? You, the end customer? You want that cost added to the purchase price?
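To put a rough number on point 2, here is a hypothetical back-of-the-envelope sketch in Python of how quickly that test matrix blows up. The component lists and counts are invented purely for illustration, not taken from any real OCP BOM; the only point is that the product of the options grows very fast.

# Hypothetical test-matrix explosion for ONE motherboard design.
# All component lists below are made up for illustration.
from itertools import product

disks = ["vendorA-sas", "vendorB-sata", "vendorC-ssd"]   # 3 options
raid_cards = ["raid-x", "raid-y"]                        # 2 options
nics = ["10g-a", "10g-b", "1g-c"]                        # 3 options
dimms = ["8g-1333", "16g-1600", "32g-1866"]              # 3 options
firmware = ["fw-1.0", "fw-1.1"]                          # 2 options

configs = list(product(disks, raid_cards, nics, dimms, firmware))
print(len(configs))  # 3 * 2 * 3 * 3 * 2 = 108 configurations to test

And that is before you add chassis variants, OS versions or the next firmware drop.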

All that said, Barbara raises good points, and ones that cannot be brushed away, but Barbara's point does nothing to address the points above; it compounds them, and even the remedy compounds them if undertaken by the Open Compute Project. It has to be done outside of the Open Compute Project main body, because of the Open Compute ecosystem.

Picture this: in a chain of companies that create and fulfil an order for "something" open source, where is the "value"? Value can mean a lot of things, but in this case I mean "where is the bit where I can say: here is something that you, Mr Customer, value, and that I can assign a monetary value to". Arguably, in Open Compute, deeper testing/certification can be provided in the supply chain, most likely at the system integrator, and at a "cost" if that is what is desired, or as an end customer I can forgo that and do it myself.

Trevor, I read your articles regularly and generally agree with a large percentage of what you write, and in my heart of hearts I truly wish "this" certification and testing were up to Tier 1 standards so everyone could use it without fear, but that does not come free and the cost of filling in the "gaps" falls to someone. So if you really want to have an OCP certification standard that is equal or better to an OEM standard, then by all means ask for it, but expect to pay for it, and not just fiscally either. I trust you are wise enough to see the parallel with Cole's comment about Facebook doing testing.

As I have said in the past, "buyer beware". Holy shit, look at the difference between Red Hat and CentOS, or Debian and Ubuntu, or SUSE Linux and openSUSE, etc. This is philosophically no different; seriously, why do people buy Red Hat support subscriptions when they could run CentOS?

To Cole's other point, it is easy to be critical or to castigate from afar and do little else (I personally have not seen you do it), but I have seen the rants of open source software folks on this comment list. How many times have people had to change something just to make it "functional" because it was written for a particular flavour of Linux? I know I have had to do it for community editions of software that have an "enterprise - pay me for it" edition; frequently I contribute the change back somehow and make the world just an infinitesimal amount better off. Of course, that is the open source way.

Open Compute stuff is not ephemeral, it is not code, it is physical; it needs to justify the expense of companies developing and building for orders. It has to show a return; it somehow has to conform to a degree of business economics. In the end the community vote with the wallet, and if there is no wallet for Open Compute then the Open Compute Project dies on the vine.

Rackspace to resell and support Microsoft's Azure

Probie

Seriously ...?

Flucking idiots. I am sure someone in the product team got a directorship out of this stroke of genius.

Open Compute Project testing is a 'complete and total joke'

Probie

Open letter for an Open Project

So, I was involved in OCP from the start, running a project group (Virtual IO), and I got involved as an interested party in the testing/certification delivery, asked to look at it by my then employer. I should also mention I know Paul Rad and YF Juan both professionally and personally.

I left, or got removed from, OCP, depending on the perspective that you want to employ, around 2012/2013. I know I certainly made the decision to no longer participate in OCP at the summit in 2013, although my project mandate got split and subsumed around the time of the announcement of Intel donating certain optical interconnects. Most of the reasoning behind this is not really germane to this conversation; one part most certainly is, and that is the applicability of OCP projects.

At the time (and nothing has led me to think otherwise since) the project as a whole was, and is, geared towards massive scale deployments (or hyper-scale for those with a marketing disposition). In other words, it was not meant for the legacy enterprise workload. It is true to say that, given some application development, some effort in testing, and a nice tailwind, the enterprise could use OCP hardware; it was certainly defined as an undertaking. However, my personally held view, and one that resonates in OCP to varying levels, is this:

"ANY entity wanting to use OCP hardware has to take a level of responsibility in testing this equipment for itself, to satisfy itself that the necessary criteria that it needs met are met. If an entity or company EXPECTS this as a defacto provided service then an OEM should be your first port of call for your hardware, or you need to revise the expectation."

The fundamental, absolute, personally held truths are these: if you are not prepared to accept a "degrade in place" infrastructure model, a model that places the methodology and implementation of data integrity at the "software/application" level, then think very hard about the hardware you use and think very hard about the needs of the IT landscape you are overseeing; and "OCP is about the delivery of open source hardware to a community. It was not about being an OEM."

As for the enterprises, the consumers using OCP, I consider them to this day vital to the effort. More than most people know, they have contributed to OCP, they have contributed to moving the rhetoric on from a single entity (Facebook) to multiple entities, and perhaps at some junction in the future they will have enabled an ecosystem where community-driven software and community-driven hardware can be used whatever the scale.

The project for certification (at the time this was headed by a representative from the financial industry, whose name I am ashamed to admit I cannot remember, and one from the distribution and supply industry, whose privacy I respect so he stays nameless) started out of a need: making sure that equipment was built to the specification, that, say, something from ODM A was the same as from ODM B, at least on a functional level, and providing a set of guiding or example scripts or harnesses so that sniff tests could be performed. I specified functional level as there needs to be a degree of flexibility around a hardware BOM. I remember discussions around expanding that reasoning, but I took myself out of OCP at that time.

I do not hold 20 years of testing in my resume/CV; I barely register a quarter of that time, unlike the anonymous testing engineer. I also have not kept up with Open Compute in any material sense. So I could be speaking out of turn here, but it seems doubtful they are trying to make an OEM-like certification here, at least publicly.

As for where the labs and facilities are located, it might be wiser to ask the question "Where was the community effort located at the time?", rather than speculate or inform on who is running what and the previous history or specific egos involved. Not because I want to defend individuals, but because sometimes the most obvious answers are the right ones. Ask yourself this: "If I started a community project, where would I try and locate resources, people, projects, 'things that need doing and need presence'?" The rest of who does what I have no idea about, or interest in.

On a final note, I would be very interested to learn if the anonymous testing engineer has submitted a proposal, plans, assistance or general help in rectifying what he sees as a problem. I assume he has; few people make a loud cry and stay anonymous without at least trying to fix the problem. I for one would be very interested in knowing the message he got back from OCP, if indeed he did, as I assume, discuss things with them.

The great thing about it being open, though, is that people get to judge for themselves if this is something they want, in their own way, releasing their own results. People do not have to take my view or the 20-year testing engineer's view; you get to choose what you want to do, with, hopefully, a bit more transparency. At least I hope that is what an open community project stands for, because that is what it should stand for. I cannot necessarily have the same hope for OEMs; what was it someone said about the condition upon which God hath given Liberty?

I am sure the Register can check me out to see if I am kosher, if anybody cares.

Regards

Probie

Nutanix vs VMware blog war descends into 'he said, she said' farce

Probie

Re: It should be about ....

I agree it does mean you need to understand your workload, but without trying to sound like some pompous prick, understanding the workload and suggesting/doing some "real world"-like tests is part of doing the job right.

Probie

It should be about ....

After testing loads of virtualization platforms with regard to hardware and end-user performance in previous jobs, I can honestly say that it comes down to "good enough performance". Ultimate performance is a pointless reference, because well before you get that embryonic thought germinated, 90% of the time the "TCO" kneecaps it with a cost-benefit shotgun, and quite frankly that is correct.

Once "acceptable" performance has been reached, the main driver is one of cost per "unit" in this case I would suggest VM over the lifetime of the solution. that includes ease of Administration, ease of upgrade, how to scale ...cost of licenses and features etc. These all influence the solution architecture and that I see where vendor value can be demonstrated.

All that said, though, if you are an enterprise you will run your own PoC to prove things to a level of satisfaction; if not, well, the phrase "Sucks to be you..." springs to mind. If you are a small company the rules stay the same, but you might want to check on the entity that is doing the job for you. In, say, someone like Trevor Pott's case, you cannot say he does not test and quantify the technology he would routinely use; if you find some back-street IT dealer with a crap reputation then "Sucks...". In either case this war between Nutanix and VMware serves no purpose other than marketing, and should be seen and treated as such.

Personally though I laughed at this

"30 distinct steps needed to set up a resilient vCenter data centre deployment" ... The Nutanix equivalent is much more simple ..."

Seriously, if you are wanting a resilient system, who gives a flying pig's arse about how many steps it took, or a 5-minute setup vs a 1-day/1-week setup? You should care about it being done right.

Mass break-in: researchers catch 22 more routers for the SOHOpeless list

Probie

The old way

I am for ISPs just giving a modem or a bridge and opening up a market for Sophos, Cisco, Juniper, Palo Alto, etc. (the list of enterprise vendors goes on) to provide a small appliance. Really the problem is that carriers only certify a small list of "many function" routers. So why not just make it simple and go back to providing a link with a connecting device that "just" bridges mediums? It does make it the responsibility of the end customer, but perhaps if people were more aware of their own security it might make for a better outcome.

Scale Computing: Not for enterprise, but that's all part of the plan

Probie

Awwww hell ....

I have been using KVM for years; I found that my personal philosophy of "reasonable cost" vs features ruled out VMware somewhere around the last epoch. Hyper-V was interesting but felt bloated, and Xen was a pain in the backside with regards to management and not using a Windows machine. So as a result I have been using KVM for quite a while. Storage-wise there are a whole bunch of open source alternatives that are: a) stable, b) rationalized. For me though NFS is good, although for SMB hyper-convergence it is not the right choice; maybe DRBD, maybe Sheepdog, etc. The only thing I am missing is automatic HA. That can be fixed with a number of other open source initiatives. The setup has been stable for years. The only thing I wished I could get my hands on would be some 12-drive 1Us with a RAID card and expander card, and then I would be golden. The only thing I would say is that it is worth thinking about 10Gb per node, no matter the node hardware BOM. It will be common to need more than 2 Gb/s of bursting bandwidth, and 10Gb has come down in price.
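On the missing automatic HA: a minimal sketch of the sort of poor man's restart-on-failure loop that can be bolted onto a single KVM host is below. It assumes the libvirt Python bindings and qemu:///system access, and the 60-second interval is arbitrary; it is no substitute for proper cluster HA with fencing and shared state from something like Pacemaker.

# Poor man's "HA": restart any defined libvirt guest that has stopped.
# Assumes python3-libvirt and permission to talk to qemu:///system.
import time
import libvirt

def restart_stopped_guests(uri="qemu:///system"):
    conn = libvirt.open(uri)
    try:
        for dom in conn.listAllDomains():
            if not dom.isActive():
                print(f"restarting {dom.name()}")
                dom.create()  # start the defined-but-stopped guest
    finally:
        conn.close()

if __name__ == "__main__":
    while True:
        restart_stopped_guests()
        time.sleep(60)  # illustrative polling interval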

Oh, and for moving, there are a couple of packages that allow you to convert to the qcow2 format; they are mostly in VirtualBox though. Still, it's free (as in beer).

BT Home Hub SIP backdoor blunder blamed for VoIP fraud

Probie

Re: Well watch out for 192.168.99.X

The point I was making was that BT ARE routing 192.168.99.X/24 out of my Home Hub to the wider BT network, i.e. the traffic is NOT being dropped as per IANA.

Also, some FWs have some "helpful" - AKA stupid - default groups users can use, like 192.168.0.0/16. Just for the reason you highlighted.

So whilst I would agree with you on a proper firewall, having BT in the mix means not trusting BT, no way. So I have two: the Swiss-cheese Emmental BT Home Hub and a more fine-grained firewall, which is a BSD variant. Yes, the BT hub is that untrustworthy. And because the Home Hub does not allow for any static routes, if I want anything outside of a single flat network - and the Home Hub maps to MAC address and NOT IP address (try giving your laptop two IP addresses on the Home Hub and see what happens!) - I need another NAT function somewhere.

I do get it though - the Home Hub is aimed at uneducated consumers - it tries to be helpful, but seemingly fails in an epic manner when presented with anything coming even close to "designed".

IPv6 will remove NAT, which frankly makes life easier; I still need a firewall/modem/bridge to pass traffic through, so a firewall rule set will still apply. Although it remains to be seen if NAT will be pushed through on IPv6. Least trust on everything, and then modify a rule when the kids scream "the internet is broken".

Probie

Well watch out for 192.168.99.X

Yeah, BT are regularly doing screwy things. I recently did a local nmap of my networks and found 192.168.99.X responding, which was strange as I do not use 192.168.99.X.

Traceroute showed:

traceroute to 192.168.99.100 (192.168.99.100), 30 hops max, 60 byte packets
 1  my internal router before the Home Hub (192.168.X.XXX)  4.719 ms  11.084 ms  14.443 ms
 2  BT Home Hub (192.168.0.254)  15.342 ms  15.556 ms  16.438 ms
 3  217.32.146.173 (217.32.146.173)  18.579 ms  18.580 ms  22.018 ms
 4  217.32.146.238 (217.32.146.238)  22.026 ms  23.532 ms  *
 5  213.120.156.202 (213.120.156.202)  29.742 ms  29.748 ms  46.668 ms
 6  217.41.168.201 (217.41.168.201)  46.674 ms  5.724 ms  6.427 ms
 7  217.41.168.109 (217.41.168.109)  7.364 ms  7.365 ms  7.913 ms
 8  213.120.183.30 (213.120.183.30)  36.461 ms  *  *

Looking up the 213 address shows it is a BT address, and a reasonably port-stealthed one at that. And there was me thinking 192.168.0.0/16 was all PRIVATE. I could see nothing on the forums about this, so right now all my internal clients null route 192.168.99.0/24 - well, all that I can get hold of anyway; the ones where I cannot access the OS rest behind two other firewalls.
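For anyone wanting to do the same, a minimal sketch of null routing that range on a Linux client is below; it assumes iproute2 and root, and simply wraps the usual blackhole route.

# Blackhole the BT-advertised 192.168.99.0/24 so this client never
# sends traffic towards it. Equivalent to running the ip command by hand.
import subprocess

subprocess.run(
    ["ip", "route", "add", "blackhole", "192.168.99.0/24"],
    check=True,
)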

As I have never trusted BT (or any other ISP for that matter), there are multiple firewalls/routers between the Home Hub and my home network. I should thank BT really for the free security and paranoia course. Almost enough to make you think they are out to get you.

I do agree with "lost all faith": you cannot abrogate responsibility for your security. The fact that VoIP worked for incoming calls should have raised a warning flag, but I also expect something to do what it reports it is doing.

Apple orders white box servers from Taiwan for data centre refresh

Probie

Why do you need hyperscale to use white boxes?

Seriously, why would it be so hard for the enterprise to consume whiteboxes? Think about it: if you have "enough" enterprises to make a community (both as a supply chain community and a more traditional open source software/support community), then what little value the OEMs provide is chipped away. If you say a "guarantee of performance", I am sure the ODMs could provide that at some cost between a whitebox and an OEM model.

Do not get me wrong, OEMs can be great at innovating things, but lately they seem to be in the "me too" phase rather than innovating. Then again, the software vendors seem to be being a bit shy about that as well with regards to HCLs vs support etc. There is a lot of needless waste that can be trimmed out of the supply chain to enterprises. What a white box solution challenges the OEMs and the software vendors (see OS vendors, virtualization vendors etc.) to do is to be transparent about what they are charging for, with granularity down to the component/feature strata. I am not so sure that is a bad thing. You could certainly shed a few market research types at the OEMs as a result.

I am not an open source zealot, I am just tired of OEMs making money off things which are just standard and commodity nowadays, such as IPMI or remote management consoles, and telling me it is the "monkey's testicles" of awesomeness this time around; it's not, it is a remote management module. Give me the one I have used for 5 years, stop developing it to make me coffee or tea, and do something worthwhile with the money saved.

Attack of the Digital People: The BBC goes fully Bong

Probie

Re: Post-Hutton BBC

Funnily, the last time I looked the BBC was not an outpost nor a sub-department of the "Department for Education". (Quoted because I find it laughable that the department educates.)

Whilst I agree with a level of support for government initiatives, please note the use of "level". In this case five seconds on Newsbeat would be enough.

It is well past time that the BBC went back to, was forced back to, or realized (through epiphany) that its remit is good quality broadcastable programming, and not some quasi-abortion of an Orwellian branch of the government. Or to say it another way:

"FOR GODSAKE, please bring back programing on basic literacy, numeracy and science for children, teach them how to reason and learn and judge for themselves."

Obama criticises China's mandatory backdoor tech import rules

Probie

Bring back the mouse.

Do we remember that time when we needed to jiggle the mouse in a random pattern to generate entropy to then be used for encryption? I would laugh my scrotum off if that was the fallback to this globally. USA, UK and China get stuffed, because we make our own encryption keys from entropy, for EVERYTHING, from the SSH key for a network switch right down to the IoT fridge!!!

Of course it was never random (at best due to RSI), and that does not account for a backdoor.

PS: for everything made in China there is a more subtle answer: they make the hardware, and the firmware and software get flashed somewhere else. After living in China, the USA and the UK, if I HAD to have my data exposed I would rather it be here. Frankly, the skills shortage and poor education over here make decrypting data pointless; safety through ignorance!!! Well, at least until they offshore it.

Nimble gives Mann severe kicking while he's down

Probie

Re: Apology and Clarification

Sorry, but this is too little, too late. Perhaps the Nimble PR folks should have sawn off the plank before the company decided to stick it in its mouth.

Hairy situation? Blade servers can reach where others can't

Probie

Holding you by the balls

What about vendor lock-in? Concepts such as dual-sourced strategies become far harder to implement. Now I am not saying that this is not a problem in the un-bladed world, but IPMI and heterogeneous monitoring and management packages have come a long way and help relieve that problem. I cannot say that is the case for blades and blade enclosures. Although, if you are going to deploy blades for the sake of ease, perhaps it is right to say that you care less about the supply chain and keeping the vendor on his toes.

Also, I truly dislike blades; they are, in my opinion, the union of the worst traits of networking, servers and storage, all birthed in one bastardized package. In today's IT landscape I am pretty sure automation of network and compute and storage using COTS and APIs, which can be manipulated by boot images and/or automation tools (Puppet, Chef, Ansible ...), can provide the easy and prescriptive control you are assigning to blades, with a more flexible deployment pattern, a less complex architecture, and likely less cost as well.

Fair enough I used to work in OCP, but my view on blades was formed well before that. It probably came about when a blade architect said to me "what do you mean you only have 4.5 KW per cabinet?"

Cost-cutting Barclays bank swings axe on 5,600 IT and ops bods

Probie

Re: A mistake surely

I wonder why anyone would willingly work for a major bank in IT anymore. It seems that every two years a big <insert favourite euphemism here> gets swung, and Barclays is always the first in the queue.
