Not TLS?
> Bacs is adopting the new security, called SHA-256 SSL.
Great idea. But dude, if you're going to try and sound knowledgeable at least get it right. SSL (2 and 3) are deprecated per RFCs 6176 and 7568.
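(For the pedants: any maintained TLS stack refuses those dead protocols out of the box. A quick illustration using Python's default context — nothing to do with Bacs's actual stack:)

```python
import ssl

# The stdlib's default context hard-disables SSLv2/SSLv3,
# in line with RFCs 6176 and 7568
ctx = ssl.create_default_context()
print(bool(ctx.options & ssl.OP_NO_SSLv3))  # True
```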
I don't want to totally diss Supermicros at all. But they're always a bit, you know... "GET... ON... THESE... EFFING... RAILSSSNNAAARRRGG! There, phew."
The Supermicro IPMI is clunky but I agree, does the job. The Dell iDRAC license thing does hack me off massively, particularly these days when it's built in to the board. Fair enough when it used to be a separate BMC addon board.
My home kit is always Supermicro.
Wasn't the R720xd the one with 24 drive bays across the front and 2 at the rear?
Is the R730xd the same?
Always liked working with R720s. Apart from when you find that one iDRAC that isn't licensed (or some other bullsh1t) and it's the one you really really need.
But compared to that old cabinet of Supermicros that everyone seems to have, working with Dells is typically great.
Mobile app and NFC sounds intriguing, however I've found connectivity for emergency/personal devices is always a total mission in data centres.
Wish I could do more kit wrangling - anyone got any jobs going? Clicking around AWS/vSphere isn't exactly why I got into computers. Would prefer to play with bits all day (ahem)!
I like Puppet. Massively prefer it to Chef, having found Chef Server to be incredibly flaky on RHEL. Puppet just works. Throw in Passenger and it's quick enough in a 400-node environment. Start getting into loads of exported resources though and it starts bogging down.
Most of the time I find using Puppet modules to be a complete PITA: something as simple as an Apache configuration across 50 web servers turns into 10 separate layers of Puppet/Ruby manifests and inheritance. Take an haproxy configuration: ~700 lines using the Puppet module, with the config structure almost unreadable, versus a simple config file half the size that you can read and compare against the haproxy docs. I hate abstracting this stuff into Puppet/Ruby and moving it away from the project's docs. Most of the time I find it easier to just dump some config files and have a service subscribe to them, still conditional on facts etc.
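i.e. something like this, instead of fifteen layers of module params (a sketch — paths and the template name are made up):

```puppet
# Ship the project's own example config, lightly templated, and
# restart haproxy whenever it changes
file { '/etc/haproxy/haproxy.cfg':
  ensure  => file,
  content => template('profiles/haproxy.cfg.erb'),
  owner   => 'root',
  mode    => '0644',
}

service { 'haproxy':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/haproxy/haproxy.cfg'],
}
```

The config file stays readable against the haproxy docs, and the `subscribe` metaparameter handles the restart.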
Also there's so much snobbery in the Puppet community, "IT SHOULD BE DONE THIS WAY AND I REFUSE TO ANSWER YOUR QUESTION UNTIL..."
> We live in a representative democracy so.. 2020? Slight issue - basically all the parties agree on this issue because they're all equally clueless - not sure what the fix for that is.
>Good news: this committee seems to essentially agree with the sensible arguments against on definitions/funding etc.
>IIRC the Greens are opposed to surveillance.
>The Lib Dems acted as a brake on it in the last parliament.
>Whilst Labour was definitely pro-surveillance under Blair, Brown and Miliband, and most of the current PLP including Burnham are also in favour of it, don't be surprised to see the Corbynites come out against it.
So just in the nick of time then, after the Gov has done whatever it wants. Sigh.
How about people actually voting or vetoing changes in their legislation, with exact diffs presented with plenty of time for analysis and consideration. Maybe a separate legislative cycle every 2 years, or something, I've still got to hammer out the exact details for my plan of Democracy 2.0.
User Blair+Dave+Theresa is requesting permission to merge the following changes from branch "neoconlunatics:snooperscharter2018" into "legislationgovuk:master". Accept or deny?
Sadly for most peons the choice is either vote for who your parents voted for or for who your newspaper tells you you should vote for. Because they've seen the film before and it's sh1t.
So when do UK citizens get a say in all this, considering it affects all their lives?
Or is it more along the lines of... "Don't worry your little head about it, tax payer #154192574. Mummy and Daddy Cameron-May will look after you!"
That's not how democracy works dear boy... See you later at the golf/private supper club what!
From what I've seen of VDI, I'm not sure there's much of a cost saving at all. A team of engineers wrestling with app packaging, registry keys to remove features and customisation, UCS chassis costs, virt host licensing etc. And then Wyse thin client terminals that cost nearly the same as a mid-range PC.
The whole lot comes crashing down, quickly, if there's even a minor blip in AD, DNS, network, storage etc. More hassle than it's worth?
At least all legislation is fully announced in parties' manifestos, with the exact diffs to existing legislation presented BEFORE the election. So the people affected by changes in legislation can make an informed choice between the red or blue Oxbridge blazer-eunuchs.
Or just get elected and do what you want for 5 years.
7 day outage? More like 13 year outage!
I don't know about anyone else but I have found IMAP under Exchange to be pretty much non-functional since at least 2003, when I first did a bit of Exchange wrangling.
In fact it was the reason we canned M$ altogether at the time: being a publishing company with a load of Macs, the Exchange connectivity options were Entourage (if it worked), IMAP or the basic webmail.
Mind you I don't begrudge anyone moving their Exchange into O365. The last thing I want to see these days is on-premise Exchange.
>Sure it is. Worked so well for say Open SSL across 18+ years didn't it?
Eventually found, fixed and disclosed.
Potentially found, exploited and not disclosed for 18+ years.
I'm not sure your suggestion of relying on the vulnerability disclosure policy of governments is a particularly great idea.
Corporates, hmmm, it would be interesting to know how many corporates have audited the source code of SQL Server. Having theoretical access to audit is a long way off having the technical skill set, time and money to do so.
In my experience black cabbies definitely know what they're doing! Aiming straight for the busiest effing roads and batches of road works, just to keep that meter ticking over... 20p... 40p... 60p...
Thank the Lord there is now someone challenging this cartel!
The other night a perfect example. From Fleet St to Westminster, I suggested Blackfriars Rd, St George's Circus, Westminster Bridge Rd, done. "Oh no mate, it'll be quicker this way". No it wasn't, not even close. 20 sets of traffic lights costing about 60p each on the meter.
He even started going off on one about Uber, citing the following grievances:
- "They don't speak English." Good, the last thing I want is my driver speaking to me.
- "Congestion. All these new minicabs causing traffic jams." Well I'd rather 50,000 Toyota Priae dripping water onto the road than diesel Black cabs belching out black soot!
- "They don't know London, they just drive by GPS." Another great thing. I'm fed up of a bloke who did the Knowledge 20 years ago and roughly knows where a place used to be: he gets to the nearest big road then just drives around in concentric circles until he hits the road you asked for.
Black cab drivers do not deserve every single pound they are no longer getting.
> but in london traffic, with added roadworks, marches, accidents, state events etc., i'll take a cabby that knows what they're doing over some part timer with a gps any day
You are right - in my experience black cabbies definitely know what they're doing! Aiming straight for the busiest effing roads and batches of road works, just to keep that meter ticking over... 20p... 40p... 60p...
This is good news. The frequency of Fedora releases has always put me off using it. I tend to use Fedora Server Rawhide (bleeding edge) on the VMs in my home lab because it's a bit more of a rolling release. I don't really see many things break, but switching to a stable Fedora would probably be better.
On my laptop, I still find *ubuntu (I use Xubuntu) or Mint has better hardware support, with less mucking around required to get MTP or Bluetooth etc working on my crazy hardware.
This is a genuine question. The Feds might think they have jurisdiction over AMZN. But knowing how sensitive people are to where their stuff is stored, would having Amazon regions owned by Amazon GBP 2016 Eire Ltd, Amazon 2016 GmbH or Amazon Cayman Holdings etc make any difference? Surely Amazon are doing something similar to that anyway for tax reasons, or some other more beneficial-but-still-legal financial juggling! I sure as hell would.
Or do the Feds/US Govt just assume that because they originated in the US and listed in New York that all international subsidiaries, even if legally non-connected, are part of the parent?
I looked at Persona a few years ago when the Mozilla wiki and bugzilla switched to it. Technically it's quite nice and easy to implement into existing applications. I considered creating an LDAP-Persona bridge in PHP to allow a couple of the organisations I still unofficially sysadmin for to sign in to internal stuff with corp creds, without having to make LDAP available, set up tunnels etc.
The downside to Persona was that it is decentralised. The money-makers of the modern web aren't particularly fond of decentralised.
I ran my own OpenID endpoint (provider?) for a while, which was great apart from the fact that most services just use OpenID for the initial sign-up then keep a copy of your e-mail address etc locally anyway, instead of storing the OpenID endpoint. It seems this is often the case with the sign-in-with-Google/Facebook/Twitter brigade too: you usually end up being funnelled through the process of creating a local account with the third party anyway.
Sadly there's more money in flogging e-mail addresses than OpenID endpoints or OAuth session keys.
One of the nice features of OpenID (possibly in v2 IIRC) was that you set things like usernames, forum nicknames, locations, timezones, avatars and any other metadata locally on your endpoint then the 3rd party service grabs and updates that data when you login.
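(For the curious, pointing an account at your own endpoint was just a couple of delegation link tags in your homepage's head — `id.example.org` here is obviously a placeholder:)

```html
<!-- OpenID 1.1 delegation -->
<link rel="openid.server"   href="https://id.example.org/openid">
<link rel="openid.delegate" href="https://id.example.org/">
<!-- OpenID 2.0 equivalents -->
<link rel="openid2.provider" href="https://id.example.org/openid">
<link rel="openid2.local_id" href="https://id.example.org/">
```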
Edit: I can't believe I spent 10 minutes writing about Persona. I need to get out more.
I see so many DevOps (urrg, horrible word) articles talking about containers like it's the best thing and everything should be containerised because it's, like, so hot right now, yaa.
If it's not got your application code in it then, in my view, it shouldn't be a container. For me the benefit of containers is being able to spin up identical worker instances of an application, rapidly, that can sit behind a load balancer and buzz away and die as quickly as they were born. The idea of enforcing a maximum container age of X hours/days, recycling frequently, is quite appealing too. Granted, many applications would not fit this architecture, in which case don't containerise them.
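That max-age recycling idea is only a few lines of scripting. A sketch of the decision logic (the container names and timestamps are made up — in real life you'd parse them out of `docker inspect --format '{{.State.StartedAt}}'`):

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=12)  # recycle anything older than this

def containers_to_recycle(containers, now=None):
    """Return names of containers whose age exceeds MAX_AGE.

    `containers` is a list of (name, started_at) pairs.
    """
    now = now or datetime.now(timezone.utc)
    return [name for name, started in containers if now - started > MAX_AGE]

# Example: one worker well past the limit, one still fresh
now = datetime(2016, 1, 10, 12, 0, tzinfo=timezone.utc)
fleet = [
    ("web-worker-1", datetime(2016, 1, 9, 0, 0, tzinfo=timezone.utc)),   # 36h old
    ("web-worker-2", datetime(2016, 1, 10, 9, 0, tzinfo=timezone.utc)),  # 3h old
]
print(containers_to_recycle(fleet, now))  # ['web-worker-1']
```

Cron that, `docker stop` the names it returns, and let the load balancer's replacement machinery do the rest.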
I saw the headline and thought, I'm not going to read this as it will get me all annoyed and I'll end up ranting into the comments. Well, here we are.
How exactly are these meters smart? Sending usage data back directly to the provider? Sounds like a smart way to cut the salary bills of the private utility companies, for the benefit of their shareholders.
It's not like these will have fine-grained control over every plug socket and light switch in the house to tune usage. It's an on or off remote control of the electricity stopcock... about as dumb as you can possibly get!
If you want to monitor usage why not just buy one of those gadgets that count the flashing light on most meters. Then ask people to hook it up to their wifi to submit readings over the web to the provider. Ok so not everyone has wifi but enough to ease the meter reading aggro on the provider.
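For the record, the arithmetic those gadgets do is trivial: most meters flash at 1000 impulses per kWh, i.e. one flash per watt-hour, so a pulse count over an interval gives you average power (a sketch — check the imp/kWh figure printed on your actual meter):

```python
def average_power_watts(pulses, interval_s, imp_per_kwh=1000):
    """Average power over the interval, from an LED pulse count.

    Each pulse represents 1000/imp_per_kwh Wh of energy; converting
    watt-hours over `interval_s` seconds to watts multiplies by
    3600/interval_s.
    """
    wh_per_pulse = 1000.0 / imp_per_kwh  # 1 Wh at the usual 1000 imp/kWh
    energy_wh = pulses * wh_per_pulse
    return energy_wh * 3600.0 / interval_s

# 10 flashes in a minute is roughly a 600 W load
print(average_power_watts(10, 60))  # 600.0
```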
Or a better idea: why not just stick with the current system of asking people to submit their own meter readings online? It gives you exactly the same information on power usage as a smart meter.
Does anyone know the actual power usage of these smart meters? Presume the provider will pick up the bill? Or whichever Baroness Sir Something Wotsit decided to spend our money on them?
Better off just rectifying the under investment in critical infrastructure over the last 40 years and buying some nice new nuke stations instead. I know there are some in the pipeline but however many that is, double it.
I see a use for these Windows sticks as always-on family kiosk PC things. Or thin clients for connecting to Citrix Receiver. But this is far too expensive.
I've had a Zotac Zbox Pico for a year or so which sits in the study if any of my users need to do a bit of browsing/cat videos/word processing/Skyping etc. And at £150 that was much more reasonable and has decent enough specs for the job. Zotac have just released a bunch of updated models as well so might be worth a look... http://liliputing.com/2016/01/zotac-launches-mini-pcs-with-high-performance-graphics-up-to-6-displays.html
This is too expensive for media playback as well. I have local media handled by a Pi running OpenElec, streaming and on-demand using an Android TV box (a Minix something, supports Sky Go output over HDMI).
Agree with every word of this.
My favourite is the database schema. A separate database table for each combination of content type and field, inevitably leading to thousands of database tables thanks to point-n-click web designers, installing all manner of modules. Which leads me on to...
A close second is the module overhead of 3+ DB queries per page. A multi-site Drupal environment I saw was using 4 different modules to trim the output length of different page elements. Genius.
@roytrubshaw
> All large sites will use CDN/Varnish/nginx as a matter of course and why on earth would you run PHP without opcode caching?
Of course, any big site would and should be using those.
But you need all that even for a basic noddy Drupal site. An impressive achievement in a way I suppose!
> Pick a PHP framework, and build a website with it. Even better, just build a website with static HTML pages.
100% this! Go bespoke if you're a bigger client needing to manage many sites off a single interface/codebase. Get a Laravel/Symfony dev team together and use Composer libraries where possible. You'll be able to get a decent result in a couple of months.
I'd always advise that rather than trying to wrangle Joomla/Wordpress/Drupal into what you want with hundreds of modules, addons, customisations and dependencies.
I am a PHP fan and I have to say Drupal is an embarrassment really. I have always avoided it but as an operations bod it has been dumped on me a few times.
Baseline performance is average at best but if you actually want it to work as a CMS then you need 100+ modules. Then you quickly end up with 10,000+ file includes and 500+ database queries per page.
Anyone who says they are using Drupal for a high-traffic site is lying. It's most likely Varnish/nginx/a CDN serving the requests with Drupal running as a static content generator behind.
In my experience a reasonably customised Drupal site is not capable of serving more than 10-20 req/s alone. It doesn't even work without APC/OpCache and Drupal's built-in cache enabled (which is stored in the database... seriously). Throw in memcached/redis to replace the database cache handler and you might get 20 req/s on a not-very-complicated site. Want any interactivity and per-user content on any pages then forget it.
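For reference, swapping Drupal's database cache for memcached is only a couple of lines in settings.php (this assumes Drupal 7 with the memcache contrib module; the module path will vary per site):

```php
// settings.php — route Drupal's cache through memcached instead of the DB
// (requires the memcache contrib module and a running memcached)
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// Keep the form cache in the database; it needs to be consistent
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
```

It helps, but as above, don't expect miracles from it.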
No doubt people have pushed Drupal performance higher than that. But factor in the time, expense and server resources required to hit higher levels of traffic and Drupal just is not worth it.
Have been a big fan of GIMP for many years, only using Photoslop for final exports or conversions of CMYK images. I have a decent workflow with GIMP in conjunction with Scribus and Inkscape too.
Progress with the switch to GEGL is a fantastic milestone as I believe that will allow easier integration of CMYK handling at some point in the future. It's been a few years since I dropped into the GIMP IRC to ask about CMYK though, they don't like that question very much ;)
Ok, so it's not as polished as commercial software with a budget millions of times larger. But what an amazing piece of software, that anyone can use, for free. FREE! So just let that sink in before you diss it.
What, no mention of TITSUP?
It's at times like these I spare a thought for the poor operations folks who are no doubt getting glared at by various species of middle-manglement.
As an infra/ops bod myself, only one phrase adequately sums up these moments... brown out.
My advice: get it fixed, then have a lot of --------------------->
Presume the self-styled City of London police have solved all the financial crimes that occur on a daily basis on their patch?
I don't know about anyone else but I've not paid the film/music cartels a penny for over a decade. How? 2nd hand purchases, thanks to everyone's least favourite but monopolistic auction site. And Amazon. Oh and Google. Though I will probably have to start accessing those sites through a proxy soon. Surely they'll be blocked by ISPs soon since they facilitate and monetise piracy on a massive scale.
There's only one thing to do... drown them.
A bit of JS hosted on a CDN that willing website operators can embed into their sites. The JS then makes random(ish) background requests to all those lovely dodgy Islamo-Paedo-Money-Laundering-Terrierist-Torrent websites. Stick that up your ICRs GCHQ!
Ask the people? Are you mad?
That's not how democracy works dear boy. See you later at the golf/private supper club what!
I would like to see a legislative cycle where changes to legislation can only be introduced every X years. Parties would have to present the exact diffs of their proposed changes. The people who are governed by those laws then vote whether to commit them to UK law. At the moment the same old Oxbridge blues/reds have a big party and do whatever they want for 5 years. Not good enough.
> Except they weren't really in power, were they; they were just allowed to sit at the next table so long as they were all quiet and didn't do much. If there was truly an LD government they'd get the same bug as everyone else does.
I don't know. Being able to veto a proposed piece of legislation is more power than you or I have. Good on 'em.