Not familiar with Daniel Morden; I'll check him out. Daemon by Daniel Suarez provides a fairly apocalyptic view as well.
True, but if I was to hire Tigerswan to review and advise on security, I'd expect any report to include something like "define and implement appropriate security controls for the sharing of data with third parties, and for ensuring the third parties demonstrate compliance". That they didn't enforce that is pretty dismal.
One of my stock interview answers goes as follows:
Interviewer: How much do you know about x?
Me: Not a lot, but ask me again in a few days and you'll get a different answer
Then, following the interview, read up on the topic, discern some pertinent points or relevant bits, then mail the interviewer (via the agency if necessary) with something that shows your understanding and enthusiasm.
Free training is everywhere these days, from overthewire.org to aws free tier. Write blog posts, make github contributions, use social media. Experience isn't purely from the job itself.
So, my understanding is that Marcus posted his PoC code showing some form of malware API hooks publicly, then a month later, much to Hutchins' public surprise, it turned up in Kronos, heavily adapted to weaponise it and turn it into something useful. According to the Kronos analysis on Malwarebytes:
1. The API hooks had been shown previously, suggesting that both Hutchins and the Russian-speaking Kronos author had lifted the concept from elsewhere
2. The other common factor was use of a particular lock instruction
3. Kronos is quite different from Hutchins' code, adding an extra layer of difficulty by using shellcode instead of a PE file, combined with some counter-surveillance and anti-detection techniques
So, the FBI appear to be gambling a couple of decades of gradually-fostered goodwill between white-hats and the authorities on the use of a single command, to try and show intent of financial gain by a guy who donated his $10k wannacry bounty to charity. Uh yeah, good luck with that.
Regardless of what happens, why would Hutchins collaborate with the NCSC again?
Weirdly, Malwarebytes goes so far as to patronise Hutchins by declaring that Kronos is the work of a 'mature malware author', rather than an 'experimenting youngster'. Sort of a backhanded exoneration, if you will.
I recently applied for a cloud-focused role, and agreed with the client that there is more to the cloud than AWS, Azure and Google. When I thought about it, I realised I had more cloud experience than I thought, having also worked with Quivox, Trusteer, Threatmetrix, Websense, Diligent and a few others. None of those have a compelling alternative in the big three providers.
Perks indeed. When I was in the RAF (based at Honington in Suffolk), we had a week's firing practice arranged in Altcar barracks in Liverpool. I was seeing a girl in Liverpool at the time, and my parents' home was in Manchester, a short trip away. At the end of the week, I planned to depart the barracks to do my own thing locally for the weekend before travelling back to Honington on Sunday night. This was refused on the basis that the unit should always travel as one, to preserve the integrity and morale of the squadron, and to not be seen as affording any perks. Thus I had to travel the 235 miles back on the coach, only to have another 235 miles to travel back again, then a third time before Monday morning (I simply postponed that visit).
I suspect I'm not much more knowledgeable about this than yourself (also no account), but my understanding is that there is no entity who can stop the transaction or freeze funds. Bitcoin is based on the blockchain, which, IIRC, is simply a distributed ledger. So instead of a bank keeping a record of accounts, transactions and balances, there are multiple peer copies of the ledger around the world, each recording the transactions that take place on the blockchain, but with no 'owner' or manager of the ledger. (Think of the internet itself, and of how you smile knowingly when someone hysterically demands for the internet 'to be switched off'!) The whole point of the blockchain is that because each transaction is verified against multiple copies of the ledger, and every copy needs to agree with the numbers, there is theoretically no way to stop or undermine it. The fact that the blockchain cannot therefore be defrauded is the irony in this criminal use of the technology.
I *think* all transactions themselves are publicly visible (hence the bot watching the account for the cash withdrawal), but the accounts are numbered - no names, proof of ID or any of that, so even if you can see the funds, you can't attribute them. The trick is how you would follow the laundering process to turn those bitcoins into 'clean' money that can be spent legally. Given, though, that money laundering is a skill set in itself, pursuing that is a whole other story.
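The tamper-evident, append-only idea behind the ledger can be sketched with a toy hash chain. (Purely illustrative, not real Bitcoin code - the real thing adds proof-of-work, peer consensus and much more.)

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    """Append a block linked to the current tip of the chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev_hash": prev, "transactions": transactions}
    block["hash"] = block_hash({"prev_hash": prev, "transactions": transactions})
    chain.append(block)
    return chain

def verify(chain):
    """Any copy of the ledger can independently re-check every link."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash({"prev_hash": block["prev_hash"],
                                        "transactions": block["transactions"]}):
            return False
    return True

chain = []
add_block(chain, [{"from": "addr1", "to": "addr2", "amount": 0.5}])
add_block(chain, [{"from": "addr2", "to": "addr3", "amount": 0.2}])
print(verify(chain))   # True
chain[0]["transactions"][0]["amount"] = 500  # try to rewrite history
print(verify(chain))   # False - the stored hash no longer matches
```

The point of the chaining is that altering any historic transaction invalidates every hash after it, so all the peer copies would immediately disagree with the doctored one.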
Except the cloud means you don't have to worry about large capex. If the boss is ok with the monthly cloud bill, you're good to go.
Except you don't need to worry about maxing out on resources. Compute, storage, bandwidth. Want more? Give me 15 minutes.
Except you can try deploying something without persuading the director that it's a good idea to get the sales guys in and go through the RFI/RFP process, only to find that it doesn't suit your environment.
Except the minute you decide that your current effort is chasing down a dead-end, just delete the resources and stop paying for them.
Quite a difference, then.
As I understand it (and that's not very well), Kubernetes enables you to orchestrate container fleets.
Example: My new web service will, rather than be an evil single monolith of code, be broken down into 4 microservices, relating to user account, products, checkout, order history. The typical capacity I want necessitates running 5 web servers. I need each microservice to be highly available, and not tied to the infrastructure. So, I host the microservices in containers (Docker, in this case), then deploy an instance of every one to each server. Suddenly, I've got 20 containers running (4 services x 5 hosts).
Which containers are serving which microservices?
If one (service or container) dies, what needs to happen for it to get regenerated?
What happens when I update the code for one or more of the services?
How do I easily scale the number of required containers, or instruct them to deploy to new hosts?
Enter Kubernetes, which if we're being old-school is cluster management for those containers.
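The answer to those questions is essentially a desired-state reconciliation loop, which can be sketched in a few lines of Python. (A toy illustration of the idea; names and structure are mine, not the real Kubernetes API.)

```python
# Toy reconciliation loop - the core idea behind a Kubernetes controller.
# Desired state says how many replicas of each microservice should run;
# the loop compares that with what's actually running and fixes the gap.

desired = {"accounts": 5, "products": 5, "checkout": 5, "history": 5}

def reconcile(desired, running):
    """Work out which containers to start or stop so that
    running matches desired."""
    actions = []
    for service, want in desired.items():
        have = running.get(service, 0)
        if have < want:
            actions.append(("start", service, want - have))
        elif have > want:
            actions.append(("stop", service, have - want))
    return actions

# A 'checkout' container has died, and 'products' was over-scheduled:
running = {"accounts": 5, "products": 6, "checkout": 4, "history": 5}
for action, service, count in reconcile(desired, running):
    print(f"{action} {count} x {service}")
```

Scaling or updating is then just editing the desired state and letting the loop converge - which is also how a dead container "comes back" without anyone doing anything.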
*Watches someone else comment to shoot this down in flames...*
A few years ago I would have said car washing, but those hand-wash outfits have made a comeback* due to being better than monster rolling brushes and easier than jet-washing it yourself.
I went on a tour of the BBC, and was surprised to find no camera operators in the newsroom. The fairly routine (hence automate-able) pan/zoom etc on static studio occupants means that in an early-morning newsroom, the newsreader(s) are usually alone.
*May soon be reversed due to Brexit and the non-indigenous accents of most of the guys working there
It's simple, just take your CPU count, correlate with the relevant pack for the product, version and subversion, multiply by the number of sausages that can be powered at any one time (but divide by 0.75 if on Windows), minus the inverse square root of god's dad's boss's dog's inside leg measurement. Then write the number down on a wooden broom handle and shove it where the sun doesn't shine. Sideways for maximum effect.
Sure there is; reserved instances. The cheapskate spot-priced instances are for when you aren't bothered about the 'right now' aspect. One of their case studies is a pharma biz that uses spot pricing to run big bio simulation algorithms or some such. Comes online at quiet o'clock, stateless in case of being booted off halfway-through, reckons they save oodles.
"turn them all on in case of emergency"
I came home one night to find evidence of a break-in. I stalked around the house wielding a bottle of wine I'd just won in the pub quiz, switching lights on as I went. Once upstairs, I was down to checking the last couple of rooms when everything went dark. All I could think of was Private Hudson "What do you mean, "they" cut the power?!"
The fuse had blown.
It's not unusual. When I worked (perm) at a startup, the CEO would regularly ask us to ask ourselves "is what I'm doing worth my cost to the company?", where cost to the co was assumed to be 2 x gross salary. My ltd co revenue is less than 2 x what I could earn perm.
This thread is a perfect pub conversation (except for lacking alcohol, crisps, and ogling the barmaid). It's got science, beer, rum, coke, cheese, taste, Douglas Adams, religion, a quote from Aliens and someone saying that twins have different tastes so genetics is bunkum. (As the father of twins, I can testify you have a point).
Shame it's a Monday morning.
Whenever I work on one of these legacy projects, getting sniffers on or router output is like pushing an elephant up the stairs. The reluctance and hoarding of info (I'm looking at you, network, firewall and security teams) is like treacle.
The irony is that, once in the cloud, the right call to the right place would get me the appropriate credentials in AWS or whatever, so I could find it myself. Except as the author points out, it's all on 80 and 443 anyway.
I think the author misses the point slightly, and states the bleeding obvious. Of course, any implementation with any value should be resilient, but each AWS region includes multiple availability zones, each containing multiple datacentres. Any deployment with resiliency across those *should* be resilient, period. Amazon make the point that replication within a region delivers HA/DR, is fast and free. Replication between regions adds complexity, is slower (as it's over the public internet) and costs, if only because one of the core tenets of cloud is paying for data egress out of the source.
To put it another way, how many of you have your on-prem DCs spanning different regions? Off the top of my head, I would have to go back 7 jobs to find a place that did, and most of those are big enterprise shops.
I think one possible takeaway is that Amazon's position that you don't need to deploy into multiple regions is now called into question. If I worked there, I'd be pushing for a new service in the form of direct connectivity (not via internet) between regions, with a lower price point for data transfers. AWS do offer this kind of connectivity to customer sites, but presumably anything between regions would be a fat pipe not specific to any single customer.
Alternatively, perhaps the fact that so many orgs tried to all failover at once is key, in which case maybe AWS needs to review its provisioning/overcommit policies.
"Deploy in the cloud by all means but still backup, replicate, ensure that you don't have a single point of failure."
Unfortunately, that is what they've done. This fault affects a specific region, and each region contains multiple availability zones. Each zone constitutes a logical datacentre, comprising multiple physical datacentres (between 3 and 6 in each AZ, I believe). Deployment across two or more AZs in a given region *is* removing the single points of failure. Supposedly. Didn't work this time.
AWS don't particularly recommend deploying across more than one region, because each region is effectively a completely different cloud, common in branding, usage etc, but connected only via the public internet. Replication between zones within a region is fast and free, but replication between regions is slower and costs.
Ultimately though, a well-designed AWS deployment, consisting of all the fault-tolerant bells and whistles, still has no upfront cost and is thus way more achievable than doing it on-prem. Said bells/whistles will make nuclear outages like this the cause of what rare downtime you do get.
I've been thinking about the language aspect. I suppose one of my next work study things will, rather than focus on cloud, security, blah blah etc, be a language, in the traditional sense rather than programming. I figure the likely work options (i.e. common choices for firms to relocate to from the UK) are Dublin, Paris, Amsterdam or a choice of German cities. Dublin will obviously be fine, and having worked in the Netherlands and seen the prevalence of English, I think the same applies in Amsterdam. Looking at the French and German economies and industries, if I were a betting man I'd say that makes learning German the best bet - anyone got any thoughts?
Where does objective critical journalism stop and pessimistic nonsense start?
""The media does have a responsibility not to give more weight to the pessimists and technophobes than is warranted – even if doing so generates more revenue," conclude the report authors, who seem to hate the idea of journalists getting paid for telling the truth."
I could imagine some clown writing this bit for the Daily Mail, before going off on this week's sanctimonious crusade.
Lots of good points here, but I would add that in the last few places I've worked, the datacentres and comms have come bundled with a list of risks as long as your arm. X isn't resilient, but we aren't paying £y million to fix it. Z hardware is end of life, but no-one can be bothered to pay to upgrade it. Datacentre A is running low on space and power. Datacentre B now has a business that stores flammable materials next to it. With the cloud, all this is not our problem. Add that to the beancounters' ability to map actual use/benefit to cost via the PAYG model, and there's a lot that's attractive about the cloud.
I don't know about other providers, but AWS generally charges for download but not upload. I use this fact (along with the pay-as-you-go charging on storage) to encourage more efficient ETL and reporting. If they only store what they need to, and only spit out/download what they need to see as an end result, it's cheaper. This is in contrast to an on-prem EDW (for example), where some central project has bought/delivered the warehouse, and individual business projects don't care about efficiencies because they aren't paying for that big row of Teradata kit etc.
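A back-of-envelope sketch of why that incentive works. (The per-GB prices below are placeholders I've picked for illustration - check the provider's current price list; the point is only that ingress is free while storage and egress are metered.)

```python
# Assumed placeholder prices, NOT real AWS rates:
EGRESS_PER_GB = 0.09    # $/GB out to the internet
STORAGE_PER_GB = 0.023  # $/GB-month held

def monthly_cost(stored_gb, egress_gb, ingress_gb=0):
    """Ingress deliberately costs nothing in this model."""
    return stored_gb * STORAGE_PER_GB + egress_gb * EGRESS_PER_GB

# Naive ETL: land everything, download full extracts for reporting.
naive = monthly_cost(stored_gb=5000, egress_gb=2000)
# Trimmed ETL: filter at load, pull out only the summary rows.
trimmed = monthly_cost(stored_gb=1200, egress_gb=150)
print(f"naive: ${naive:.2f}/month, trimmed: ${trimmed:.2f}/month")
```

Because every project sees its own line on the bill, the saving from storing and extracting less lands on the people who can actually change the ETL - unlike the shared Teradata row, where efficiency is someone else's problem.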
I just got rid of a 2014 5 series on Friday. I now cannot log in to the app on my phone.
Also, even if I managed to log in, the app checks the car's location. If it's more than 1.5km away from the phone, it refuses to provide any info 'for privacy reasons'.
The only hidden nugget on that car is Faithless's The Dance on the hard drive.
From AWS Cloud Best Practices:
"Be a pessimist when designing architectures in the cloud; assume things will fail. In other words, always design, implement and deploy for automated recovery from failure. In particular, assume that your hardware will fail. Assume that outages will occur. "
Customers aren't paying for an infrastructure that does not fail - they are paying for things like elasticity, parallelism, and the transfer of capex to opex.
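"Design for failure" in practice starts with things as simple as retrying transient errors rather than assuming the first attempt succeeds. A minimal sketch (illustrative only - real deployments would add jitter, timeouts and health-checked failover across availability zones):

```python
import time

def call_with_retries(operation, attempts=4, base_delay=0.5):
    """Run operation(), retrying on failure with exponential backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure
            time.sleep(base_delay * 2 ** attempt)

# A service that fails twice before recovering:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient outage")
    return "ok"

print(call_with_retries(flaky))  # ok, after two retried failures
```

The same "assume it will fail" stance scales up from a single call to whole components: stateless instances, queued work, and automated replacement of anything that dies.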
Hey Beornfrith, thanks for sharing your story and being open. Although IT is generally conducive to working from home, one of the problems is getting your foot in the door first. So for example, I've never been able to work from home for the first few weeks of any new role, at least. This is partly to get the new recruit up to speed with the role, partly so others get to know you so you can all communicate when not face to face, and partly so the manager trusts the new guy, I suppose.
Maybe it would help if you offered the first x weeks of work for free? Hopefully, this wouldn't impact on your benefits, while allowing an employer to get comfortable with the idea of you working remotely, since from their perspective, they would have little to lose from giving you a trial?
Or, how about this. Learn something like dotnetnuke, develop a couple of sites (you'd need one for yourself anyway), then try finding work on 99designs or fiverr or whatever. Admittedly this could affect your benefits if you earned money one week but not the next.
As much as the following sounds a bit iffy, you could try playing the system:
1. Set up a limited company with your wife as sole director.
2. Said company then touts for work on (for example) 99designs, as above.
3. A job comes in, which is fulfilled by an unpaid volunteer (guess who?!)
4. Money is paid to the limited company. Paying that out to your wife would incur tax, but that's ok, because at least you've earned money to be taxed on in the first place, rather than losing benefits and having nothing at all in its place.
5. You might reach a point where there is sufficient money in the company to take on an employee (again, guess who?!) at which point you drop the benefits and take a salary.
And all this time, your wife isn't doing much for the company, so no extra work for her. You get to use your brain, extra money comes into the house, and if it doesn't work, you've still got the benefits to fall back on.
*You* think you've nothing to hide.
Or, as per my conversation with a colleague:
Me: "Do you mind me knowing you're Jewish?"
Me: "Ok, it's 1939, we live in Germany and I just joined a far-right political group, now do you mind me knowing you're Jewish?"
The point is, he hadn't done anything different, or "wrong" - it was the watcher - a hypothetical me - that was dodgy.
So mail comms are collected if the sender or receiver is overseas. If you wanted to talk to some ne'er-do-well overseas about nefarious stuff, that sounds like something you could bypass.
You (baddie 1) write message in UK and commit to disk
Replicate stored data via block-level replication to overseas data source
Baddie 2 looks at replication target disk on the other end, reads message, replies and commits to disk for replication in the other direction.
What are the chances that an encrypted block-level disk replication would be intercepted, read, and the deltas from multiple replications compiled into a legible text string? Given the resource constraints and bureaucracy evident here, I wouldn't expect it.
I expect there's a bunch of other ways to do it too.
That was cool. The steak and the shrimp étouffée were good. The two HP engineers assigned to my project didn't know each other, we all got on great, but as per my colleague's (from Kansas City) advice, I was extremely careful around politics - their very own George W was POTUS at the time.
There was, however, an odd moment where one of the HP guys recalled a childhood memory (as a 15-year-old) of standing in the back of his dad's pickup at 40mph offroad while simultaneously wedging his legs in the bars behind the cab and wielding a rifle one-handed trying to shoot a fleeing deer or some other poor beast. Properly mental stuff. Only for the other engineer to exclaim that he had almost the same experience in his own childhood. No, seriously, I'm not even joking. The only reason these guys weren't wearing sidearms was because HP had a company policy of no weapons on site.
I know the Custom Support Agreement provides hotfixes and updates (although only critical ones, and that's as judged by MS), but does the CSA provide continued access to tech support - given that you need a premier support agreement in order to purchase a CSA? Or has 2003 tech support been killed as part of this?
Hey folks, not sure if this should be here or in 'Consuming Passions', but here goes:
I'm moving to a house where the previous owner installed some fancy ceiling speakers for multi-room audio. Assuming that these are patched to a central point, can anyone suggest an appropriate music system? I'm thinking of a server that can see my iTunes library and spotify, and maybe rip CDs for example, and some sort of client in each room to choose the locally-played music. It would also be handy if there was an ios app version of the client.
I believe he wired cat5 throughout as well, but I guess wireless clients would be more user-friendly.
I don't know if he patched the lounge speakers into the AV for integration with the TV, but that functionality might prove handy.
Any advice appreciated.
Biting the hand that feeds IT © 1998–2019