18 posts • joined 26 Jun 2010
So if I was first (was Apple even first? Surely Nokia had some crappy pre-Ovi app store) in having a store selling groceries, then I could trademark that and stop anyone else having a 'Grocery Store'?
'App' is not a term invented by Apple - it was used well before cell phones could run 'apps' - and a store is pretty well known as a place that sells items. So it seems very generic to me to call your place for selling applications an App Store... Apple did it better than anyone else and were the first to really integrate it into the phone, but that does not mean they can trademark the name IMO.
@Runcible - I like your VPN hack story...
But you did not get the point of my post - Anon did not claim the hack was hard; they made the same point as me: how did they get into a security company's systems so easily? I'm criticizing the article (not Anonymous), as the author obviously did not even bother to read the Ars Technica article he ripped off and pick up that salient point.
Read Ars Technica again
Did you read the Ars story? Highly sophisticated attack? We are not talking Stuxnet here with multiple zero-day exploits...
They got in through SQL injection - the haxor's best friend, and hardly difficult given the level of knowledge here. HBGary really should not have been vulnerable, especially since they supposedly provide services to test for these vulns!
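To make the point concrete, here's a minimal sketch (names invented for illustration, nothing from the actual HBGary breach) of why string-concatenated SQL is the haxor's best friend, with the parameterized fix shown in a comment:

```java
// Illustration only: attacker-controlled input spliced straight into SQL.
public class InjectionDemo {
    // Vulnerable: the input becomes part of the SQL text itself.
    static String naiveQuery(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    public static void main(String[] args) {
        String attack = "x' OR '1'='1";
        // The WHERE clause now matches every row in the table.
        System.out.println(naiveQuery(attack));
        // The fix is a parameterized query, where the driver handles quoting:
        //   PreparedStatement ps = conn.prepareStatement(
        //       "SELECT * FROM users WHERE name = ?");
        //   ps.setString(1, userInput);
    }
}
```

The classic `' OR '1'='1` tautology turns a lookup for one user into a dump of the whole table - which is roughly how the CMS gave up its user database.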
Using rainbow tables to crack MD5-hashed passwords is not hard when the hashes were not stored properly - no salting or iterative hashing was used to make it difficult.
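A quick sketch of why that matters, using only the JDK (passwords and salts here are made up): unsalted MD5 gives the same hash for the same password every time, so one precomputed rainbow table cracks everybody; a per-user salt plus a high iteration count (PBKDF2 in this example) defeats precomputation and slows each guess:

```java
import java.security.MessageDigest;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class HashDemo {
    // Plain unsalted MD5, the scheme the rainbow tables were built for.
    static String md5Hex(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes("UTF-8"));
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Salted, iterated key derivation: each user needs a fresh attack.
    static byte[] pbkdf2(char[] pw, byte[] salt, int iterations) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(pw, salt, iterations, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                               .generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        // Same password, same MD5 -> one table cracks every account.
        System.out.println(md5Hex("letmein").equals(md5Hex("letmein"))); // true
        // Different salts -> different derived keys, despite equal passwords.
        byte[] a = pbkdf2("letmein".toCharArray(), new byte[]{1}, 10000);
        byte[] b = pbkdf2("letmein".toCharArray(), new byte[]{2}, 10000);
        System.out.println(java.util.Arrays.equals(a, b)); // false
    }
}
```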
They used a flaw in Linux to get root - it should have been patched. Again, not that hard if you know how - it was well documented.
They used social engineering... Well they had control of an email account so would look to almost anyone like the actual owner of said email account. Not quite at the level of the best cold-calling social engineering exploits.
So really... Hard for me or anyone not in the hacking fraternity, but I doubt it would get max points at any hackfest. The main point of the story is how come HBGary were so easy to get into when they are a bleedin' security firm!!
Will this make the news here?
Everyone's heard of chess, and it's acknowledged to require a certain level of intelligence to play - and a very large amount of skill to be really good. However, it does follow a relatively small set of rules, albeit over a very large number of possibilities, so this is not something beyond a computer. Q&A is extremely hard when there is a vast amount of meaning in the question - far beyond chess - but I doubt this will have anything like the effect on the popular imagination that Deep Blue beating Kasparov did. Mainly because no one outside of the US has seen Jeopardy! I had heard of it - but I've never seen an episode.
I suppose I'm saying it's a shame, when this is far harder than Deep Blue's win - I'm not at all sure they will win, but if they do it will be extremely impressive.
Eyes to the right
If it's tracking from behind, then how does it know the distance between the user's eyes? Different people have different eye separations - naturally, and when turning to the left or right relative to the screen. I think there would be some real-life problems with this technology, which I assume has not actually been tried yet.
Maybe better to use a front-tracking Kinect/Move type thingy to see where the user's eyes are relative to the screen. That idea is hereby disclosed on the Reg ;-)
"So yeah, rainforests will logically do fine."
Good for the rainforests on a geological scale, and assuming there are not lots of pesky humans chopping them down, yes. Did that 5C warming and 1000ppm CO2 happen quickly or over thousands of years... Hmmm, let's think?
How about good for our rainforests in the positions they are in now? Such a large global temperature change will surely massively affect the precipitation in rainforest regions? (Plus many other factors that might equalise to happy rainforests in a few millennia.) I am no expert, but a little common sense applied seems to change this from 'global warming is good for the environment' to 'err, wait a moment, let's think about it without the global warming blinkers on' (either kind).
And yes, the ripples of the Global Warming conflict continue to propagate in the medium of a million commentards' lack of logic.
I was optimistic and thought this was going to be a 'Government listens to scientists' shock story, but no, it's a 'Government pays over 2 million to prove bears shit in the woods' story :-(
What is the point of this product?
I'm struggling to see it - other than solving a problem that existed a good few years ago. With either a modern JVM or a decent design you would never need this app.
For example (use the gencon GC!) -> http://www.nearinfinity.com/blogs/aaron_mccurry/tuning_the_ibm_jvm_for_large_h.html
quote --> " finally settled on 100G heap with 50% a nursery, and the full GCs are now in the 400-600ms range. I can live with that, because this gives us a huge ceiling for load, and capacity."
This was two years ago... There will have been some performance improvement since then.
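For the curious, a rough sketch of the kind of IBM JVM options the linked post is describing - generational-concurrent GC with a big heap and half of it as nursery. The sizes and jar name here are placeholders, not a recommendation; tune for your own workload:

```shell
# gencon = generational concurrent GC policy on the IBM JVM
java -Xgcpolicy:gencon \
     -Xmx100g \
     -Xmn50g \
     -verbose:gc \
     -jar yourapp.jar
```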
If you are doing something daft on your 'social networking' website, or whatever the proposed use cases are, and caching vast amounts of data or using humongous session objects, then I'd suggest using a Java key-value object grid or some such.
or an appliance that manages a 160GB cache per box ->
... As you may have guessed, I'm an IBM drone so only able to propose solutions using our products, but I'm sure a quick google will provide a myriad of free open source options (like memcached - not Java, but plenty of clients).
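The idea those products all build on can be sketched in a few lines - a toy in-process key-value cache with LRU eviction (my own illustration, nothing product-specific). A real object grid or memcached adds distribution, replication and eviction across machines on top of this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Tiny LRU cache: least-recently-used entries fall out once full.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // access-order mode: get() refreshes recency
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the stalest entry when over capacity
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("session1", "data1");
        cache.put("session2", "data2");
        cache.get("session1");          // touch session1 so it is most recent
        cache.put("session3", "data3"); // evicts session2, the least recent
        System.out.println(cache.containsKey("session2")); // false
        System.out.println(cache.containsKey("session1")); // true
    }
}
```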
I think not - they always used BigTable in conjunction with MapReduce. What they have developed is a less batch-driven version of their indexing MR jobs, and adopted some fancy new tech that we do not know a lot about yet, apart from that it indexes faster...
Does that mean MR is inherently poor for data analysis? No - it means it's not up to the job of creating the large index Google needs very quickly to keep their results fresh. Batch-analysing large data sets is still a good use case for MR, and other large corporations such as IBM, HP etc are investing heavily in it.
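For anyone hazy on the model: the MapReduce shape is map, shuffle/group, reduce, run batch-wise over a whole data set - which is exactly why it suits bulk analysis better than keeping an index continuously fresh. A single-process sketch of the classic word-count example (no Hadoop, just the three phases spelled out):

```java
import java.util.*;

public class WordCountMR {
    public static Map<String, Integer> run(List<String> docs) {
        // Map phase: emit a (word, 1) pair for every word in every document.
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String doc : docs)
            for (String w : doc.toLowerCase().split("\\s+"))
                pairs.add(Map.entry(w, 1));

        // Shuffle phase: group the emitted values by key.
        Map<String, List<Integer>> grouped = new HashMap<>();
        for (Map.Entry<String, Integer> p : pairs)
            grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>())
                   .add(p.getValue());

        // Reduce phase: sum each key's values.
        Map<String, Integer> counts = new HashMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet())
            counts.put(e.getKey(),
                       e.getValue().stream().mapToInt(Integer::intValue).sum());
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> c = run(List.of("the cat", "the dog"));
        System.out.println(c.get("the")); // 2
    }
}
```

The real thing runs the map and reduce phases in parallel across thousands of machines and re-runs the whole batch each time - fine for analysis, too slow for minute-by-minute index freshness.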
Quite often it's a good thing to point out to customers when they are wrong. If done in the right way you can gain credibility as well as design a better system. All too often it seems IT services company X will build whatever crappy system the customer thinks they want (or in reality what the services company thinks the customer thinks they want), with disastrous results.
Pitch to the customer that, hey, you think you need X to meet your requirements, but we have a solution that meets those requirements without it and is cheaper. Then prove it with PoCs etc, and the work should be yours, assuming there is no bias. In this case it seems a decision was made, possibly before tendering even (probably on a golf course), and they just refused to listen. If I paid tax in the US I would not want to see my politicians pissing money away on overly complex solutions - so Google are right to sue.
In terms of not getting repeat business - as other people have stated, it *should* be different with a government. The reality is, if they are not wining and dining the right people then I doubt they will get far.
Saying the internet is a waste of time because some bored IT workers spent time staring at screens, rather than congregating around the watercooler or whatever people did when they were pretending to work before the internet, is arse. So all the trillions generated by online commerce and bank transactions, and all the hardware and software to support them, is nullified by a few bored people staring at screens instead of x, y or z? (Whether you see it as worthwhile or not, systems like CLS's that handle a million transactions a day, at a value of 10+ trillion a day, are not insignificant and would not exist without the internet.)
Probably one of the daftest and biggest 'misses the point entirely'isms I've ever seen on the Reg, and I've seen a few :-)
Although I partly agree with the analysis on social computing: helping people and businesses communicate was what made the internet so important. Social computing provides a similar degree of revolution on the surface, but it will be very gradual and not have anywhere near the overall impact IMO.
Interesting article - shame it appears to have been OCR'd, or copied in from a hard copy by a very tired hack? Weird errors in the text...
The last para is very true: availability is about processes and people. Almost all the examples I know of where a large (well-designed, redundant) system went down are ones where a person made a mistake. Having a service delivery organisation that is continually trying to improve its processes is key, and pretty rare.
Seems we are arguing on the same side now - availability is always possible with cheap Windows or UNIX boxes. And you are right to say it's a function of the reliability of the underlying components, to a degree, although designing the system to be parallel will make for a very highly available system even with unreliable components.
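The arithmetic behind that claim is worth spelling out: with N independent replicas where any single survivor keeps the service up, system availability is 1 - (1 - a)^N. A back-of-envelope sketch (the 99% figure is made up for illustration):

```java
public class Availability {
    // Availability of N independent replicas where one survivor suffices.
    static double parallel(double perNode, int n) {
        return 1.0 - Math.pow(1.0 - perNode, n);
    }

    public static void main(String[] args) {
        // One mediocre 99%-available box vs three of them in parallel.
        System.out.printf("1 node : %.6f%n", parallel(0.99, 1)); // 0.990000
        System.out.printf("3 nodes: %.6f%n", parallel(0.99, 3)); // 0.999999
    }
}
```

Three unreliable boxes beat one very reliable one - provided the workload can actually be spread across them, which is the whole caveat below.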
This is not the space the mainframe operates in -> mostly the workloads cannot be distributed, so the reliability of the underlying components is very important when you have one very large box in your primary datacentre and another parallel-sysplexed into your backup DC. You would never put such a workload on one Windows box in a DC - even if it could handle the demands, it would never be reliable enough. You no longer seem to be arguing against this point...
The physical storage MTBF is a pointless discussion, since the disk is not in the mainframe but would be FICON'd in from an attached SAN. The same SANs (DS8000s for example) are used with pSeries and Z, so it's a different question. It's also an example where the sub-system is designed to be highly available even when the components it is made up of are not - so an analogy there to the overall discussion of reliability vs availability...
I won't even bother with the software side of things... What Windows server has components with an MTBF of decades under continuous heavy use?
Five nines availability over what period - a day, a month, a year? We are talking about decades here, and I'm saying mainframes, not just IBM mainframes (OK, so there is a bit of a large bias towards IBM in the market!). A client I work for has a Unisys mainframe, for example, that has been running non-stop for over 15 years. These are not systems made from off-the-shelf motherboards where cheap labour pops a chip or memory into, say, a Dell Wintel server. Redundant power supplies, memory, IO - everything you can think of is built into a mainframe, and built in by engineers in a clean room. OK, so this example is IBM-only: the eFuse technology even allows bits of a CPU to be re-routed when they start to fail, to avoid system failure.
No amount of configuring is going to make a Windows box anywhere near as reliable as a mainframe. I think, though, you have confused availability with reliability in your last paragraph - I mentioned before I would not deny that any well-designed distributed system running Windows, UNIX or any other OS or hardware you fancy could be 100% available, as long as you design for failure and the workload is suited to being distributed.
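Since "N nines over what period" keeps coming up: the conversion to permitted downtime is trivial, and it shows why decades of uptime are a different league from a good month. A quick sketch (my own arithmetic, nothing vendor-specific):

```java
public class Nines {
    // Downtime budget per year implied by "N nines" availability.
    static double downtimeMinutesPerYear(int nines) {
        double availability = 1.0 - Math.pow(10, -nines);
        return 365.25 * 24 * 60 * (1.0 - availability);
    }

    public static void main(String[] args) {
        // Five nines allows roughly 5.26 minutes of downtime a year, so a
        // box meeting it for 15 years straight has barely an hour and a
        // quarter of total outage budget across that whole span.
        System.out.printf("%.2f%n", downtimeMinutesPerYear(5)); // 5.26
    }
}
```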
Hmm, IO is not near native speed, and since this is a mainframe we are talking about, it's pretty darn important... Which is also where a lot of the R&D goes - making sure the IO paths are all blazingly fast.
For that matter, how fast would all the mainframe-specific CISC instructions run emulated on a RISC CPU... Not very, would be my guess.
Maybe leave the solutions to the pointy-haired IBMers in future.
Rock Solid Windows!
Mainframe reliability -> "...you can get pretty close to it on a well-configured Unix/Linux or even Windows box, if you understand what you're doing"
lol! What the f* do you do to a Windows box to make it as reliable, or 'pretty close', as a mainframe! Unix may be more reliable, but the MTBF for the components in any of those servers is not in the league of a mainframe by miles. If you come up with the Chris Miller method of making cheap Windows and Unix machines that reliable, you will be a billionaire in no time. Getting anywhere near mainframe levels of availability for an app on Windows or UNIX requires a distributed design - not one box, which is where the mainframe usually sits: it's a workload that cannot be distributed (or not easily, anyway).
Your statement about third-party drivers shows you have little understanding of how reliability is designed into z/OS. There are plenty of shitty bits of code running on z/OS - but unlike on Windows, and to a lesser extent UNIX, they could *never* take the system down.
They said it reduced methane production - a fart is mostly hot air, so you can easily fart more but produce less methane.
• On average, a fart is composed of about 59 percent nitrogen, 21 percent hydrogen, 9 percent carbon dioxide, 7 percent methane and 4 percent oxygen. Less than 1 percent of their makeup is what makes farts stink.
• The temperature of a fart at time of creation is 98.6 degrees Fahrenheit.
• Farts have been clocked at a speed of 10 feet per second.
• A person produces about half a liter of farts a day.
• Women fart as much as men.
• The gas that makes your farts stink is hydrogen sulfide. The more sulfur rich your diet, the more your farts will stink. Some foods that cause really smelly farts include: beans, cabbage, cheese, soda and eggs.
• Most people pass gas about 14 times a day.
Real world gc
Err, real-world GC chokes at 2-3GB... I think not - I know of a number of production systems that use 64-bit Java on the IBM JDK with heaps far larger than that. Not quite 96GB heaps, but with gencon you can have humongous heaps and still have fast GC.
This bloke did some tests with a 100GB heap and got GC times of 400-600ms - http://www.nearinfinity.com/blogs/aaron_mccurry/tuning_the_ibm_jvm_for_large_h.html