Microsoft's new 'Adam' AI trounces Google ... and beats HUMANS

The battle for neural-network dominance has heated up as Microsoft has developed a cutting-edge image-recognition system that has trounced a system from Google. The company revealed "Project Adam" on Monday and claimed that the system is fifty times faster and roughly twice as accurate as Google's own DistBelief system. In …

  1. SVV Silver badge

    While my own brain....

    descended to the bottom of this article, undergoing pressures on my sanity similar to those that must have been experienced by the outer shell of the MS submarine, I experienced the formation of a new neural network in my head, from which arose a most staggering conclusion: if they can design self-aware undersea vehicles, why does my OS lock up / crash at random intervals after all these years?

    Or maybe the world's ocean floors are littered with the crushed remains of millions of failed attempts that we don't have any way of knowing about?

    In any case, the rather cute project name is possibly the best trollbait of all time, so I'm not going to dignify it with a response.

  2. jnffarrell1

    Baiting Google into comparing facial recognition technology?

    There is a reluctance at Google to compare faces, or other body parts, with Microsoft engineers. Glass is enough PR hot water for the moment. That said, Google can probably do in the lab what MS is doing so scarily well.

    1. Anonymous Coward
      Anonymous Coward

      Re: Baiting Google into comparing facial recognition technology?

      Tsk tsk tsk - did you not notice that everyone was aggressively avoiding the term "facial recognition"?

      Now you've ruined all that careful PR (and rightly so, IMHO - it was the first thing I thought of, especially in the light of a CCTV-saturated UK which is just begging for such analysis to complete the Panopticon).

  3. Herby Silver badge

    Yes, but...

    Will it play Jeopardy and win? Isn't that what neural networks do?

  4. Terafirma-NZ

    is that where the Bing R&D guys went?

    Um, so if they can do this, and apparently it can do it with text, then why is Bing search still behind?

    1. h4rm0ny

      Re: is that where the Bing R&D guys went?

      Even if you have a better product, it still takes a long time to dislodge an established dominant player. And I think Bing and Google search are only comparable; it's not that one is particularly better than the other.

  5. Steve Knox
    Paris Hilton

    Wait...

    If humans only get the categorization right about 20% of the time, how do we know what the right category for the other 80% is...?

    1. LucreLout Silver badge
      Pint

      Re: Wait...

      "If humans only get the categorization right about 20% of the time, how do we know what the right category for the other 80% is...?"

      Miscellaneous.

    2. Harry Kiri

      Re: Wait...

      Take a photo album (dataset), pull out one picture of each person you know, and write their name on it (training data). Show these pictures to a stranger, then get them to match against all of the people in the rest of the photos in your photo album (unseen test data).

      You know who they are, so you have the right category for the entire album (dataset).

      The stranger has a go and gets it right 20% of the time. You can only know that they get it right 20% of the time if you know the right identity of all the other data in the first place.
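      In code terms, the point is just that an accuracy score only exists because the ground truth does. A toy sketch, with invented names:

```python
# Ground-truth identities for the whole album (you know everyone),
# versus the stranger's guesses on the unseen photos.
truth   = ["alice", "bob", "carol", "dave", "eve"]
guesses = ["alice", "carol", "bob", "dave", "mallory"]

# The 20%-style figure is only computable because 'truth' covers
# every photo, not just the ones the stranger happened to get right.
correct = sum(t == g for t, g in zip(truth, guesses))
accuracy = correct / len(truth)
print(accuracy)  # 0.4: the stranger named 2 of 5 correctly
</imports>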

      In your case it would be less, as inbreeding would cause a high degree of similarity in family photos.

      RIMMERWORLD!!!

      1. Fred Flintstone Gold badge

        Re: Wait...

        That reminds me of an old redneck joke :)

  6. Shane 4

    So this will be the ultimate porn (uhh, I mean research) tool then?

    Who is that hot chick or movie star? Send an uploaded image to this program and, bingo, out pop a few names you can then google for even more porn (uhh, I mean info). /facepalm

    Mind you, I bet most of us would end up using it at some point. It would be quite useful.

    1. Anonymous Coward
      Anonymous Coward

      A sort of Shazam for people, basically...

    2. Anonymous Coward
      Anonymous Coward

      Google Image Search

      Already does this.

      You can give it an image URL and it will spit out similar pics and links, and a name if it can find one. Only needs some optimisation for one-handed operation.

  7. Lionel Baden

    Great

    I can use this to figure out what the damn letters are on the CAPTCHA codes!

  8. Robert Helpmann?? Silver badge
    Childcatcher

    Projection?

    ...like a sudden bout of creative swearing or perhaps going to a window and leering at pedestrians on the street below can give a useful jolt to our own grey matter.

    It would be interesting to see how this is optimized for performance. How much random info leads to better results? What kind of "random" stuff would help? Sports scores? News sites? Facebook? What is the neural network equivalent of cat pictures? Wait, that one's already been done...

  9. grumpy feline
    Go

    Simulated Annealing

    FTW! I presume they are avoiding that term because it would invalidate the slew of dodgy patents that will arise from this.

    1. NP-HARD
      Boffin

      Re: Simulated Annealing

      I think what's being described in the article (the asynchronous approach) is more like a population-based hill-climbing metaheuristic than simulated annealing. Vanilla SA implementations work with only one solution at a time, making them 'synchronous'.
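      For contrast, a vanilla SA loop really is one solution plus a cooling schedule. A toy sketch (the 1-D objective here is invented purely for illustration):

```python
import math
import random

random.seed(0)

def f(x):
    # Invented 1-D objective with a local minimum near x = 3.8
    # and a global minimum near x = -1.3.
    return x * x + 10 * math.sin(x)

# Vanilla simulated annealing: ONE candidate at a time ('synchronous'),
# accepting uphill moves with a probability that shrinks as it cools.
x, temp = 5.0, 10.0
for _ in range(2000):
    candidate = x + random.uniform(-1, 1)
    delta = f(candidate) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= 0.995  # geometric cooling schedule

# Ends near one of the minima; with enough starting temperature,
# often the global one (~ -1.31) rather than where plain hill
# climbing from x = 5 would get stuck (~ 3.84).
print(round(x, 2))
```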

  10. Harry Kiri

    Eh?

    So you have a neural network, which is a generalised classifier. Yes, you can add more layers, but once it's generalising that's pointless, unless you tie each layer to mean something specific. The point about neural networks is you don't know how any of the simulated neurons contributes to classifying any of your feature space. Adding more nodes gives you more degrees of freedom, but you end up with the curse of dimensionality.

    This bit about asynchronous training providing noise, thus enhancing recognition. Eh? Are you sure? If you add noise to training data you blur it, making it look like other classes and damaging your classifier. It also means your classification accuracy is dependent on today's hardware architecture, a poor idea.

    The bit about jumping out of local minima: well, there have been lots of approaches to this over the last 25 years, attaching inertia to your gradient descent so you'd shoot past local minima, conjugate gradient descent, etc. Ironically, if this were working as suggested it would mean the neural network was matching the training data more closely, reducing performance on unseen test data, contrary to the "adding noise to training data makes it better" theory.
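    The 'inertia' idea mentioned above is just classical momentum. A toy sketch on an invented 1-D loss (nothing from the actual paper):

```python
import math

def loss(x):
    # Invented 1-D loss: local minimum near x = 3.84, global near x = -1.31.
    return x * x + 10 * math.sin(x)

def grad(x):
    return 2 * x + 10 * math.cos(x)

def descend(x, momentum, lr=0.01, steps=2000):
    v = 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(x)  # 'inertia': old velocity carries over
        x += v
    return x

print(round(descend(5.0, momentum=0.0), 2))  # 3.84: plain descent stops in the local minimum
print(round(descend(5.0, momentum=0.9), 2))  # with inertia it may shoot past into the global basin
```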

    I wonder when this paper will be published...

    1. Michael Wojcik Silver badge

      Re: Eh?

      This bit about asynchronous training providing noise, thus enhancing recognition.

      That's quite straightforward. Optimization algorithms like Expectation Maximization can converge on local minima (or maxima, depending on what the evaluation function looks like), just as any other local-gradient-following algorithm would. (Think of Newton's method, for example.) Adding noise perturbs the system, making it more likely to jump out of a local minimum if the barrier that forms one side of the trough isn't too much greater than the minimum value itself.

      Of course the same thing can happen with the global minimum if there's a nearby local minimum and the barrier separating them isn't much greater than the local minimum, so the effectiveness of this tweak depends on the characteristics of the curve, as well as how much noise you're adding, etc.

      It also means your classification accuracy is dependent on today's hardware architecture, a poor idea.

      In this case, the noise is due to out-of-order inputs caused by the asynchrony of the partitioning mechanism - a happy accident. The effect really doesn't depend on the source of that asynchrony, just its degree.

      The bit about jumping out of local minima, well there have been lots of approaches to this over the last 25 years

      Sure. Many of those approaches are highly deterministic, though, which means with a large, deep hierarchy of neural networks, you'll tend to train large portions of the net to respond identically to a given input. The noise created (accidentally) by the asynchrony is presumably rather more stochastic, so it might produce less self-similarity across the net. I freely admit that's just a guess - obviously we don't have the paper to read yet.

      if this were working as suggested it would mean the neural network was matching the training data more closely, reducing performance on unseen test data

      Not necessarily; better optimization of the weights for training data doesn't imply overtraining, particularly if the training data would have arrived at a set of local minima far from the global minima without the added noise.
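      That perturbation argument is easy to demonstrate with plain gradient descent plus injected gradient noise, on an invented toy loss (my own illustration of the general idea, not Project Adam's actual mechanism):

```python
import math
import random

random.seed(1)

def loss(x):
    # Invented 1-D loss: local minimum near x = 3.84, global near x = -1.31.
    return x * x + 10 * math.sin(x)

def grad(x):
    return 2 * x + 10 * math.cos(x)

def descend(x, noise, lr=0.01, steps=4000):
    best = x
    for _ in range(steps):
        x -= lr * (grad(x) + random.gauss(0, noise))  # noisy gradient estimate
        if loss(x) < loss(best):
            best = x
    return best

clean = descend(5.0, noise=0.0)   # settles in the local minimum near 3.84
noisy = descend(5.0, noise=15.0)  # noise can kick it over the barrier
print(round(loss(clean), 2), round(loss(noisy), 2))
```

      With no noise the descent is trapped; with enough noise the best loss found should be no worse, and usually much better, because the walk can cross the barrier between the two basins.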

  11. Anonymous Coward
    Anonymous Coward

    Wow, a NEW Microsoft system gets better results than a Google system did 9 months ago. Microsoft chirps happily about it, clickbait journos get a chance to use "trounce", software/hardware slowly improves as it always has... Nothing to see here.

    1. TheOtherHobbes

      Who - or what - do you think wrote Nadella's last strategy email?

  12. Stretch

    Yawn. Bayesian classification. Easy peasy lemon squeezy. http://en.wikipedia.org/wiki/Bayesian_inference

    "distributed implementation of stochastic gradient descent" i.e. it's a small cluster of boxes running a Bayesian algorithm.
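    For reference, non-distributed stochastic gradient descent really is small. A toy sketch fitting a line from one random example per update (data and parameters invented here):

```python
import random

random.seed(0)

# Toy data from y = 3x + 1. SGD updates from ONE randomly chosen
# example at a time: that is the 'stochastic' part. The distributed
# versions spread these updates across many machines; the maths
# itself is unchanged.
data = [(x, 3.0 * x + 1.0) for x in range(-10, 11)]
w, b, lr = 0.0, 0.0, 0.01

for _ in range(5000):
    x, y = random.choice(data)
    err = (w * x + b) - y
    w -= lr * err * x  # d/dw of 0.5 * err**2
    b -= lr * err      # d/db of 0.5 * err**2

print(round(w, 2), round(b, 2))  # recovers roughly 3.0 and 1.0
```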

  13. oldcoder

    Like they said... techniques from the 1980s

    Another MS innovation...


Biting the hand that feeds IT © 1998–2019