Cat pictures and machine learning and datasets
Machine learning and cat pictures you say.....
(The Edges2Cats section obviously)
Amazed this has not been mentioned, as it's been quite viral this week
Machine learning has become a buzzword. A branch of Artificial Intelligence, it adds marketing sparkle to everything from intrusion detection tools to business analytics. What is it, exactly, and how can you code it? Programs historically produce precise results using linear instructions. Fetch that file, open it, alter this …
You need an AI to tell you what people think of you (you're a greedy ba***rd) or your company (which has no respect for anyone's privacy)? You're not someone from a reality TV show by any chance?
I am curious. 152 layers to do 1000 different classifications. Now obviously "deep learning", or multi-layer NN to dispel some of the mystery around the term, is only loosely a model of human brain function, but how many layers does the brain have? I've seen one suggestion that there are only about seven distinct tissue layers in the prefrontal cortex.
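To dispel a little more of the mystery: a "layer" in these nets is just a weighted sum followed by a nonlinearity, and "deep" just means many of them stacked. A toy sketch in plain Python (random, untrained weights, purely for illustration):

```python
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums, then a ReLU nonlinearity."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(max(0.0, total))  # ReLU: clamp negatives to zero
    return outputs

def make_weights(n_out, n_in):
    # Untrained random weights; a real net would learn these from data.
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

# A 3-layer net: 4 inputs -> 5 hidden -> 5 hidden -> 3 outputs.
x = [0.2, -0.4, 0.9, 0.1]
for n_in, n_out in [(4, 5), (5, 5), (5, 3)]:
    x = layer(x, make_weights(n_out, n_in), [0.0] * n_out)

print(len(x))  # three output activations, one per class
```

A 152-layer net is the same idea repeated, with some extra plumbing (shortcut connections) so the gradients survive the depth.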
AFAICT this is a nice round-up of the development options available for people who want to get into doing something like this.
I'm curious as to how this will deal with fine distinction cases.
For instance, say I have a hobby of collecting images of raw, uncooked sausages (pork especially) and images of a more 'ahem' anatomical nature. I wonder if it will delete everything or just leave the uncooked sausages?
The mind boggles
Show a machine learning algorithm enough sausages and enough willies, with good enough image analysis tools available, and it'll figure out the difference. Make sure you use a range of different-looking examples from both classes, though. You don't want to over-train it on Richmond sausages, or it might start relying too much on pinkness.
There is no actual learning going on, is there? You just set the parameters and it tallies statistics and makes decisions using your parameters. A very simplified statement, as the programming is getting complex and requires some talented humans to do it well, and modern hardware to succeed at it. But calling it machine learning seems a stretch. I guess, like AI, it is a marketing term now. Anything that simulates AI is AI.
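For what it's worth, the "learning" part is that the decision-making parameters aren't set by the programmer at all: the programmer supplies data and a loss function, and an update rule finds the parameters itself. A bare-bones gradient-descent sketch fitting y = w*x on toy data:

```python
# The programmer never writes w = 2; the update rule finds it from the data.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying rule: y = 2x
w = 0.0          # initial guess, deliberately wrong
lr = 0.01        # learning rate

for _ in range(1000):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges to ~2.0 without anyone "setting" it
```

Whether you call that statistics or learning is fair game, but the tallying and the parameter-setting are done by the machine, not the human.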
The article's description of deep learning nets is actually describing convolutional nets, which are typically used in image processing. These use a fixed, smaller set of weights applied in a sliding window over an image (hence "convolutional"). They are one type of deep learning system, but they don't describe the field as a whole.
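The sliding-window idea is small enough to sketch in a few lines: one little set of shared weights (the kernel) is reused at every position over the image, and each position produces one output. A minimal sketch, no padding or stride:

```python
def conv2d(image, kernel):
    """Slide one small set of shared weights (the kernel) over the
    image; the same weights are reused at every window position."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            ))
        out.append(row)
    return out

# A 1x2 edge-detecting kernel on a tiny image: left half dark, right bright.
img = [[0, 0, 1, 1] for _ in range(4)]
kernel = [[-1, 1]]
print(conv2d(img, kernel))  # responds only where the brightness changes
```

That weight sharing is why convolutional nets need far fewer parameters than a fully connected layer over the same image.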
If the Adapteva chips coming back next month work out OK, they look a whole lot better fit for this sort of thing - and possibly five times the TFLOPS per watt of the best GPUs.
And the million core version looks interesting too!
Now all we need is AI that can tell WTF it thinks it is doing...
Oh and you forgot to mention Brian2
Biting the hand that feeds IT © 1998–2020