The "Object detection" project
The hunt for image descriptions has begun! Amazon should either give the cameras away for free or share the proceeds.
While it's fair to say Amazon Web Services' upgraded DeepLens AI camera is essentially a mini PC with a cloud-connected HD camera, it also has many slick features. The gear is now available in Europe – the first version was US-only. The new edition ships in seven new countries, including the UK. DeepLens is …
With a large enough volume of manually annotated (labeled) pictures, Amazon can already build a system that automatically annotates newly incoming images by analyzing graphically similar ones. At the same time, Amazon is likely to face two problems, the main one being lexical noise.
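How such annotation-by-similarity might work can be sketched as a nearest-neighbor label propagation over image feature vectors. The feature vectors and labels below are made-up stand-ins; a real system would use embeddings from a trained vision model.

```python
import numpy as np

# Hypothetical feature vectors for already-labeled images and their annotations.
labeled_features = np.array([
    [0.9, 0.1, 0.0],   # e.g. features extracted from a "dog" photo
    [0.0, 0.8, 0.2],   # e.g. features extracted from a "bicycle" photo
])
labels = ["dog", "bicycle"]

def annotate(new_feature: np.ndarray) -> str:
    """Propagate the label of the most graphically similar labeled image."""
    # Cosine similarity between the new image and every labeled image.
    sims = labeled_features @ new_feature / (
        np.linalg.norm(labeled_features, axis=1) * np.linalg.norm(new_feature)
    )
    return labels[int(np.argmax(sims))]

# A vector close to the "dog" example inherits the "dog" annotation.
print(annotate(np.array([0.85, 0.15, 0.0])))  # -> dog
```

In practice the labeled set would be indexed with an approximate nearest-neighbor structure rather than scanned exhaustively, but the principle is the same: a new image borrows the annotation of its closest labeled neighbors.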
Here Amazon will have to find texts related in some way to the pictures and structure them into patterns/synonymous clusters in order to remove noise, following Microsoft's example. Consider the sentence: "The city councilmen refused the demonstrators a permit because they [feared/advocated] violence." If the word "feared" is selected, then "they" refers to the city council. If "advocated" is selected, then "they" presumably refers to the demonstrators.
Of the candidate readings:

- city councilmen feared (intended reading)
- demonstrators feared (lexical noise)
- city councilmen advocated (lexical noise)
- demonstrators advocated (intended reading)
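Filtering such readings can be sketched as a lookup against antecedent patterns. The pattern table below is a toy assumption for this one schema; a real system would learn these verb-to-referent preferences from a corpus.

```python
# Toy antecedent-pattern table: which referent plausibly performs each verb
# in the councilmen/demonstrators schema. The mapping is an illustrative
# assumption, not a real resource.
ANTECEDENT_PATTERNS = {
    "feared": "city councilmen",
    "advocated": "demonstrators",
}

def is_lexical_noise(antecedent: str, verb: str) -> bool:
    """An (antecedent, verb) pairing is noise if it contradicts the patterns."""
    return ANTECEDENT_PATTERNS.get(verb) != antecedent

print(is_lexical_noise("demonstrators", "feared"))     # -> True  (noise)
print(is_lexical_noise("city councilmen", "feared"))   # -> False (kept)
```

The point of the sketch is only that once such patterns exist, discarding the noisy pairings is a cheap mechanical step.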
"Microsoft has significantly improved the MT-DNN approach to NLU, and finally surpassed the estimate for human performance on the overall average score on GLUE (87.6 vs. 87.1) on June 6, 2019." That is, Microsoft removes lexical noise using such antecedent patterns, so Amazon can solve this problem.
The second problem is the insufficient size of the annotating texts. Amazon can solve this by annotating words with dictionary definitions, which it already does successfully.
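Enriching short annotations with dictionary definitions can be sketched as a simple glossary lookup. The two-entry glossary below is a hypothetical stand-in for a full dictionary.

```python
# A tiny stand-in glossary; a real system would use a full dictionary.
GLOSSARY = {
    "bicycle": "a vehicle with two wheels propelled by pedals",
    "helmet": "a protective head covering",
}

def expand_annotation(annotation: str) -> dict:
    """Attach a dictionary definition to each known word in a short annotation."""
    return {
        word: GLOSSARY[word]
        for word in annotation.lower().split()
        if word in GLOSSARY
    }

print(expand_annotation("red bicycle with helmet"))
```

Each definition adds many content words to an otherwise terse label, which is exactly what a text-starved annotation pipeline needs.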