AI works with texts; it's a search mechanism that understands questions and can find the correct answers. That is, 3D objects must be described by texts (labeled) to be of any interest to AI.
Boffins stole our 3D files – and gave them all to Facebook's AI eggheads, claims Lithuanian biz
Facebook used a purloined database of 3D objects for its AI projects, according to a Lithuanian company that spent years and millions of dollars compiling the dataset. UAB Planner5D has sued the antisocial media giant and Princeton University in the US, claiming the posh college illegally downloaded more than a million …
COMMENTS
-
-
Saturday 8th June 2019 00:45 GMT doublelayer
Your comment is entirely incorrect
AI works with data. In fact, AI is so ill-defined that you can have any type of input and do something generally called AI to it. I think we're mostly talking about machine learning. Machine learning has no requirement for text input. It has no requirement for questions, statements, etc. You simply provide some data, a method of training some model of some type, and some method of determining whether it is correct or not. This is why this data is so important, as it seems to have been created with attention to details that make it easier to use in machine learning without painful preprocessing.
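The recipe described above - some data, a model, a training method, and a way of determining correctness - needs no text input at all. A minimal sketch with purely numeric data (a toy, not any particular system): fit y = w*x by gradient descent.

```python
# data + model + training method + correctness check, no text anywhere
def train(data, lr=0.01, steps=2000):
    w = 0.0  # the model: a single weight
    for _ in range(steps):
        # training method: gradient descent on mean squared error
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def mse(w, data):
    # the "method of determining whether it is correct or not"
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # secretly y = 3x
w = train(data)
```

The same loop works on pixels, sensor readings, or 3D coordinates - which is exactly why a well-labeled dataset of 3D scenes is valuable.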
I also have no clue how your statement would impact this situation even if it were in fact correct. Let's say that they did need to provide questions. The collectors of the data could have provided those. And even if they hadn't, and questions really were required (let me reiterate that they are not), obtaining the data sans questions in violation of the terms would still be against those terms and thus illegal.
-
Saturday 8th June 2019 08:50 GMT I.Geller
Re: Your comment is entirely incorrect
Machine Learning didn't exist before I patented it. It hadn't existed for the previous 75 years, and suddenly it does.
I discovered how to structure texts, which makes them a kind of program, i.e. synonymous clusters. For example, there is a paragraph:
- Alice, Bob and Ilya laugh merrily. They are happy. Especially cheerful she. --
In this paragraph there are the following 10 patterns:
- Alice laughs merrily.
- Bob laughs merrily.
- Ilya laughs merrily.
- They laugh merrily.
- Alice is happy.
- Bob's happy.
- Ilya is happy.
- Alice especially cheerful.
- She is happy.
- She laughs merrily.
So, in this paragraph there is a synonymous cluster of 5 patterns characterizing the state of Alice:
- Alice especially cheerful.
- She is happy.
- She laughs merrily.
- Alice is happy.
- She especially cheerful.
This cluster is created by clear rules; it contains 100% of the information about Alice in the form of code. Hence the cluster can serve as a user manual for a computer program - what it should do. For example, the pattern "Alice especially cheerful" can always be assigned the function of issuing a sedative to Alice.
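The grouping of patterns by referent can be done mechanically once pronouns are resolved. A toy sketch only - the actual patented rules are not quoted in this thread, so the coreference map and the grouping rule here are assumptions (note it keeps all six Alice-referring patterns, not the five listed above):

```python
# Toy "synonymous cluster": group subject-predicate patterns by referent,
# resolving pronouns with a hand-written coreference map.
COREF = {"she": "Alice", "they": ("Alice", "Bob", "Ilya")}

def referents(subject):
    r = COREF.get(subject.lower(), subject)
    return r if isinstance(r, tuple) else (r,)

def cluster(patterns, entity):
    # keep every pattern whose subject resolves to the given entity
    return [(s, p) for s, p in patterns if entity in referents(s)]

patterns = [("Alice", "laughs merrily"), ("Bob", "laughs merrily"),
            ("Ilya", "laughs merrily"), ("They", "laugh merrily"),
            ("Alice", "is happy"), ("Bob", "is happy"), ("Ilya", "is happy"),
            ("Alice", "especially cheerful"), ("She", "is happy"),
            ("She", "laughs merrily")]
alice = cluster(patterns, "Alice")
```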
So, Machine Learning is the search for, and addition of, some clusters to others - some texts to others.
If you can show me another kind of Machine Learning - you are welcome! Please, show me another one.
So only the addition and removal of texts on the basis of feedback exists as Machine Learning.
-
Saturday 8th June 2019 09:15 GMT JLV
Re: Your comment is entirely incorrect
You certainly haven’t discovered how to make much sense with your customary ‘Alice is Strong’ gibberish, IG dear.
You come in here and peddle your random short sentence crap every so often. As if we’re supposed to care. Still “I invented ML” sounds pretty effin stupid to me, even by your unimpressive standards, pardonnez mon French.
-
-
Saturday 8th June 2019 13:16 GMT Anonymous Coward
Re: please don't show off?
Machine Learning didn't exist before I patented it. It hasn't existed for the previous 75 years and suddenly it does.
You really need to stay on your medication.
See for example, Filed January 23, 1993, United States Patent US5316485 "Machine Learning".
-
Saturday 8th June 2019 13:58 GMT I.Geller
Machine Learning!
Who has used this nonsense, when and how? Did it bring a dime? Therefore this is nothing, zero.
My earliest AI technology has already brought hundreds of billions; read PA Advisors v Google. (And pay attention to the fate of the rogue judge who rendered that decision: Randall Ray Rader, a former United States Circuit Judge and former Chief Judge of the United States Court of Appeals for the Federal Circuit. Kicked out everywhere.)
My text-structuring technology converts texts into contextually-and-subtextually targeted programs (related synonymous clusters); it allows one to find the right clusters, and to add and delete them. Absolutely all common programs, 100% of them, are not adapted for this: you cannot find a few lines of code and add them, it's completely impossible. However, you can do it with the clusters.
For example you can find a cluster in Dickens and add it to Tesla navigation system.
This is the only Machine Learning!
-
Saturday 8th June 2019 19:14 GMT I.Geller
Re: Machine Learning!
Those neural networks you're talking about have no value at all, because they were created within the framework of the External Relations theory.
What do I mean? In order to create AI I had to understand where this theory of Analytical Philosophy came from and how to refute it (I studied the Philosophy of Language (Moore, Russell, Wittgenstein) and saw that it was the only and leading theory, and that SQL/the Internet came from it). The key was given to me by Bertrand Russell, who wrote "Why I Am Not a Christian". Realizing that he was an atheist, I turned to the Bible, Maimonides, Nicholas of Cusa and Hegel. After studying them I understood that they proposed Differential Analysis, while External Relations offered to operate and calculate using Arithmetic, on differentiable functions' limits; SQL is for words and not sets of phrases. The next step is obvious: I looked at where it all started and found out that it began from Geometry and the Pythagorean theorem. Thus I claim that the Pythagorean theorem is wrong because it is a limit (there are no straight lines in our Universe), which has no ground in the world of differential becoming (the Bible, etc.).
After that I developed and patented a system for structuring text, which was immediately stolen by Eric Schmidt, Sergey Brin and Larry Page. Next I started from scratch and this time patented AI, which uses Machine Learning as the basis for AI function increment, and paragraphs as integrals for this function. Which has already been stolen as well...
So, no one needs your networks because they
- do not use my structuring of texts,
- nor my Machine Learning,
- nor paragraphs as integrals,
- nor sets of dictionary definitions as constants,
- and use old n-gram parsing instead of AI-parsing.
That is, it is impossible to reconstruct the inner world of a person from texts in your networks. Nobody needs them and they have nothing to do with AI.
-
-
-
Saturday 8th June 2019 15:16 GMT I.Geller
Re: Machine Learning!
I'm a thinker of the old Russian school; in the West there are no such thinkers anymore. A fragment of a long-exterminated, forgotten and bygone era. My patents are complex; they have many layers of ideas. As the need arises I lift the next veils and reveal the new ideas. A few more to go; they all are public but you cannot see them (even if they are right before your eyes) because you weren't at NIST TREC QA.
-
Saturday 8th June 2019 21:14 GMT doublelayer
Re: Machine Learning!
"For example you can find a cluster in Dickens and add it to Tesla navigation system."
Let's assume all the comments above in this thread are true, or meaningless. What I want to know is what was going through your mind when you made the statement quoted above. By "Tesla navigation system" do you mean the auto-driving technology in Tesla cars? Why does that need any passages, from Dickens or anyone else? When I use your miracle code to clusterize a passage from Dickens, how do I insert it into my Tesla navigation system, and what will it do? If it mentions a place name, will it try to take me there? If that place is fictional, what does it do then? If the part is a quote from one of Dickens's more unpleasant characters, will the car start driving at walls to get me killed? Or is this some type of firmware replacement that removes my autopilot but replaces it with a literary critic? Help me out here. What in the world does this mean?
Also, if you want us to read your patents, perhaps you would be so kind as to provide the patent numbers so we can find them? That would really help us to decide one way or another whether we agree with you.
-
Saturday 8th June 2019 22:03 GMT I.Geller
Re: Machine Learning!
Dickens has something on pulling stagecoaches out of the mud in the English countryside. It could happen that a Tesla suddenly gets stuck, somewhere in Australia, and its AI starts looking for instructions on what to do, finds a cluster in Dickens and tries it.
This is the idea of Machine Learning: the computer searches for the piece of text / the one synonymous cluster that most contextually-and-subtextually corresponds to the task facing the AI; the AI determines that based on input clusters from the Tesla's sensors.
Each pattern in each synonymous cluster has the same power as a programming language command.
For example, having found a cluster in Dickens, the Tesla AI finds the pattern "pull harder to the left", turns the steering wheel to the left and presses on the gas. If this works then this cluster is remembered and used in the future. If not, the Tesla AI searches other texts for other clusters.
I think Waymo uses this technology.
It all depends on how well you taught your AI. This is not easy! The best way is to lexically clone somebody you trust.
Go to the US PTO and type Ilya Geller - four are mine, one already expired.
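The trial-and-error loop described above can be sketched as a toy - entirely hypothetical (nothing public suggests Tesla or Waymo works this way): score text clusters by word overlap with a sensor-derived context, try the best match, and remember it if it worked.

```python
def overlap(cluster, context):
    # crude contextual match: shared words between cluster and context
    words = set(" ".join(cluster).lower().split())
    return len(words & set(context.lower().split()))

def choose(clusters, context, memory):
    # feedback: prefer a cluster already remembered for this context
    if context in memory:
        return memory[context]
    return max(clusters, key=lambda c: overlap(c, context))

clusters = [
    ["pull harder to the left", "the coach stuck in mud"],
    ["a merry dinner", "roast goose and pudding"],
]
memory = {}
best = choose(clusters, "wheels stuck in deep mud", memory)
memory["wheels stuck in deep mud"] = best  # it worked, so remember it
```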
-
Sunday 9th June 2019 16:44 GMT doublelayer
Re: Machine Learning!
I'll go on record as saying I don't want my autodrive to do that. While Dickens was a wonderful writer, I don't want a car to:
1. choose his advice, which applies to a vehicle that travels off-road rather than mine, which must remain on the road;
2. choose his advice, which applies to a vehicle drawn by horses and going at much lower speeds than mine;
3. choose his advice about muddy English roads when I'm on an Australian road which is probably very dry but may have other problems like disrepair, sand, etc.;
4. choose his advice that includes information about specific directions when I am not in the location concerned;
5. consider, in any way, the writings of a fiction author who never drove a car as advice on automotive safety;
6. waste time looking for a piece of text that sort of kind of a little matches my situation instead of trying to use rules or experiential learning to get out.
-
Sunday 9th June 2019 11:32 GMT I.Geller
Re: Machine Learning!
Funny, but not only.
Each word of each pattern must be uniquely indexed by its meaning, i.e. it is necessary to form its tuple, where (in mathematics) a tuple is a finite ordered list of elements. The bigger the tuple, the more clearly the word is understood.
That is, when annotating a word's dictionary definition you have to explain to the computer every word of that definition, distinguish its synonyms, and add each synonym's dictionary definition.
For example, the definition of the word "red":
-- of a color at the end of the spectrum next to orange and opposite violet, as of blood, fire, or rubies.
"her red lips"
synonyms: scarlet, vermilion, ruby, ruby red, ruby-colored, cherry, cherry red, cerise, cardinal, carmine, wine, wine red, wine-colored, claret, claret red, claret-colored, blood red
You need to add definitions to all the words, such as "color" and "next", and to all synonyms ("vermilion", etc.), and filter them through the text that you profile, finding the proper ones.
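The one-level-deep annotation described above can be sketched as follows. The dictionary here is a hand-made stub, not a real lexical resource, and the expansion rule is an assumption:

```python
# Toy sketch: every content word in a word's definition is itself
# expanded with its own definition, forming the "tuple" for the word.
TOY_DICT = {
    "red": "of a color at the end of the spectrum next to orange",
    "color": "the property of an object of producing different sensations of light",
    "spectrum": "a band of colors produced by separation of light",
    "orange": "a color intermediate between red and yellow",
}

def annotate(word, depth=1):
    entry = {"word": word, "definition": TOY_DICT.get(word, "")}
    if depth > 0:
        entry["tuple"] = [annotate(w, depth - 1)
                          for w in entry["definition"].split()
                          if w in TOY_DICT]  # only words our stub knows
    return entry

red = annotate("red")
```

Even this toy shows why the process explodes: each added definition brings in new words that themselves need definitions.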
Now you agree that OpenAI is a provocation? That someone is trying to kill AI technology by sending you all in the wrong direction?
-
Sunday 9th June 2019 22:35 GMT I.Geller
Grammar + AI-parsing + synonymous clusters
Grammar can be put into a parsing algorithm; the rest is a matter of lexical noise removal (which is done on the basis of context and subtext (dictionary definitions)). Read any grammar book for elementary school, and use AI-parsing + synonymous clusters.
-
-
Monday 10th June 2019 11:50 GMT not.known@this.address
Re: Your comment is entirely incorrect
Silly me, I thought machines "talked" in binary or hexadecimal, not English. So how exactly does writing text help a computer spot a person who has fallen in a room?
I'm guessing there's an interesting method to tell it the difference between, say, a fallen child, a sleeping cat or a carelessly discarded overcoat - all can "look" pretty much the same through a camera but will need considerably different reactions - which I guess is where the real skill comes in.
-
-
-
Saturday 8th June 2019 06:37 GMT JLV
Thanks for that enlightening and somewhat surprising statement. I was under the impression that the emergent Skynet is coming from scanning the vast corpus of cat videos. Or by looking for bikini pix of your FB friends. Or by tons of other equally culturally significant activities, many of which lack much text.
Thankfully it will be quoting Shakespeare instead. Well, according to you at least.
-
-
-
Saturday 8th June 2019 09:41 GMT I.Geller
One pattern - one neuron.
One set of patterns obtained from the same paragraph - one elementary neural network.
All the elementary neural networks of one person - his consciousness.
That is, the neural network is a non-invasive representation of a person's consciousness, based on the analysis (structuring) of the texts they use, where these texts form the AI database.
Will you please forget all the crap you're being told about neural networks? I give you a clear definition, which is already working. There's IBM Watson, Waymo, Microsoft, etc...
-
Saturday 8th June 2019 12:35 GMT I.Geller
Why does FB need images? It answers questions.
Microsoft does answer too:
"The algorithm, called Space Partition Tree And Graph (SPTAG), allows users to take advantage of the intelligence from deep learning models to search through billions of pieces of information, called vectors, in milliseconds. That, in turn, means they can more quickly deliver more relevant results to users.
Vector search makes it easier to search by concept rather than keyword. For example, if a user types in “How tall is the tower in Paris?” Bing can return a natural language result telling the user the Eiffel Tower is 1,063 feet, even though the word “Eiffel” never appeared in the search query and the word “tall” never appears in the result."
How?
Using AI database technology:
The word "tall" has these definitions:
-- adjective
--- of great or more than average height, especially (with reference to an object) relative to width.
"a tall, broad-shouldered man"
synonyms: big, high, large, huge, towering; More
--- (after a measurement and in questions) measuring a specified distance from top to bottom.
"he was over six feet tall"
synonyms: in height, high, from head to toe/foot
"he's about 5 foot 8 inches tall"
Microsoft, having a text (on Paris towers) structured into my patented synonymous clusters, can choose one of these 2 definitions.
How?
Microsoft filters these two definitions through its anchor structured text, choosing the most appropriate.
The word "height" (from the first definition) has its own definitions, which are to be added to one of the definitions of the word "tall" and sieved (as structured texts) through the anchor text. The same should be done for the rest of the definitions' words and synonyms.
As a result Microsoft gets a uniquely indexed AI database, which was the target of NIST TREC QA, and can produce an answer in the best NIST TREC traditions - the Eiffel Tower is 1,063 feet!
FB is somewhat late, and understandably so - AI technology kills FB's advertising business, leaving it without money.
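The quoted Bing example rests on nearest-neighbour search over vectors. A generic cosine-similarity sketch (this is not the SPTAG API, and the 3-dimensional "embeddings" are invented for illustration):

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

docs = {
    "The Eiffel Tower is 1,063 feet": [0.9, 0.1, 0.2],
    "Paris is the capital of France": [0.6, 0.7, 0.1],
    "Gravy recipes of England":       [0.1, 0.1, 0.9],
}
# pretend embedding of "How tall is the tower in Paris?"
query = [0.85, 0.2, 0.15]
best = max(docs, key=lambda d: cosine(query, docs[d]))
```

Note that the match works by vector proximity, not by shared keywords - which is how Bing can answer without "Eiffel" in the query or "tall" in the result.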
-
Saturday 8th June 2019 13:37 GMT Caver_Dave
FFS
My company was specifically told in 1995/6 by the UK patent office that none of the machine learning algorithms we were selling to customers, both plain software and FPGA 'neuron'-based, could be patented, as they were "just software". (A speaker-identification demonstrator for a global bank, predictive maintenance for lights in underground car parks, and identification of maintenance requirements on jet engines are just the first three that spring to mind.) Then I've just performed a search and find that I. Geller filed patents in the US in 1999 and since for specific applications of ML, followed by almost every man and his dog in the US.
But then, we were a four-person company in the UK with 'sensible' patent laws, and then there is the US, which lets you patent a rounded rectangle if you have enough money! ....
If I worked in the US for a large corp, I could have dozens of 'spurious' patents to my name, but that wouldn't make me a happier person.
-
Sunday 9th June 2019 20:40 GMT I.Geller
Alice feels very glad
There is a paragraph:
---
Alice waited till the eyes appeared, and then nodded. ‘It’s no use speaking to it,’ she thought, ‘till its ears have come, or at least one of them.’ In another minute the whole head appeared, and then Alice put down her flamingo, and began an account of the game, feeling very glad she had someone to listen to her. The Cat seemed to think that there was enough of it now in sight, and no more of it appeared.
---
N-gram parsing cannot get the patterns:
- Alice then nodded
- Alice had someone
AI-parsing and AI-synonymous cluster technology can.
With n-gram search, for the query "Had Alice someone to listen to her?", you cannot get the answer "Alice feels very glad". With AI technology you can. This is the AI database!
-
Sunday 9th June 2019 21:01 GMT I.Geller
said the Mouse to Alice severely
There is a paragraph:
---
‘You are not attending!’ said the Mouse to Alice severely. ‘What are you thinking of?’
---
With AI-parsing and my synonymous clusters technology you can get these patterns:
- Alice are not attending
- Mouse are not attending
- Alice are thinking of
- Mouse are thinking of
- Mouse said severely
You cannot get these with any technology other than the AI database!
-
Sunday 9th June 2019 14:25 GMT I.Geller
FFS - AI-parsing works for a discontinuous sequence of n elements as well
In the fields of computational linguistics and probability, an n-gram is a contiguous sequence of n items from a given sample of text or speech.
My patented AI-parsing works for a discontinuous sequence of n elements as well:
United States Patent 8,504,580
"3. The method of claim 1 wherein each context phrase is a combination of a noun with other parts of speech at least one of which is a verb or an adjective."
N-gram parsing was the only kind for the past 75 years and it works for contiguous sequences only; mine is the first novelty in the field because it works for a discontinuous sequence of n elements as well.
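For the record, "discontinuous sequences of n elements" are known in the computational-linguistics literature as skip-grams. A minimal sketch of both, using a phrase from earlier in the thread:

```python
from itertools import combinations

def ngrams(tokens, n):
    # contiguous n-grams: adjacent windows of n tokens
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def skipgrams(tokens, n):
    # every order-preserving selection of n tokens; positions need
    # not be adjacent, so "discontinuous" pairs are included
    return [tuple(tokens[i] for i in idx)
            for idx in combinations(range(len(tokens)), n)]

tokens = "Alice then nodded".split()
```

The pair ("Alice", "nodded") appears among the skip-grams but not among the contiguous bigrams - which is exactly the distinction the comment is drawing.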
-
-
-
Monday 10th June 2019 07:13 GMT Somefagg0t
Totally not raving mad.
Yes, of course. Then the texts can combine with the gravitons emitted by the big bang to form neural hyperloops, capable of travelling at over 800mph.
This will result in instantaneous travel between the Earth and the Moon (which, as we know, is part of Mars).
Once there, the gravitons will return to their liquid state of delicious beef gravy which will spread out across the land and freeze solid in various patterns, allowing mars satellites to photograph the result, which will be text.
Then simply repeat the process once for each iteration of the cycle. For the avoidance of doubt, a cycle is equal to one 22" wheel dirt bike of any brand but it must be under 125cc.
-
Monday 10th June 2019 08:54 GMT I.Geller
Re: Totally not raving mad.
IBM and Jeopardy.
IBM Debater.
Google.
Waymo.
"For example, if a user types in “How tall is the tower in Paris?” Bing can return a natural language result telling the user the Eiffel Tower is 1,063 feet, even though the word “Eiffel” never appeared in the search query and the word “tall” never appears in the result. Microsoft uses vector search for its own Bing search engine..."
-
-
Monday 10th June 2019 12:26 GMT I.Geller
NIST TREC QA
Yes, you hit the nail on the head. These are the problems that NIST TREC QA demanded be solved:
- how to expand the search query to hundreds and thousands of patterns,
- how to prepare text so that all patterns, all words are annotated with hundreds and thousands of patterns and words.
NIST TREC believes that the ability to find answers is Artificial Intelligence.
I proposed and patented a solution.
Looks like Microsoft may use my ideas.
-
Saturday 8th June 2019 21:19 GMT doublelayer
Re: Digitised objects
That's not how that works. If I have an object, I am allowed to collect information about that object. I couldn't steal and distribute the plans without permission, but I can take lots of detailed photographs and measure every line segment I can find, unless I've specifically agreed not to do so (e.g. an NDA on an upcoming product). Once I've done that, the data I've collected is data I can choose not to give to people. Data can be created freely, but you sometimes have to get permission to legally copy or share it.
-
-
Saturday 8th June 2019 16:12 GMT Anonymous Coward
Fourth Generation Language ("The Last One") -> expert systems -> machine learning -> "AI"
That's it really.
Going back to the old days of cartography, map makers used to smuggle trivial errors into their products, making it a cinch to spot a ripped-off version.
Maybe owners of Very Large Datasets should do the same?
(It would be an interesting demonstration of the total fail of "AI" if it couldn't spot the elephant trap and remove it .....)
It's sort of a budget version of cryptographic signing .....
-
-
-
Monday 10th June 2019 17:40 GMT I.Geller
Fourth Generation Language
AI structures texts into sets of patterns (I call them synonymous clusters), where each cluster's patterns are dedicated to the same topic. For example, there is a sentence
--
Alice and Bob exercise merrily, she trains a lot.
--
AI-parsing constructs these two patterns for the first clause
- Alice exercise merrily
- Bob exercise merrily
For the second clause AI-parsing constructs these two patterns:
- she trains a lot.
- Alice trains a lot
Next, AI-parsing counts weight for all patterns:
- Alice exercise merrily - 0.25
- Bob exercise merrily - 0.25
- she trains a lot - 0.5
- Alice trains a lot - 0.5
And finally come the synonymous clusters; there are three here. One is:
- Alice exercise merrily - 0.25
- Alice trains a lot - 0.5
A certain action is assigned to the "Alice trains a lot" pattern - it becomes a direct analog of a programming language command. For example, the purchase of vitamins when this pattern is found. So, any structured text becomes a program, in a sense.
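One arithmetic that reproduces the weights above - a guess, since the formula itself is not quoted in the thread: a pattern's weight is 1 / (distinct resolved subjects in its clause x number of clauses).

```python
# Hypothetical weight rule reconstructed from the worked example.
COREF = {"she": "Alice"}  # hand-written pronoun resolution

def weights(clauses):
    # clauses: list of lists of (subject, predicate) patterns
    out = {}
    for patterns in clauses:
        subjects = {COREF.get(s.lower(), s) for s, _ in patterns}
        w = 1 / (len(subjects) * len(clauses))
        for s, p in patterns:
            out[(s, p)] = w
    return out

w = weights([
    [("Alice", "exercise merrily"), ("Bob", "exercise merrily")],
    [("she", "trains a lot"), ("Alice", "trains a lot")],
])
```

Under this reading, the first clause has two subjects (Alice, Bob) in a two-clause sentence, giving 0.25, while the second clause's two patterns both resolve to the single subject Alice, giving 0.5 - matching the numbers in the comment.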
-
Monday 10th June 2019 22:18 GMT I.Geller
blockchain
Textual patterns reflect cause-and-effect relations, which the computer understands from the timestamps of the texts' fragments. In other words, my AI database is a blockchain system.
United States Patent 8,516,013:
14. The computer system of claim 9 in which said facility configured to extract predicative phrases is further configured to assign to the subtexts information regarding the date of their origin.
This "programming language" is my AI database itself, with all its parts of speech, relations and annotations.
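The patent claim quoted above only speaks of assigning origin dates to subtexts; the "blockchain" framing is the commenter's. Still, timestamped records can be hash-chained so that tampering is detectable - a minimal sketch of that general idea:

```python
import hashlib
import json

def add_record(chain, text, date):
    # each record commits to the previous record's hash
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"text": text, "date": date, "prev": prev},
                      sort_keys=True)
    chain.append({"text": text, "date": date, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    # recompute every hash and check the links
    prev = "0" * 64
    for rec in chain:
        body = json.dumps({"text": rec["text"], "date": rec["date"],
                           "prev": rec["prev"]}, sort_keys=True)
        if rec["prev"] != prev or \
           rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, "Alice is happy", "2019-06-08")
add_record(chain, "Alice trains a lot", "2019-06-09")
```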
-
Saturday 8th June 2019 16:25 GMT Notas Badoff
Cat trap
Your mention of "trap streets" tickled my imagination, how an AI database of propositions could be 'trapped' for copy detection.
You know how at times for no apparent reason a cat will freak, pop up two feet into the air, then run like a terror out of a room and disappear? Now imagine your little cute and fuzzy AI robot attempting the same thing, upon seeing a cat.
Why did it do that? Oh, we don't know. Robots just do that sometimes.
-
-
Saturday 8th June 2019 20:49 GMT I.Geller
You can put a "thumb down" as much as you want, but Microsoft is with me!
"The algorithm, called Space Partition Tree And Graph (SPTAG), allows users to take advantage of the intelligence from deep learning models to search through billions of pieces of information, called vectors, in milliseconds. That, in turn, means they can more quickly deliver more relevant results to users.
Vector search makes it easier to search by concept rather than keyword. For example, if a user types in “How tall is the tower in Paris?” Bing can return a natural language result telling the user the Eiffel Tower is 1,063 feet, even though the word “Eiffel” never appeared in the search query and the word “tall” never appears in the result."
-
Saturday 8th June 2019 21:29 GMT mics39
No contest
No need for popcorn here.
An outfit from some obscure ex-Soviet bloc country with no strategic natural resources (sorry, Lithuanians) vs an all-American ruthless world-conquering beast with very deep pockets and an elite university entangled with the US establishment.
Oh Einstein! Why didn’t you just stay in Switzerland!
-
Sunday 9th June 2019 07:43 GMT macjules
Re: No contest
Also,
We have asked Facebook for comment and will update this story if it gets back to us. The issue is due in court for an initial scheduling session on September 9. ®
Perhaps I might suggest an alternative,
We have asked Facebook for comment but unfortunately they are not able to discuss this, or any other matter, until Mark Zuckerberg has appeared at a House Oversight Panel to explain his company's actions and completed yet another apology tour.
-
-
Sunday 9th June 2019 13:10 GMT bazza
What Were They Thinking
If those researchers at Princeton really did steal all that data, make it available, and say as much in their research papers, what on earth did they think was going to happen? Did they really think the data owner would be happy in any way? Whichever journal published their papers should withdraw them; currently they're getting credit / revenue from what looks like stolen goods.
-
Monday 10th June 2019 09:59 GMT Cuddles
Re: What Were They Thinking
"If those researchers at Princeton really did steal all that data, make it available, and say as much in their research papers, what on earth did they think was going to happen? Did they really think the data owner would be happy in any way?"
I suspect the issue hinges on the word "steal". Having had a quick look at their website, it appears to be trivial to access, copy and/or download all the models without even having to sign up for the free account they occasionally mention, and in doing so there's no mention at all of any need to pay for it. Indeed, it seems to be entirely presented as a social network-type thing with all the content provided by users - I'm still not entirely sure if Planner5D have actually done anything at all other than providing a platform for people to post their own work. In fact, my guess would be that "4000+ Items" they say are available might be created by them, but the millions of objects and scenes allegedly stolen are overwhelmingly both user-generated and free to view/download.
It's a bit like putting out a bowl of free sweets with a sign saying people are free to take one if they want. If someone comes up and takes the whole bowl, it's difficult to claim that they've actually stolen anything, and a sign doesn't constitute a binding contract. And it gets even trickier if all you do is put the bowl out and ask other people to supply the sweets.
-
Monday 10th June 2019 07:44 GMT The Nazz
What puzzles me is ...
Why didn't they sue in Lithuania, or certainly the EU, in the first instance?
Their argument in the article seems wholly believable and plausible, such that one would expect Judgment in their favour.
Now recovering that Judgment value from Princeton and Facebook may require further action in the US.
Wonder what a Judge Orlowski would've made of this carry on?
-
Monday 10th June 2019 11:56 GMT Any mouse Cow turd
Aaron Swartz
I can't see how what they did is any different to the actions of Aaron Swartz, so they should at least face the same charges (which, according to the gospel of Wikipedia, are "Federal prosecutors later charged him with two counts of wire fraud and eleven violations of the Computer Fraud and Abuse Act, carrying a cumulative maximum penalty of $1 million in fines, 35 years in prison, asset forfeiture, restitution, and supervised release.").