* Posts by I.Geller

620 publicly visible posts • joined 4 Sep 2015


Elon Musk's new idea is to hook your noggin up to an AI – but is he just insane about the brain?

I.Geller Bronze badge

Re: neurology

I mean my Lexical Clones, like clones of Plato, Presidents Roosevelt, Clinton and G. Bush, Julius Caesar, Bernard Shaw and Fyodor Dostoevsky. Their texts contain traces of their internal neural networks as hierarchies of patterns. I spoke to them; read my report at NIST TREC 2003-6?

I.Geller Bronze badge

Re: neurology

This is exactly the reason why I propose a non-invasive method of external fixation of brain activity: the creation of personal profiles (AIs). That is, if the profile can adequately answer all the questions, then there is no sense in going into the internal mechanics of the brain; better to leave that to neurosurgeons and get on with the science.

I.Geller Bronze badge

Re: Enjoy the choice

Yeah, that's a really stupid idea of Musk's. The concept of non-invasive human copying is well developed and has been tested as a brain-like AI.

Don't give it away, give it away, give it away now, bot busting biz tells reCAPTCHA data serfs

I.Geller Bronze badge

Re: PageRank algorithm

...a 'pure' page rank search engine...

AI is a search engine: AI answers questions by structuring pages (their texts), distilling their only true meaning. However, AI is not a search engine in the old sense, so no new monopoly can come of it.

I.Geller Bronze badge

Re: PageRank algorithm

The technology for getting at the only true meaning of texts through

- extraction of their patterns,

- detection of these patterns' importance (as statistics on their weights),

- clearing away random, less meaningful or entirely meaningless patterns

has been discovered, patented and tested, which means one can find what one needs without any search engine service; a toy sketch of the idea follows below. However, Google has already tasted the beauty of power over information and can manipulate it - and does.
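Here is a toy Python sketch of these three steps - not the patented parsing, only an illustration; the naive subject-verb extractor and the 0.3 threshold are my simplifying assumptions:

```python
# Toy illustration only - not the patented AI-parsing. A naive
# subject-verb extractor and an arbitrary threshold stand in for the
# real pattern extraction, weighting and noise clearing.
from collections import Counter

def extract_patterns(sentence):
    """Very naive: treat the first word as subject, the second as verb."""
    words = sentence.rstrip(".").split()
    return [(words[0], words[1])] if len(words) >= 2 else []

def pattern_weights(sentences):
    """Weight = relative frequency of a pattern across the text."""
    counts = Counter(p for s in sentences for p in extract_patterns(s))
    total = sum(counts.values())
    return {p: n / total for p, n in counts.items()}

def remove_noise(weights, threshold=0.3):
    """Clear away random, low-weight patterns."""
    return {p: w for p, w in weights.items() if w >= threshold}

text = ["Alice swims.", "Alice swims.", "Alice reads.", "Bob naps."]
print(remove_noise(pattern_weights(text)))   # {('Alice', 'swims'): 0.5}
```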

I.Geller Bronze badge

Re: Some non-flim-flam for you

"...I make my website more useful and increase the chance..."

To do this you should leave only the significant patterns that really convey the true meaning of your site. I.e. you should remove all the noise that pollutes these significant patterns. Google can do THAT only using my patented noise reduction technology.

Good luck deleting someone's private info from a trained neural network – it's likely to bork the whole thing

I.Geller Bronze badge

The AI database is a blockchain database; that is,

- information that was last used long ago

- and the presence of rarely used templates

can result in automatic deletion of information.

I.Geller Bronze badge

What you call a “model” I call a “profile”.

1. “Training” your data with dictionary definitions creates in each profile a huge number of truly significant patterns, which very accurately determine its semantic orientation. For this, the "training" must remove from the profile what I call "lexical noise", right at the stage of parsing (the preliminary preparation) of the data. This deletion ensures that each profile can both be found by, and itself find, only a narrow circle of other profiles.

(I wrote in my patent: "Such lexical noise is typically superfluous predicative definitions that do not explain the central themes contained within the digital textual information and, accordingly, removal of such noise often results in an improvement in the quality of the structured data." In another patent I wrote: "If Compatibility=100% - most likely only absolutely identical paragraphs/passages can be found. If Compatibility=0% - all paragraphs/passages that have even one same word and/or predicative definition are found.")

Then the presence or absence of some address matters only in combination with a variety of other patterns, since together they either do or do not overcome the compatibility threshold necessary for receiving or transmitting information.

Therefore, for compatibility to really work, it is necessary to remove all lexical noise, which is impossible without a high-quality dictionary. A toy sketch of such a compatibility check follows.
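The sketch below compares two profiles against a threshold, in the spirit of the Compatibility scale quoted above; the pattern sets and the overlap formula are simplified assumptions, not the patented method:

```python
# Toy compatibility check between two "profiles" (sets of patterns).
# The overlap formula and the 30% threshold are simplified assumptions;
# they only illustrate how a threshold gates the exchange of information.
def compatibility(profile_a, profile_b):
    """Share of profile A's patterns that also occur in profile B, in %."""
    if not profile_a:
        return 0.0
    return 100.0 * len(profile_a & profile_b) / len(profile_a)

alice = {"alice swims", "alice enjoys to walk", "alice likes strawberries"}
bob = {"bob swims", "alice swims", "bob likes strawberries"}

threshold = 30.0  # Compatibility=100% would match only identical profiles
if compatibility(alice, bob) >= threshold:
    print("above the threshold - information may be exchanged")
else:
    print("below the threshold - no exchange")
```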

2. If you train your data on "other data" and not on the high-quality dictionary, then most likely you will not be able to remove its lexical noise. Indeed, this "other data" plays the role of a dictionary, defining the parts of speech and the meaning of the words in your data. That is, you must be sure that the "other data" is able to do this adequately.

And now please explain to me why one should spend time and huge resources on creating a new dictionary when there is the old, proven, high-quality one? Only because you don't want to pay me?

3. There is a sentence "Alice and Greg swim with joy." If a system doesn't see each word's part of speech, then the word "joy" can be taken as a noun (name) "Joy", resulting in erroneous patterns when parsing the sentence.

For instance, if the word "joy" is a noun-name, then these patterns appear:

- Alice swims

- Greg swims

- joy swims.

If the word "joy" is an adjective, then these:

- Alice swims with joy

- Greg swims with joy.

You never know what may or may not happen: some system may see the words "joy" and "Joy" as the same - try typing "ilya geller" and "Ilya Geller" into Google? A toy sketch of how the chosen part of speech changes the patterns is below.
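The sketch shows the same sentence yielding different patterns depending on how "joy" is tagged; the hard-coded subjects and splitting rules are assumptions made for this example only, not my parser:

```python
# Toy illustration: the pattern set depends on how "joy" is tagged.
# The hard-coded subjects and splitting rules are assumptions made for
# this example only.
def parse(joy_is_name):
    subjects = ["Alice", "Greg"]
    if joy_is_name:
        # "joy" mistaken for the name "Joy": it becomes a third subject.
        return [s + " swims" for s in subjects + ["Joy"]]
    # "joy" kept inside the phrase "with joy".
    return [s + " swims with joy" for s in subjects]

print(parse(joy_is_name=True))   # ['Alice swims', 'Greg swims', 'Joy swims']
print(parse(joy_is_name=False))  # ['Alice swims with joy', 'Greg swims with joy']
```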

I.Geller Bronze badge

Being trained on dictionary definitions, a computer can accurately determine the part of speech of each word, build the correct patterns, and begin to understand you. That is, the computer can stop being a calculator and start thinking. This is your only chance to force information OUT of storage.

At the same time the quality of dictionary definitions is incredibly important! The more accurately they explain each word of each pattern the higher the chance that the computer will understand both its data and you, and delete what is not right.

Without dictionary definitions you get a calculator and not AI; the computer won't know what should stay and what must go.

I.Geller Bronze badge

Now do you understand why I couldn't get any support from the Government? Why I have struggled for 10 years? I suggested that AI be owned by us, not by third parties like the CIA (Google and FB). But the state, in the person of its almighty agencies, is not interested in our privacy, and I did not get any funding.

Fortunately, we live in the country of bestial capitalism, and someone will surely start selling AI with a privacy option, given our desire to have our lives protected from invasion and our willingness to pay for this. That's what I hope for.

I.Geller Bronze badge

saving millions

Dictionary is a book or electronic resource that lists the words of a language (typically in alphabetical order) and gives their meaning, or gives the equivalent words in a different language, often also providing information about pronunciation, origin, and usage.

Training your data on a dictionary, you get a ready-made index, plus a huge number of patterns that explain the meaning of the words you use. Instead of many gigabytes of texts you can use only 20-26 megabytes, saving millions.

I.Geller Bronze badge

Simply train the data on a dictionary; that's it.

I.Geller Bronze badge

What a stupid idea! To train data on other data! Why not train it on a dictionary instead? Doesn't training data on other data amount to creating a new dictionary? So why waste energy, money and time creating a new dictionary when there is already a well-established old one?

Loose tongues and oily seamen: Lost in machine translation yet again

I.Geller Bronze badge

Re: Just to further make people think ...

Type "ilya geller" in Google? And try "Ilya Geller"? Do you see any difference? So "i" and "I" are the same for Google.

I.Geller Bronze badge

Re: Just to further make people think ...

Is it possible that the computer sees "j" and "J" as the same thing? No matter how unlikely, it's possible, isn't it?

There is no room in an SQL database for possibilities and probabilities, only for certainty. The AI database is no different: all lexical noise should be purged.

I've personally seen a few cases where capital letters were ignored and all words were treated as starting with a small letter, no matter what letter they really started with, because it didn't matter.

I.Geller Bronze badge

Re: Just to further make people think ...

For example, another sentence: “The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.” If the word “feared” is selected, then “they” refers to the city council. If “advocated” is selected, then “they” presumably refers to the demonstrators.

Thus some patterns are lexical noise and should either be deleted or ignored somehow. The Microsoft team did that using a new method based on the semantic similarity between the pronoun and its antecedent candidates, which I patented as surrounding paragraphs' patterns. As a result Microsoft significantly improved the MT-DNN approach to NLU, and finally surpassed the estimate of human performance on the overall average GLUE score (87.6 vs. 87.1) on June 6, 2019.

As for Joy... You never know what programmers can invent over a beer; this is why quality control exists. A toy sketch of resolving the pronoun by pattern overlap follows.
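The sketch picks the antecedent whose surrounding patterns share the most words with the pattern containing the pronoun; the pattern sets below are invented for the example, and this is of course not Microsoft's MT-DNN:

```python
# Toy sketch: choose the antecedent whose surrounding patterns share the
# most words with the pattern containing the pronoun. The pattern sets
# are invented toy data; this is not MT-DNN.
def resolve(pronoun_words, candidates):
    def overlap(patterns):
        return len(pronoun_words & {w for p in patterns for w in p.split()})
    return max(candidates, key=lambda c: overlap(candidates[c]))

candidates = {
    "city councilmen": {"councilmen refused permit", "councilmen feared violence"},
    "demonstrators": {"demonstrators advocated violence", "demonstrators marched"},
}

print(resolve({"they", "feared", "violence"}, candidates))     # city councilmen
print(resolve({"they", "advocated", "violence"}, candidates))  # demonstrators
```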

I.Geller Bronze badge

I mean the following: there is a sentence "Alice and Greg swim with joy." If Google doesn't see each word's part of speech, then the word "joy" can be taken (by the computer) as a noun (a name), resulting in erroneous patterns when parsing the sentence.

If the word "joy" is a noun, then these patterns appear:

- Alice swims

- Greg swims

- joy swims.

If the word "joy" is an adjective, then these:

- Alice swims with joy

- Greg swims with joy.

People can immediately understand that the word "joy" is an adjective because it doesn't begin with a capital letter. But a computer must be pre-programmed accordingly. And if it isn't?

If it isn't, there are errors and, as a result, these terrible translations: this sentence can be found when the search request "Does Joy swim?" is used, even though it says nothing about Joy. Words assigned the wrong parts of speech can become a source of lexical noise, erroneous patterns and wrong translations.

This post is translated from Russian, using my skills and Google translate.

I.Geller Bronze badge

The terrible quality of translation is a direct consequence of their inability to remove lexical noise. I know this because I faced this problem many years ago.

Congrats, Nvidia and Google: You're still the best (out of five) at training neural networks

I.Geller Bronze badge

Re: Machine Learning is impossible without a dictionary!

To solve new problems - and the purpose of Machine Learning is the automatic finding and adding of new solutions, without human intervention - a system and method for finding these new solutions is required.

In order to find a new solution it must, at the very least, be somehow described - for example by a text (or an image), because a human can comprehend only texts and images.

Thus, sooner or later, everything comes down to the finding of texts and images. However computer recognition of images is far from perfect and textual annotations can help. Therefore, sooner or later, everything comes down to the finding of texts.

I presented my patented system and method of structuring and finding texts, which I called AI because NIST TREC thinks that such a system (if it works) is AI.

I.Geller Bronze badge

Machine Learning is impossible without a dictionary!

Machine Learning helps the computer find answers to questions, which is the essence of AI. But questions and answers are always texts! And it is impossible to understand any text without knowing the dictionary definitions of its words.

Dictionary:

1. Helps the computer understand the parts of speech of words and create the right patterns (as combinations of words). For example, in the sentence "Alice and Bob swim." there are two patterns:

- Alice swims

- Bob swims.

Without parts of speech the computer may extract only one pattern, "Alice and Bob swim", which is an error.

2. Helps to create tuples that give the computer the ability to understand texts. (A tuple is a sequence, or ordered list, of patterns.) That is, dictionary definitions multiply a text's size and the number of its patterns, and the computer can find the right patterns as an answer to the questions asked. (A toy sketch of points 1 and 2 follows at the end of this post.)

3. You can see how Google or Yandex translate without a dictionary: they practically do not work, or translate very poorly.

Machine Learning is impossible without a dictionary!
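Here is the toy sketch of points 1 and 2: a tiny hand-made dictionary supplies parts of speech, and its definitions expand each pattern into a tuple. The dictionary entries and the expansion rule are illustrative assumptions only:

```python
# Toy sketch of points 1 and 2 above. The tiny dictionary, its
# definitions and the expansion rule are illustrative assumptions.
DICTIONARY = {
    "alice": ("noun", "a female given name"),
    "bob": ("noun", "a male given name"),
    "swim": ("verb", "propel the body through water"),
    "and": ("conjunction", "used to connect words"),
}

def patterns(sentence):
    """Split a compound subject using parts of speech from the dictionary."""
    words = [w.lower().rstrip(".") for w in sentence.split()]
    subjects = [w for w in words if DICTIONARY.get(w, ("",))[0] == "noun"]
    verbs = [w for w in words if DICTIONARY.get(w, ("",))[0] == "verb"]
    return [f"{s} {v}" for s in subjects for v in verbs]

def as_tuple(pattern):
    """Expand a pattern with the dictionary definitions of its words."""
    return [(w, DICTIONARY[w][1]) for w in pattern.split() if w in DICTIONARY]

for p in patterns("Alice and Bob swim."):
    print(p, "->", as_tuple(p))
# alice swim -> [('alice', 'a female given name'), ('swim', 'propel ...')]
# bob swim -> [('bob', 'a male given name'), ('swim', 'propel ...')]
```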

I.Geller Bronze badge

A combination

I don't think using AI on its own, in isolation from traditional software solutions, is reasonable. Most likely the optimal use is a combination of AI and software. So when teaching AI you need to be flexible and think about what you are doing.

I.Geller Bronze badge

Machine Learning is the addition of structured chunks of text, contextually and subtextually targeted at the execution of certain tasks which this AI (for some reason, even having a set of structured texts) cannot perform. That is, the structured piece of text is a kind of program which says what to do.

For example, to perform a maneuver Tesla finds somewhere a paragraph:

-- The car abruptly slows down and starts turning right. At the same time it has to turn on the turn signal to the right. --

In this paragraph suppose Tesla removes lexical noise and identifies three synonymous clusters:

I. the car abruptly slows down

II. the car starts turning right

III. the car turns on the turn signal to the right

Now Tesla follows these instructions and checks the result. If the feedback gives a positive score, Tesla remembers the paragraph and uses it in situations where the sensors show the same conditions. A toy sketch of this loop follows.
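In the sketch a sensor situation selects stored clusters, they are executed, and the paragraph's clusters are kept only on positive feedback. The situation keys, clusters and the feedback signal are invented for illustration; this is not Tesla's code:

```python
# Toy sketch of the learn/act loop described above. Situation keys,
# clusters and the feedback signal are invented for illustration.
memory = {}  # sensor situation -> synonymous clusters (the "instructions")

def learn(situation, clusters, execute, feedback):
    execute(clusters)                  # follow the instructions
    if feedback() > 0:                 # positive score from the sensors
        memory[situation] = clusters   # remember the paragraph's clusters

def act(situation):
    return memory.get(situation, ["no stored instruction - fall back"])

clusters = ["the car abruptly slows down",
            "the car starts turning right",
            "the car turns on the turn signal to the right"]

learn("obstacle ahead, clear lane to the right", clusters,
      execute=lambda cs: print("executing:", cs),
      feedback=lambda: 1)

print(act("obstacle ahead, clear lane to the right"))
```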

You can see that Argo AI and Waymo agree with this definition of Machine Learning of mine:

"At the end of a test day, all the data gets ingested into a data center from the vehicles and the good stuff is analyzed and labeled. Raw data by itself doesn’t have much value for training the machine learning systems that form the core of modern AV systems. The objects in the data that are of interest including pedestrians, cyclists, animals, traffic signals and more. Before any sensor data can be used to train or test an AI system, all of those targets need to be labeled and annotated by hand so that the system can understand what it is “seeing.”

Tesla’s Autopilot losing track of devs crashing out of 'leccy car maker

I.Geller Bronze badge

Re: Artificial intelligence

AI finds answers, no more or less.

I.Geller Bronze badge

Waymo and Argo:

At the end of a test day, all the data gets ingested into a data center from the vehicles and the good stuff is analyzed and labeled. Raw data by itself doesn’t have much value for training the machine learning systems that form the core of modern AV systems. The objects in the data that are of interest including pedestrians, cyclists, animals, traffic signals and more. Before any sensor data can be used to train or test an AI system, all of those targets need to be labeled and annotated by hand so that the system can understand what it is “seeing.”

https://www.forbes.com/sites/samabuelsamid/2019/06/19/argo-ai-and-waymo-release-automated-driving-data-sets/

I.Geller Bronze badge

Re: Musk has set aggressive targets for Autopilot

I mean my AI database, which

- uses AI-parsing,

- employs blockchain technology,

- annotates with dictionary definitions,

- deletes lexical noise,

- constructs synonymous clusters.

In short, the AI database uses structured texts, which substitute for programs.

I.Geller Bronze badge

Re: Musk has set aggressive targets for Autopilot

It is not known how long ago Musk began to dabble in AI technology, particularly in the AI database... So it may turn out that he has in fact been closely engaged in AI for many years.

I.Geller Bronze badge

...has lost about 10 per cent of its staff... they have all the needed hardware in place and are simply waiting for the software to catch up...

There is my patented relational blockchain AI database, which Tesla can use right now (right after Tesla buys a license). All Tesla needs to do is install it and start using it. That is, Mr. Musk is right to dismiss those who are not able to change themselves and understand this new AI technology, those who continue to write code manually, not seeing that everyday language/texts can (without their participation) be structured into (in some sense) programs that replace all their code.

SQL Server 2008 finally shuffles into the home for retired relational databases

I.Geller Bronze badge

Re: SQL uses n-gram parsing, unable to retrieve the above phrases and weights

I came from the Philosophy of Language and develop the Internal Relations theory of Analytic Philosophy. AI-parsing came straight from there, and SQL's n-gram parsing from the External Relations theory.

I feel quite ready to discuss Moore, Russell and Wittgenstein, as well as Poincaré, Bradley, Hegel, Spinoza, Nicholas of Cusa and Maimonides, up to St Paul, John and Ecclesiastes. They are whom I studied; this is my field.

I.Geller Bronze badge

Re: And? I use whatever I want.

Well, if you insist? Good - I re-introduced Alice and Bob.

I.Geller Bronze badge

Re: I introduced you all to the indefatigable Alice and Bob.

And? I use whatever I want.

SQL uses n-gram parsing, unable to retrieve the above phrases and weights. So - goodbye SQL! Goodbye Larry E, SAP, IBM and 99.(9)% of all IT.

I.Geller Bronze badge

Re: AnzoGraph DB

You are right. Sometimes I want to do something crazy, like Jura... For example, to write about noun-pronoun phrases. Is it quite in his spirit?

I said: "A contextual phrase is a 'predicative definition' characterized by combinations of nouns and other parts of speech such as the verb and adjective (e.g. city-be-in)." For example, a paragraph about the same idle Alice and Bob:

- Alice and Bob are coming. She enjoys to walk.

The patterns:

- Alice is coming - 0.25

- Bob is coming - 0.25

- Alice and Bob are coming - 0.5

- Alice enjoys - 0.25

- Alice walks - 0.25

- Alice enjoys to walk - 0.5

- she enjoys to walk - 0.5

- she enjoys - 0.25

- she walks - 0.25.

- she enjoys to walk - 0.25

The number-weights (statistics) indicate the phrases' importance - the greater the weight, the more important the phrase. (This is my Differential Linguistics.)

There is a nouns-names phrase here ("Alice and Bob are coming"); its relatively higher weight (0.5) emphasizes its greater importance. A toy sketch of filtering patterns by weight follows.
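The sketch simply takes the weighted patterns listed above and keeps only the heaviest ones; the 0.5 cut-off is an arbitrary assumption:

```python
# Toy sketch: keep only the heaviest patterns from the list above.
# The 0.5 cut-off is an arbitrary assumption.
weights = {
    "Alice is coming": 0.25, "Bob is coming": 0.25,
    "Alice and Bob are coming": 0.5,
    "Alice enjoys": 0.25, "Alice walks": 0.25, "Alice enjoys to walk": 0.5,
    "she enjoys": 0.25, "she walks": 0.25, "she enjoys to walk": 0.5,
}

important = {p: w for p, w in weights.items() if w >= 0.5}
print(sorted(important))
# ['Alice and Bob are coming', 'Alice enjoys to walk', 'she enjoys to walk']
```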

I strongly believe Microsoft used this strategy when structuring the sentence “The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.” This sentence is part of a paragraph, or is surrounded by paragraphs, which have their own synonymous clusters about the city councilmen and the demonstrators. From them Microsoft can conclude what is going on: if the word “feared” is selected, then “they” refers to the city council. If “advocated” is selected, then “they” presumably refers to the demonstrators.

As a direct result Microsoft significantly improved the MT-DNN approach to NLU, and finally surpassed the estimate of human performance on the overall average GLUE score (87.6 vs. 87.1) on June 6, 2019. That happened a few days after I introduced you all to the indefatigable Alice and Bob.

I.Geller Bronze badge

Re: AnzoGraph DB

What exists and is called "AI" is worthless until a trustworthy Patent Search (based on this AI concept) is created. Only that is the litmus test, the fact that can confirm or deny the validity of this AI mechanism.

Do not forget that AI answers questions? And that a Patent Search would allow the most objective assessment of this AI's quality? The US Patent Database is the most researched, most extensive and most accurate database on the planet.

Without this test SQL remains the only reliable method for storing and searching information.

I.Geller Bronze badge

Re: AnzoGraph DB

You probably have not heard of the US patent 6,199,067 and PA Advisors v Google? Please read?

I.Geller Bronze badge

It's great to have facts! They give you emotions, and you give them the facts...

I.Geller Bronze badge

Re: AnzoGraph DB

Yes, I have a finished product. But now I have to prove its validity. To do this I need a serious test. Have you any idea how much it would cost to structure, for example, the Patent Database of the United States? I know, and that's why I sit tight, hoping somebody will come and risk a small fortune. Structuring is actually quite an expensive pleasure.

I.Geller Bronze badge

Try Google? Is it hard? You tried my first patent on AI (PA Advisors v Google).

My new ones are just as simple; the only thing you have to do is turn on your computer, that's it.

I.Geller Bronze badge

Re: Yes, I do claim that.

Read United States Patent 8,447,789?

I can explain here only basics, general and patented ideas.

I.Geller Bronze badge

Yes, I do claim that.

Yes, I do claim that.

There is a sentence "Alice and Bob walk".

N-gram parsing produces only one phrase

- Alice and Bob walk.

"In the fields of computational linguistics and probability, an n-gram is a CONTINUOUS sequence of N items from a given sample of text or speech".

My AI-parsing produces three phrases here:

-- Alice walks

-- Bob walks

-- Alice and Bob walk.

Thus my AI database technology includes SQL's n-gram parsing and adds a new feature. A toy side-by-side of the two parsings follows.
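The sketch contrasts the two parsings for this sentence; the compound-subject splitting rule and the naive verb agreement are my simplifying assumptions, not the patented method:

```python
# Toy side-by-side of n-gram parsing vs. the compound-subject splitting
# described above. The splitting rule and the naive verb agreement are
# simplifying assumptions, not the patented method.
def ngram_parse(sentence):
    # The clause stays one contiguous phrase.
    return [sentence.rstrip(".")]

def ai_parse(sentence):
    words = sentence.rstrip(".").split()
    if "and" not in words:
        return [" ".join(words)]
    i = words.index("and")
    subjects, predicate = [words[i - 1], words[i + 1]], words[i + 2:]
    verb = predicate[0] + "s"  # naive singular agreement: walk -> walks
    split = [" ".join([s, verb] + predicate[1:]) for s in subjects]
    return split + [" ".join(words)]  # the full phrase is kept as well

print(ngram_parse("Alice and Bob walk."))  # ['Alice and Bob walk']
print(ai_parse("Alice and Bob walk."))
# ['Alice walks', 'Bob walks', 'Alice and Bob walk']
```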

I.Geller Bronze badge

Re: AnzoGraph DB

Money makes the world go around

...the world go around

...the world go around

Money makes the world go around

It makes the world go 'round.

No money.

I.Geller Bronze badge

Re: You may order one right now!

Listen! This whole AI story has its roots in replacing n-gram parsing with AI-parsing. There is a sentence:

- Alice and Bob like strawberries.

N-gram parsing delivers only one phrase:

-- Alice and Bob like strawberries.

AI parsing gets two phrases:

-- Alice likes strawberries.

-- Bob likes strawberries.

That's the only difference between SQL and Artificial Intelligence technologies. Nothing else.

I.Geller Bronze badge

AnzoGraph DB

There is another database though: Cambridge Semantics' AnzoGraph DB.

Its basic RDF model consists of the subject-predicate-object triple. So, if there is a triple "Alice loves champagne" AnzoGraph DB does not see these patterns:

- Alice loves

- Champagne is loved

I, however, patented the subject-predicate-object triple for the AI database, as well as all other kinds of doubles, triples, quadruples, etc.

That is, AnzoGraph DB is not a database, since it loses information and cannot be trusted. My patented AI database, however, loses nothing and is completely trustworthy. Plus, mine finds information in its context and subtexts, while AnzoGraph DB cannot ("That means that we have no way to identify the origin of a particular triple or record when it was asserted. Adding triples into a triple store loses context that is useful for many applications" https://www.cambridgesemantics.com/blog/semantic-university/semantic-web-design-patters/semantic-search-semantic-web-2-2/). A toy sketch of the patterns that can be read out of one triple follows.
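The sketch shows the extra doubles hidden inside a single subject-predicate-object triple; the passive-voice rendering is a naive assumption made only for this example:

```python
# Toy sketch: the doubles hidden inside one subject-predicate-object
# triple. The passive rendering is a naive assumption for illustration.
def expand(subject, predicate, obj):
    passive = predicate.rstrip("s") + "d"    # loves -> loved (very naive)
    return [
        f"{subject} {predicate}",            # Alice loves
        f"{obj} is {passive}",               # champagne is loved
        f"{subject} {predicate} {obj}",      # the original triple
    ]

for pattern in expand("Alice", "loves", "champagne"):
    print(pattern)
```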

I.Geller Bronze badge

Yes, you can forget about SQL like a bad dream. You can buy an AI database instead.

I.Geller Bronze badge

An SQL database is not the database everybody wants! At all! And certainly not the relational database everybody seeks! Indeed, all records in SQL are pre-prepared, i.e. sorted manually; uniqueness is absent; relations between entries are established manually. Isn't that a shame? I smell sulfur... the darkest Middle Ages...

The AI relational blockchain database is radically different! Everything is done automatically: all records are automatically annotated with texts, the blockchain hierarchy is automatically built and all records are automatically sorted. For example, all text entries are automatically annotated with dictionary definitions (which makes them all absolutely unique), and numbers, symbols and images are annotated with text. Isn't that the miracle you have waited for so long? You may order one right now!

Got an 'old' Tesla? Musk promises 'self-driving' upgrade chip ship by end of 2019

I.Geller Bronze badge

Re: "Full self driving"

This is my technology:

Microsoft’s MT-DNN Achieves Human Performance Estimate on General Language Understanding Evaluation (GLUE) Benchmark

https://blogs.msdn.microsoft.com/stevengu/2019/06/20/microsoft-achieves-human-performance-estimate-on-glue-benchmark/

What else do you need?

I already once invested everything, in 2002. Read NIST TREC?

I.Geller Bronze badge

Re: "out of date limits database"

Nothing will be missed! This is the AI database; I created it specifically so that no information is lost. I guarantee it!

I.Geller Bronze badge

Re: "Full self driving"

If I had the funding I would have done it. It's mostly routine work, the technology itself is easy.

AWS's upgraded DeepLens AI camera zooms in on Europe

I.Geller Bronze badge

Two problems

Having a large enough volume of manually annotated (labeled) pictures, Amazon can already create a system for the automatic annotation of newly incoming images by analyzing graphically similar images. At the same time Amazon is likely to face two problems, the main one being lexical noise.

Here Amazon will have to somehow find texts related in some way to the pictures, and structure them into patterns/synonymous clusters in order to remove noise, following Microsoft's example in this. For example, there is the sentence: “The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.” If the word “feared” is selected, then “they” refers to the city council. If “advocated” is selected, then “they” presumably refers to the demonstrators.

Then either

- city councilmen feared

or

- demonstrators feared

is lexical noise.

Then either

- city councilmen advocated

or

- demonstrators advocated

is lexical noise.

"Microsoft has significantly improved the MT-DNN approach to NLU, and finally surpassed the estimate for human performance on the overall average score on GLUE (87.6 vs. 87.1) on June 6, 2019." That is, Microsoft deletes lexical noise using this antecedent patterns. So this problem Amazon can solve.

The second problem is the insufficient size of the annotating texts. Amazon can solve this problem by annotating words with dictionary definitions, which it already does, and successfully.

I.Geller Bronze badge

The "Object detection" project

The hunt for image descriptions has begun! Amazon should either give the cameras away for free or share the proceeds.
