* Posts by IlyaG.

117 posts • joined 30 Aug 2019


Whoa, bot wars: As cybercrooks add more AI to their arsenal, the goodies will have to too

IlyaG.

For example, in an SQL database two instances of the number 78.12 are exactly the same unless they are explained in some way. By annotating them, AI technology makes them unique and distinct within the AI database. One stands for temperature, one for pressure: one annotation includes patterns of temperature, the other patterns of pressure.
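
A toy illustration of this annotation idea (the field names and annotation texts here are my own, purely illustrative):

```python
# Two readings whose numeric values are identical, as in an SQL column,
# become distinct records once each carries its own annotation.
readings = [
    {"value": 78.12, "annotation": "temperature of the boiler, Celsius"},
    {"value": 78.12, "annotation": "pressure in the intake line, kPa"},
]

print(readings[0]["value"] == readings[1]["value"])  # the numbers are identical
print(readings[0] == readings[1])                    # the records are unique
```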

IlyaG.

Re: Understand?

You can, of course, say that the principle is associated with Leibniz... But listen, I am fighting Moore, Russell and Wittgenstein. So I decided to refer to Moore.

IlyaG.

1. Google and FB create uniqueness by stealing patterns, let's say 50-100 patterns per word. They sell what they steal to advertisers, who want to know to whom, and what, they can sell.

AI creates much better uniqueness by annotating texts with structured texts. For example, AI technology describes each word with 1,000 thematically targeted patterns, or many more.

2. Google and FB cannot steal without the Internet; it is the only place where they can steal.

AI annotates locally, using publicly available dictionaries and encyclopedias. AI technology does not need to steal anything.

IlyaG.

Understand?

The creation of uniqueness is the basic concept behind AI, the solution to Moore's unsolved problem of the Identity of Indiscernibles.

SQL knows no uniqueness!

My AI technology creates uniqueness by describing everything with structured texts.

IlyaG.

Re: This will not end well

I'm serious, AI technology exists.

"3. The method of claim 1 wherein each context phrase is a combination of a noun with other parts of speech at least one of which is a verb or an adjective."

8,504,580***

I believe the best is three words (plus an article), three parts of speech plus an article.

If three, each of a pattern's words has a unique dictionary and/or encyclopedia definition, 30-500 patterns long or much more. Therefore each context phrase is described by 120-2,000 (and many more) patterns.

A few context phrases, 2-100 (and more), constitute a synonymous cluster, which is described by, let's say, 240-10,000 (and more) patterns. Therefore each cluster is absolutely unique and can be found in no time and very cheaply: milliseconds, and a tiny fraction of one US cent.

A search query is filtered through a user profile of, let's say, 30-1,000 (and more) patterns.

All the above means you can find precisely what you want, very fast and very cheaply.
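
The arithmetic behind those figures, as I read them, is simply multiplicative (a sketch only, assuming a 4-word phrase and 30-500 definition patterns per word):

```python
# 3 content words + 1 article = 4 words per context phrase
words_per_phrase = 3 + 1
defs_low, defs_high = 30, 500                # definition patterns per word

phrase_low = words_per_phrase * defs_low     # low end per context phrase
phrase_high = words_per_phrase * defs_high   # high end per context phrase

# a synonymous cluster of at least 2 such phrases, at the low end:
cluster_low = 2 * phrase_low

print(phrase_low, phrase_high, cluster_low)  # 120 2000 240
```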

*** AI parsing:

"...at least some of the paragraphs comprising multiple sentences, and at least some of the sentences comprising multiple clauses identified based upon figures of speech and punctuation; obtaining a first set of respective context phrases from the received paragraphs, which context phrases are obtained from the respective clauses and are indicative of the context of the respective paragraphs; obtaining respective weights of the context phrases using parameters related to frequency of occurrence of a context phrase relative to other context phrases or to absolute number of occurrences of a context phrase therein..."

8,504,580

It is a standard: it always produces the same result, as you can see from this sentence:

-- Alice, Ruslan and Tom laugh happily. Ruslan is funny!

There are four patterns here:

- Alice laughs happily

- Tom laughs happily

- Ruslan laughs happily

- Ruslan is funny

There is this synonymous cluster here:

- Ruslan laughs happily

- Ruslan is funny

The dictionary definition for the name Ruslan:

"Ruslan (Russian: Руслан) is a Russian-Slavic masculine given name used mainly in Russia and in other CIS states. ... The name is a Russian variant from the Turkic word arslan/aslan, which is translated as lion. The name Eruslan is another variant of the form Ruslan."

After AI-structuring it becomes, let's say, 30 patterns. Each of the patterns' words also has a unique dictionary definition, which can be added: the name Ruslan is described by, let's say, 5,000 patterns?

The word "laugh": https://www.merriam-webster.com/dictionary/laugh

The word "happily": https://www.merriam-webster.com/dictionary/happily

The word "is": https://www.merriam-webster.com/dictionary/is

The word "funny": https://www.merriam-webster.com/dictionary/funny

The word "name": https://www.merriam-webster.com/dictionary/name

...........................................................

Thus everything becomes uniquely described in a structured format and can be easily found, and nothing is lost. Not a word, not a sign, not a number!
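
As a sketch only, under crude assumptions of my own (subjects are the leading capitalized words, verb inflection is left untouched, real parsing would use parts of speech), distributing a shared predicate over conjoined subjects might look like:

```python
import re

def split_subjects(sentence):
    """Distribute a shared predicate over conjoined subjects. Subjects are
    taken to be the leading capitalized words joined by ',' / 'and' — a
    crude heuristic standing in for part-of-speech analysis."""
    words = sentence.split()
    subjects, i = [], 0
    while i < len(words):
        w = words[i].rstrip(",")
        if w == "and":
            i += 1
            continue
        if w[0].isupper():
            subjects.append(w)
            i += 1
            # stop collecting subjects once the predicate starts
            if i < len(words) and words[i] != "and" and not words[i][0].isupper():
                break
        else:
            break
    predicate = " ".join(words[i:])
    return [f"{s} {predicate}" for s in subjects]

text = "Alice, Ruslan and Tom laugh happily. Ruslan is funny!"
patterns = [p for s in re.split(r"[.!?]", text) if s.strip()
            for p in split_subjects(s.strip())]
# four patterns, one per subject-predicate pair (inflection not normalized)

cluster = [p for p in patterns if p.startswith("Ruslan")]
# the two "Ruslan" patterns form the synonymous cluster described above
```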

>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Can this be applied to malware? How do I know?

IlyaG.

Sponsor my AI database? Cybercrime becomes simply impossible and you will be happy?

Otherwise, you will spend a lot more on AI viruses.

- You can make money,

- Sleep calmly,

- And benefit humanity by returning our privacy to us

or

- endlessly fight AI viruses.

IlyaG.

Re: When a Race is a Rout the Only Successful Route for the Vanquished is a Humble Pie Feast?

...Where the hell have you been all of this time, and what the hell have you not been doing, Rob?...

Corporations shut me up and started a massive misinformation campaign, involving all the media and the most powerful Government agencies, just to prevent AI technology from becoming known. After all, they would all lose everything...

So Rob is not alone.

IlyaG.

Re: A Vision

The only way to avoid the catastrophe is to take everything valuable that you have offline, leaving online only what costs nothing and can be easily replaced.

And if I were you I would do it as soon as possible.

IlyaG.

Since the Internet will soon be replaced by an AI database, personal profiles will become the basis for identification. They are completely unique; no two can be identical. So when someone asks permission to access information, there is an opportunity to find out beforehand who wants to see it.

IlyaG.

The best way, I think, is not to leave anything of value online, only structured data. All originals offline only! Only offline, nothing online! The structured part is sufficient for searching, but to view the originals one needs special permission. Otherwise you have got yourself a very serious problem.

IlyaG.

Yes, the AI is able to attack. How?

AI gathers information, translates it into textual format, finds a solution in the AI database, then attacks.

Can a human resist an AI attack?

I think not...

Service call centres to become wasteland and tumbleweed by 2024

IlyaG.

Re: Important!

"3. The method of claim 1 wherein each context phrase is a combination of a noun with other parts of speech at least one of which is a verb or an adjective."

8,504,580***

I believe the best is three words (plus an article), three parts of speech plus an article.

If three plus an article, each of a pattern's words has a unique dictionary and/or encyclopedia definition, 30-500 patterns long or much more. Therefore each context phrase is described by 120-2,000 (and many more) patterns.

A few context phrases, 1-100 (and more), constitute a synonymous cluster, which is described by, let's say, 240-10,000 (and more) patterns. Therefore each cluster is absolutely unique and can be found in no time and very cheaply: milliseconds, and a tiny fraction of one US cent.

A search query is filtered through a user profile of, let's say, 30-1,000 (and more) patterns.

All the above means you can find precisely what you want, very fast and very cheaply.

*** AI parsing:

"...at least some of the paragraphs comprising multiple sentences, and at least some of the sentences comprising multiple clauses identified based upon figures of speech and punctuation; obtaining a first set of respective context phrases from the received paragraphs, which context phrases are obtained from the respective clauses and are indicative of the context of the respective paragraphs; obtaining respective weights of the context phrases using parameters related to frequency of occurrence of a context phrase relative to other context phrases or to absolute number of occurrences of a context phrase therein..."

8,504,580

It is a standard: it always produces the same result, as you can see from this sentence:

-- Alice, Ruslan and Tom laugh happily. Ruslan is funny!

There are four patterns here:

- Alice laughs happily

- Tom laughs happily

- Ruslan laughs happily

- Ruslan is funny

There is this synonymous cluster here:

- Ruslan laughs happily

- Ruslan is funny

The dictionary definition for the name Ruslan:

"Ruslan (Russian: Руслан) is a Russian-Slavic masculine given name used mainly in Russia and in other CIS states. ... The name is a Russian variant from the Turkic word arslan/aslan, which is translated as lion. The name Eruslan is another variant of the form Ruslan."

After AI-structuring it becomes, let's say, 30 patterns. Each of the patterns' words also has a unique dictionary definition, which can be added: the name Ruslan is described by, let's say, 5,000 patterns?

The word "laugh": https://www.merriam-webster.com/dictionary/laugh

The word "happily": https://www.merriam-webster.com/dictionary/happily

The word "is": https://www.merriam-webster.com/dictionary/is

The word "funny": https://www.merriam-webster.com/dictionary/funny

The word "name": https://www.merriam-webster.com/dictionary/name

...........................................................

Thus everything becomes uniquely described in a structured format and can be easily found, and nothing is lost. Not a word, not a sign, not a number!

IlyaG.

Re: Back to mainframes...

IBM Watson is a crude forgery: foolish programmers are trying to do what requires thinking.

IBM Watson does not work: "In the above example, the words "in" and "the" are useless words (noisewords). By default, Watson™ Explorer Engine considers each word equally (all the words in the search are equally important). To perform a search that truly reflects the user's intentions, noisewords should be removed."

IBM Watson intentionally loses information! It cannot be trusted!

So I remain the only one who has passed NIST TREC QA and found how to structure texts and find information. Which is, in fact, Artificial Intelligence.

IlyaG.

Re: The right technology is everything!

Tell me, what else can I refer to? What else can I use if I'm the only one really doing AI, and my patents are the only publications on AI? I have nothing else to refer to.

IlyaG.

Re: The right technology is everything!

Forget about AI in English without articles!

4. The method of claim 3, wherein each predicative phrase is a combination of a noun, verb, adjective, and an article.

8,447,789

IlyaG.

The right technology is everything!

For example:

-- Alice sweats. The heat weakens her.

and these templates are derived:

- Alice sweats

- The heat weakens

- she is weakened

- Alice is weakened.

And you get the search query

- Does electrifying heat weaken?

Using IBM technology and throwing out the article, the query doesn't match the paragraph, because

- heat weaken

and

- electrifying heat weakens

are two different patterns!

However, using my AI technology

- The heat weaken

and

- electrifying heat weakens

are the same pattern, because the article and the word "electrifying" are the same; see the thesaurus at https://www.thesaurus.com/browse/the?s=t.

The wrong technology, like IBM's, is used by call centers; mine is the right one.
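
Here is how the word-by-word match described above might look as a sketch. Note that the "the" ~ "electrifying" equivalence is this post's own claim, encoded as data, and the one-letter stemmer is my own crude stand-in for real morphology:

```python
# Synonym pairs are supplied as data; the pair below is the post's claim.
SYNONYMS = {("the", "electrifying"), ("electrifying", "the")}

def stem(word):
    # crude stemming: drop a trailing 's' so "weaken" matches "weakens"
    w = word.lower()
    return w[:-1] if w.endswith("s") else w

def words_match(a, b):
    a, b = stem(a), stem(b)
    return a == b or (a, b) in SYNONYMS

def patterns_match(p, q):
    """Two patterns match if they align word-for-word, up to synonyms."""
    pw, qw = p.split(), q.split()
    return len(pw) == len(qw) and all(words_match(a, b) for a, b in zip(pw, qw))

print(patterns_match("The heat weaken", "electrifying heat weakens"))  # matches
print(patterns_match("heat weaken", "electrifying heat weakens"))      # article dropped: no match
```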

IlyaG.

The right technology is everything!

I mean that likely no call center uses AI technology! They're all following IBM's lead: disfiguring language, using only AI-parsing and ignoring the rest of AI technology.

The ultimate purpose of AI technology is to create and index synonymous clusters and to establish cause-and-effect links between them (the so-called "blockchain" technology). AI-parsing alone is nothing, as OpenAI shows.

IlyaG.

The right technology is everything!

For example, they at IBM said:

"In the above example, the words "in" and "the" are useless words (noisewords). By default, Watson™ Explorer Engine considers each word equally (all the words in the search are equally important). To perform a search that truly reflects the user's intentions, noisewords should be removed."

https://www.ibm.com/support/knowledgecenter/en/SS8NLW_12.0.0/com.ibm.watson.wex.fc.nlq.doc/c_wex_nlq_noiseword_dictionary.html

"Should be removed"? But these words - "in" and "the" - were used! They were important to a person! And they were barbarically removed...

How can you trust IBM, for example, knowing what they are doing?

The right technology is everything! Call centers need it! The wrong technology leads to failure: IBM Watson and call centers that don't respect language are doomed.

IlyaG.

The technology is used only partially; there is no construction of an AI database. That is, only pattern extraction is used, and there is no

1. dictionary indexing,

2. construction of synonymous clusters with their indexing relative to each other,

3. accounting for timestamps - hence no causality.

As a result AI is not used.

IlyaG.

NIST TREC QA

How does it work? There is an AI database. It consists of two interconnected parts:

1. Original texts,

2. Their structured representations.

- A client asks a question.

- In the structured part an answer is found, or a counter-question is asked. (This is regulated by the instrument of Compatibility (Topology, Set Theory): if it is low, a counter-question is asked.)

- From the original part the answer is retrieved and given to the client. It can be modified, based on its timestamp and the client's question.

That's the technology.

Obviously, the larger the database, the more records of conversations it contains, and the better it works.

The technology was tested at NIST TREC QA and patented. IBM tested it at Jeopardy!
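
A minimal sketch of the two-part database described above: structured patterns point back at original texts, and a low overlap (standing in for the "Compatibility" instrument) triggers a counter-question instead of an answer. All data, names, and the threshold are illustrative:

```python
DB = {
    "originals": {1: "Alice and Tom train cheerfully."},
    "patterns": {1: {"alice trains cheerfully", "tom trains cheerfully"}},
}

def answer(query_patterns, threshold=0.5):
    """Return the original text whose structured patterns best overlap the
    query; below the threshold, ask a counter-question instead."""
    best_id, best_score = None, 0.0
    for doc_id, pats in DB["patterns"].items():
        overlap = len(query_patterns & pats) / max(len(query_patterns), 1)
        if overlap > best_score:
            best_id, best_score = doc_id, overlap
    if best_score < threshold:
        return "Counter-question: could you say that differently?"
    return DB["originals"][best_id]

print(answer({"alice trains cheerfully"}))  # found: the original text
print(answer({"bob sings loudly"}))         # low compatibility: counter-question
```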

IlyaG.

Re: No more call centers? What a wonderful world we live in!

Very different. With an AI database it can make and correct mistakes, and ask if it doesn't understand, exactly like a person.

IlyaG.

Important!

There is AI technology which is advanced enough that it can't be screwed up by a user: my AI database technology. If the synonymous clusters are correlated properly and the blockchain connections among them are properly established, then the AI database is quite resistant to training and may carry on very sensible conversations.

Facebook: Remember how we promised we weren’t tracking your location? Psych! Can't believe you fell for that

IlyaG.

Re: Fb tracks everything

The fundamental business idea of AI technology is that information/texts, once structured (into synonymous clusters), can very accurately find people's personal profiles, which are the same kind of structured (into synonymous clusters) information/texts.

That is, no intermediaries, no search engines are needed! The information itself can very precisely find the users who want it. Google and FB are simply not needed.

IlyaG.

OpenAI

1) annotates patterns (and not their words),

2) annotates with full texts, which are the direct analogues of encyclopedia articles - and which costs a lot!,

3) OpenAI structures texts on its own computers,

4) does not create synonymous clusters, and is unable to expand search queries into hundreds and thousands of correlated search patterns,

5) doesn't use blockchain technology and cannot trace causalities.

My AI technology

1) annotates each pattern's words, i.e. multiple unique dictionary definitions for each pattern - a unique representation for each pattern and its synonymous cluster,

2) it annotates using both dictionary definitions and articles from encyclopedias,

3) lets users structure texts on their own computers and submit only the synonymous clusters, for which they gain 100% absolute privacy when searching the AI database,

4) the technology creates synonymous clusters and expands search queries to hundreds and thousands of search patterns,

5) using blockchain my AI technology tracks causalities.

IlyaG.

Look at OpenAI. They're asking for billions for what, using synonymous clusters, is worth a fraction of one cent. Tens and hundreds of billions of times cheaper!

Not to mention the fact that an accurate search without dictionary-indexed synonymous clusters is almost impossible.

I know what I'm talking about, I managed to beat IBM at NIST TREC QA, having only $4,000.

The only questions I could not answer were Definition questions, because structuring 6.3 million texts would then have cost hundreds of thousands, possibly millions, of dollars - and I had only $4,000...

IlyaG.

Of course, AI-parsing makes sense only as part of constructing synonymous clusters. For example, take this paragraph:

-- Alice and Tom are petting a cow. They love animals. She strokes it lovingly.

For instance these patterns are obtained:

- and Alice is petting a cow

- and Tom is petting a cow

- a cow is petted

- Alice loves animal

- Tom loves animal

- Alice and Tom love animal

- they love animal

- She strokes lovingly

- Alice strokes lovingly

- a cow is stroked lovingly.

For Alice this synonymous cluster is constructed:

- and Alice is petting a cow

- Alice loves animal

- She strokes lovingly

- Alice strokes lovingly.

Tom:

- and Tom is petting a cow.

They:

- Alice and Tom love animal

- they love animal

And for cow:

- a cow is petted

- a cow is stroked lovingly.

AI-parsing as such is a means, not a goal: by itself it is useless because

1) patterns extracted through it may contain lexical noise,

2) even after removing that noise, these patterns need further processing to translate them into a searchable format.

For example the pattern:

- a cow is petted

is not in the original paragraph! It is constructed, and contains an explicit indication of an implicitly implied action: the cow is petted by Alice and Tom. Without constructing this pattern, the paragraph may not be found as the answer to the question "Was this cow stroked by Alice and Tom together?" in the context of a conversation about Alice and Tom.

So be extremely careful with all these supposedly-AI search and talking products they sell - for them, AI-parsing is the ultimate and final goal, not a means of constructing synonymous clusters. They're probably selling bullshit. Buy nothing from them until they demonstrate how they construct synonymous clusters!
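
The grouping above can be sketched as follows. The patterns are simplified from the list (the leading "and" dropped), and the pronoun resolution is hand-supplied: the coreference step is the genuinely hard part and is only stubbed here as a dict:

```python
PATTERNS = [
    "Alice is petting a cow", "Tom is petting a cow", "a cow is petted",
    "Alice loves animal", "Tom loves animal", "they love animal",
    "she strokes lovingly", "a cow is stroked lovingly",
]
COREF = {"she": "alice"}  # hand-supplied pronoun resolution

def referent_of(pattern):
    """The entity a pattern is about: resolve pronouns via COREF, and treat
    'a/the X ...' patterns as being about X."""
    words = pattern.lower().split()
    if words[0] in COREF:
        return COREF[words[0]]
    if words[0] in {"a", "the"}:
        return words[1]
    return words[0]

clusters = {}
for p in PATTERNS:
    clusters.setdefault(referent_of(p), []).append(p)
# clusters: 'alice' -> 3 patterns, 'tom' -> 2, 'cow' -> 2, 'they' -> 1
```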

IlyaG.

Re: Privacy attack from everywhere

Not only that! AI is privacy technology! AI gets all the information from texts offline; it doesn't need to spy online. That is, AI constructs profiles on local devices, and personal profiles are the sole property of their owners.

IlyaG.

n-gram was a standard.

Merriam-Webster:

: something established by authority, custom, or general consent as a model or example

: something set up and established by authority as a rule for the measure of quantity, weight, extent, value, or quality

an n-gram is a contiguous sequence of n items from a given sample of text or speech = the measure of extent = model.

AI-gram = the measure of value (weight) and quality (parts of speech) = model.

n-gram was used by absolutely everybody, it was a standard.

Agree?

IlyaG.

"...obtaining respective weights of the context phrases using parameters related to frequency of occurrence of a context phrase relative to other context phrases or to absolute number of occurrences of a context phrase therein..."

8,504,580

-- Alice and Tom sing, she is happy!

- and Alice sings 0.25

- and Tom sings 0.25

- Alice is happy 0.25

- she is happy 0.25

You cannot get any other patterns or weights!
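
Those 0.25 figures fall out of a simple relative-frequency count: four patterns, one occurrence each (a sketch of the weighting, not of the parsing itself):

```python
from collections import Counter

# The four patterns obtained from "Alice and Tom sing, she is happy!",
# weighted by their share of all patterns extracted from the sentence.
patterns = ["and Alice sings", "and Tom sings", "Alice is happy", "she is happy"]
counts = Counter(patterns)
total = sum(counts.values())
weights = {p: c / total for p, c in counts.items()}
print(weights["and Alice sings"])  # 0.25
```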

"In the fields of computational linguistics and probability, an n-gram is a contiguous sequence of n items from a given sample of text or speech. The items can be phonemes, syllables, letters, words or base pairs according to the application. The n-grams typically are collected from a text or speech corpus."

The n-gram was a standard because it used to be the only method; everybody had to use it.

The n-gram can be called a "standard" informally: most people adhered to it because nothing else existed.

Why can't my AI-parsing be recognized by the Government? AI-parsing replaces the n-gram.

IlyaG.

It really is a standard: a method that always gives the same result under the same circumstances. Can it be an industry standard, like the n-gram? Can it be Federal?

The bottom line is that AI parsing always produces the same result when analyzing the same text. The same patterns and synonymous clusters.

Why Federal? Because the Federal government, speaking about standards, refers to this definition:

"delimiter: 1. A character used to indicate the beginning and end of a character string, i.e., a symbol stream, such as words, groups of words, or frames. 2. A flag that separates and organizes items of data." (Federal Standard 1037C - Telecommunications: Glossary of Telecommunication Terms, https://www.its.bldrdoc.gov/fs-1037/dir-011/_1544.htm)

If it does, I think it's logical to make AI-parsing a Federal standard.

IlyaG.

Yes. AI technology is about the structuring of texts, bringing them into a computer-friendly format. Understanding these texts, a computer can find the necessary information and

1. Display it;

2. Use as an instruction and act.

IlyaG.

Do you mean the sentence

-- Alice and Tom train cheerfully, she runs briskly.

has thirty seven phrases? Or only twelve?

IlyaG.

delimiter: 1. A character used to indicate the beginning and end of a character string, i.e., a symbol stream, such as words, groups of words, or frames. 2. A flag that separates and organizes items of data.

https://www.its.bldrdoc.gov/fs-1037/dir-011/_1544.htm

The essence of AI-parsing is splitting a sentence into clauses and constructing phrases by parts of speech, in accordance with delimiters. For example, in the sentence

-- Alice and Tom train cheerfully, she runs briskly.

the comma is a delimiter, and AI-parsing gets four phrases:

- Alice trains cheerfully

- Tom trains cheerfully

- Alice runs briskly

- she runs briskly,

while n-gram parsing gets at most two:

- Alice and Tom train cheerfully

- she runs briskly.

At the same time, AI-parsing is objective: it always gets the same results for everyone. Other Federal Standards - for example, length and weight standards - also always give the same results; that allows me to claim that I discovered and patented another standard, one which is eventually to become Federal.

Sure, I overdo it in calling my discovery a Federal Standard now, because it has not yet become one. I do so for a polemical reason, aggravating the situation to an extreme that demands immediate action - like daring someone to prove that I'm an impostor and don't really have anything.
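
The delimiter-versus-n-gram contrast can be sketched like this. The comma split and the "X and Y" expansion are toys of my own; note the sketch deliberately omits the pronoun-resolution step that would also yield "Alice runs briskly":

```python
sentence = "Alice and Tom train cheerfully, she runs briskly."
# the comma is the delimiter: split the sentence into its clauses
clauses = [c.strip(" .") for c in sentence.split(",")]

def distribute(clause):
    """Expand 'X and Y <predicate>' into one phrase per subject."""
    if " and " in clause:
        first, rest = clause.split(" and ", 1)
        second, predicate = rest.split(" ", 1)
        return [f"{first} {predicate}", f"{second} {predicate}"]
    return [clause]

ai_phrases = [p for c in clauses for p in distribute(c)]
print(ai_phrases)  # per-subject phrases plus the second clause
print(clauses)     # two clause-level phrases, the n-gram-style maximum
```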

IlyaG.

This is the only kind of publication that "they" have failed to choke off. Publications in refereed journals? No, "they" don't let me.

By commenting, I hope to be read by those who want to make an unimaginable amount of money and use my ideas set forth in the patents. I'm not allowed to by the same people who robbed me and left me to die in poverty and illness.

IlyaG.

Anyway, you know that the basis of AI is a Federal Standard. You also know about PA Advisors v Google - I was the one who started it, 20 years ago.

IlyaG.

Perhaps this is the policy of some Government agencies run by officials who made money on Google corruption? Who else can run the U.S. Patent Office? Who else could have enough clout to ban a Federal Standard?

IlyaG.

Indeed, I have patented a Federal standard and no one is saying anything.

IlyaG.

Sorry, I wanted to say millions.

I patented a Federal standard on how to structure texts.

IlyaG.

The use of my property - a Federal Standard that belongs to me - removes any reasonable need to spy on us.

IlyaG.

There are three delimiters (commas) here.

Thus I patented the Federal Standard.

So hats off! I patented the Standard!

Oracle's Mark Hurd hits pause as co-CEO, says he needs time to deal with health issues

IlyaG.

I hope Oracle will soon cease to exist and he will have nowhere to go back to. The AI database will soon displace SQL!

MongoDB Doubles Down on Sales - developers are looking for databases to help them analyze unstructured data.

Hello my dear friend Safra Catz!

#MeToo chatbot, built by AI academics, could lend a non-judgmental ear to sex harassment and assault victims

IlyaG.

Re: Uhh...

AI makes Google and FB's theft of personal information an insane occupation. What for? To structure the stolen information, real money must be spent - billions and billions! And knowing that the resulting array of structured information is flawed? Knowing that only part of the personal information is captured?

Personal information acquires real market value only if it is gathered on a personal device and captures all personal information, rather than a fragment of it. Then it can be properly structured, synonymous clusters can be cross-referenced, cause-and-effect relationships can be built, and advertisers can know precisely whom they want.

This is so obvious it doesn't even need proof.

Death to Google! Death to FB!

IlyaG.

Re: Hmmm

In principle, the technology for earning money with AI technology differs little from the technology for creating it.

A) A text and all its annotations are structured:

1. AI-parsing obtains all patterns from all the text's clauses, and weights are calculated.

2. Dictionary definitions (subtexts) are used to remove wrong (lexical noise) patterns.

3. Synonymous clusters are constructed and correlated (taking into account timestamps, if any).

B) After that, a search query is filtered through a personal profile, enriched with dozens, hundreds or thousands of patterns, and a search is performed within the database of structured texts (this one included).

C) Money is collected.
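
Steps A and B above, as an end-to-end toy (the money-collecting step C is left out; the clause splitting, weighting, and profile are all illustrative stand-ins of my own):

```python
from collections import Counter

def structure(text):
    """Step A, minimally: split a text into clause-level patterns and weight
    each by its relative frequency."""
    clauses = [c.strip(" .!") for c in text.replace(",", ".").split(".")
               if c.strip()]
    counts = Counter(clauses)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}  # pattern -> weight

def search(query, db, profile=()):
    """Step B, minimally: expand the query with profile patterns, then return
    the documents whose structured patterns intersect the expanded set."""
    terms = {query, *profile}
    return [doc for doc, pats in db.items() if terms & pats.keys()]

db = {"doc1": structure("Alice trains cheerfully, she runs briskly.")}
print(search("Alice trains cheerfully", db))  # the matching document
```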

IlyaG.

Re: Hmmm

...Tay was designed to learn via unsolicited anonymous input...

1. A human is arranged so that all the information in him is interdependent; there are causal relationships in it. That is, synonymous clusters are interlinked with regard to their timestamps.

2. I highly doubt that in making Tay a structured dictionary was used at all, for removing lexical noise and indexing patterns. I don't know what was created... but clearly not AI!

3. Without a proper BIOS - which I call a Lexical Clone, and which is synonymous clusters + timestamps - AI cannot be taught: it does not know the true causality of what it is taught and is unable to build correlations by itself.

IlyaG.

Re: @IlyaG.

Alas, I'm that poor Yorick!

IlyaG.

Re: Hmmm

The mechanism for creating a Lexical Clone - and Tay was one - is extremely complex. Firstly, it is necessary to build cause-and-effect relationships, as relations between synonymous clusters, taking into account their timestamps (blockchain technology). Secondly, a mechanism is needed to remove whatever does not fall under these cause-and-effect relationships. Thirdly, a mechanism is needed for selecting new cause-and-effect relationships (i.e. new synonymous clusters) which are not yet in the AI database.


Biting the hand that feeds IT © 1998–2019