* Posts by Il'Geller

82 posts • joined 11 Oct 2019


Twitter wants help with deepfakes, and Microsoft Azure will rent out new AI chips for its cloud users, and more

This post has been deleted by a moderator

Boffins harnessed the brain power of mice to build AI models that can't be fooled

Il'Geller

Yes, it is. For the last 75 years only n-gram parsing technology has been used, which has led to purely mechanical parsing of texts followed by a purely mechanical search for words (not information). Now it is possible to apply AI-parsing instead, which provides meaningful patterns and helps to find meaningful information.

"Meaningful" means the ability to represent a human's mental sphere externally, without invasive study of his brain mechanics. That is, there is a AI-distinction between medical aspects and the study of cognitive abilities. Mouse are not needed anymore, as well as any animals.

Il'Geller

1. Are you financed?

2. What for?

Il'Geller

Money, only money! If science does not make money, it's nothing: your pure science is a kind of Go or poker, suitable only for passing the time. Brain analysis is a pastime.

Il'Geller

What do the brain and neuroscience have to do with AI? AI is based on language understanding. Indeed, why waste time and effort understanding the cause when the consequences can easily be analyzed? Especially since a couple of thousand years of trying to understand how the brain works has not led to any success.

Google brings its secret health data stockpiling systems to the US

Il'Geller

SQL and Google

...and claiming it doesn’t need their consent either....

Google still practices the SQL approach: Google annotates and explains patient data from the outside, using only what Google has access to - for example, the patient's queries, or unrelated patterns (extracted from texts the patient found and that are known to Google).

And the information, the texts, that are not available to Google - for example, texts read in places inaccessible to Google - are simply not noticed. But in health-related cases the inability to get every detail is outright dangerous! This is not Internet search, where Google can play innocent and say, "Google did not find this and output only what it found."

AI technology, as I've told you a million times, clarifies data from within, using all the data on personal devices. That is, the patient annotates everything there, and ALL the data related to his health will be structured and used. (Not just the data Google saw.)

AI technology ensures that nothing on which human health depends will be lost! Google and SQL don't.

Is this paragraph from Trump or an AI bot? You decide, plus buy your own AI for $399

Il'Geller

"...GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text."

And? What for? To mimic Trump?

AI has been a purely commercial project from the very beginning! AI looks for groups of patterns that are both contextually and subtextually aimed at a practical purpose - for instance, commands for a driverless car, or information for a financial broker.

AI is trained using a very good indexed dictionary, about 25 MB in size.

Il'Geller

OpenAI has no idea that texts contain not only contexts but subtexts as well - that is, not only what is explicit and can be read, but also what is meant implicitly! For example, all of a text's words have dictionary definitions, and they are tied to other texts' synonymous clusters (taking their timestamps into account). These implicit definitions and connections are also part of the text, although they cannot be seen or read.

The technology for subtext recovery exists, and it is simple: the computer needs to select, for each word, its unique dictionary definition - the one that fits the context before and after the word, in harmony with the rest of the text's dictionary definitions. The same should be done with the links to synonymous clusters, finding those texts that are consonant with the given one.

There is no need to train data on terabytes of other data! A dictionary of a few megabytes is enough. This alone saves millions and billions of dollars; plus the AI becomes intelligent, starts to understand, and ceases to be a toy (mimicking Trump).
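To make that definition-selection step concrete, here is a minimal Python sketch in the spirit of the simplified Lesk algorithm; the toy two-sense dictionary, the word-overlap scoring, and the four-word window are my illustrative assumptions, not the patented method.

# Choose, for each word, the dictionary definition that best agrees with
# the surrounding context: a simplified Lesk-style overlap score.
# The two-sense dictionary below is a toy stand-in for a real lexicon.
DICTIONARY = {
    "bank": [
        "land alongside a river or lake",
        "institution that accepts deposits and lends money",
    ],
}

def pick_definition(word, context_words):
    """Return the sense of `word` whose definition overlaps the context most."""
    best, best_score = None, -1
    for definition in DICTIONARY.get(word, []):
        overlap = len(set(definition.split()) & set(context_words))
        if overlap > best_score:
            best, best_score = definition, overlap
    return best

text = "we walked along the bank of the river".split()
for i, word in enumerate(text):
    if word in DICTIONARY:
        window = text[max(0, i - 4):i] + text[i + 1:i + 5]  # 4 words each side
        print(word, "->", pick_definition(word, window))

Here "river" in the context selects the riverside sense of "bank", so the implicit definition becomes a retrievable part of the text.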

Microsoft's phrase of the week was 'tech intensity' and, no, we're not sure what it means either

Il'Geller

Microsoft may very soon lose its main business, because it will not withstand competition with AI. The slogan "tech intensity" is gaining unprecedented relevance for Microsoft: it must decide how to handle AI.

Microsoft manufactures and sells software products created by programmers, and the work of programmers is to translate texts (so-called specifications) into a structured format (that is, into programming code). Thus Microsoft produces and sells "translations."

AI is able to "translate" the same, but without the participation of people; "translating" texts in what I call "synonymous clusters". For example there is a paragraph:

-- Press the blue and white button. Then press blue again.

A (human) programmer must manually code it ("translate" the specification); AI structures it into several patterns:

- and press the blue button

- and press the white button

- then press the blue button again,

using AI-parsing and AI-indexing.

There are two patterns here that compose a synonymous cluster on the blue button:

- and press the blue button

- then press the blue button again.

AI "understands" the cluster, can easily find and execute it. Microsoft does the same using people, which is much more expensive.

Enjoy a tipple or five? You might need this AI system to tell you when it's time for a new liver

Il'Geller

Re: @ll'Geller - It's not pointless.

The idea of Artificial Intelligence:

1. A personal profile is created, based on structured texts; this is the NLP part, where you need NLP as such.

2. A search query, which may consist of one or two words, is expanded into complete and meaningful patterns; the technology is outlined in my US Patent 6,199,067 (PA Advisors v. Google).

3. The search pattern is then filtered through the profile and enriched with hundreds and thousands of explanatory patterns.

4. Data is searched.

5. The information found is used by AI. (For example, by Waymo and Uber driverless cars.)

Machine Learning technology helps to refine the queries by the addition of texts.

That's it, nothing more or less.
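For illustration only, a schematic Python sketch of steps 2-4; the expansion table, the profile, and the matching rules are invented stand-ins, not the patented implementation:

# Schematic sketch of steps 2-4: expand a short query into patterns,
# filter the expansion through a personal profile, then search.
# The expansion table and profile are invented for illustration.
EXPANSIONS = {
    "liver": ["liver disease symptoms", "liver function test", "liver-friendly diet"],
}
PROFILE = {"alcohol use", "liver function test", "fatigue"}  # patterns from step 1

def expand(query):
    """Step 2: grow a 1-2 word query into complete patterns."""
    return EXPANSIONS.get(query, [query])

def filter_through_profile(patterns, profile):
    """Step 3: keep the expanded patterns the profile makes relevant."""
    return [p for p in patterns if any(term in p for term in profile)] or patterns

def search(patterns, corpus):
    """Step 4: retrieve documents matching any surviving pattern."""
    return [doc for doc in corpus if any(p in doc for p in patterns)]

corpus = ["a liver function test can reveal early damage", "unrelated text"]
query_patterns = filter_through_profile(expand("liver"), PROFILE)
print(search(query_patterns, corpus))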

Il'Geller

Re: @ll'Geller - It's not pointless.

"...NLP may provide opportunities to detect cognitive impairment in ESLD..."

Machine Learning is not NLP! These are two different things: NLP just helps structure information, translating it into a computer-friendly, readable format - that's all.

AI uses structured information (texts) to search for other information, formulating search queries (expanding them into several hundreds and thousands of patterns). In turn, feedback-based Machine Learning refines these searches (by attracting new texts).

Finally, the AI acts, or does not, in accordance with the information found.

They, however, simply compare emails: "...where they compared the emails..."

Il'Geller

Machine Learning technology is designed to improve information search results, not to compare texts and find anomalies in them. Thus, attempting to use the technology this way is pointless.

What could go wrong? Redmond researchers release a blabbering bot trained on Reddit chats

Il'Geller

Re: "make sure you're alive when you die"

I tried to sell an "Eternal Life" product - the creation of Lexical Clones (personal AI databases) - 15 years ago. I was not able to attract any attention then, though.

Il'Geller

Re: So...

Quite expensive, because

1) you should annotate everything in Sesame Street with dictionary definitions, and

2) find other texts related to the Sesame Street texts, extract synonymous clusters from them, build blockchain relationships, and annotate Sesame Street with them.

Only then can you say that Sesame Street is understood and the computer becomes an AI.

Il'Geller

punctuation marks

Not that simple! Grammatical punctuation is responsible for the emotional orientation of texts. But these emotions, in turn, are transmitted by the texts' subtexts, i.e. by what is meant implicitly, while only the texts' contexts are explicit.

I found out that the subtexts are 1) dictionary definitions of the texts' words and 2) synonymous clusters from other texts related to the given one. That is, only by sifting the subtexts through the texts' contexts is it possible to find out what the punctuation marks should be.

NSA to Congress: Our spy programs don’t work, aren’t used, or have gone wrong – now can you permanently reauthorize them?

Il'Geller

Artificial Intelligence is a gift from above for the NSA! The introduction of AI leads to the creation of an AI database, where all information is easily controlled in a legally permitted manner, and there is no need to surveil its users! If information is malicious in nature, it can instantly be identified and removed.

This news article about the full public release of OpenAI's 'dangerous' GPT-2 model was part written by GPT-2

Il'Geller

It seems OpenAI has begun to find texts, sentences, and phrases by their meanings - that is, to index patterns' words by their dictionary definitions (rather than searching for contexts for the patterns) - using the introductory sentences as search queries. In addition, OpenAI has begun to substitute words into the found texts, choosing the most suitable ones for the introductory sentences based on their dictionary definitions/meanings.

Remember the Uber self-driving car that killed a woman crossing the street? The AI had no clue about jaywalkers

Il'Geller

...since there was no classification label for a person not using a proper crossing point...

Uber marks patterns with timestamps and creates scenarios that contain cause-and-effect relationships.

This post has been deleted by a moderator

Hey, corporate types. Microsoft would really love to pick your brains about Project Cortex

Il'Geller

Mathematics

There is mathematics that cannot be ignored, but Google, Bing and the rest of them ignore it. As a result, they are not searching for information but generalizing what their users do; that is, they give out the results for which people vote (so-called popularity, which is external to the information).

AI technology is mathematically verified: it works with language as a differential function, considers paragraphs as integrals, and really (from the inside, internally) finds paragraphs. AI is the only true search technology; it does not depend on the results that people get.

Il'Geller

It is almost impossible to annotate patterns and words outside our personal gadgets! There are not enough texts and there is no chronology; thus the data Microsoft can get without our permission and participation won't be trustworthy. Microsoft should let us do it ourselves, on our own computers, and process the results only with our permission.

Watch Waymo's totally driverless self-driving car cruise around, how the US military wants to use AI ethically, etc

Il'Geller

Timestamps and chronology (for synonymous clusters?)

I am not sure that Google (Waymo) uses timestamps and chronology for synonymous clusters... yet they could make driving much easier, because they allow a car's AI to anticipate possible difficulties, guided by what DeepMind calls "scenarios". That is, seeing an old woman on the road, the AI can recall all the scenarios from its memory, at once (uniqueness!) select one according to the information from its sensors, and make its decision - for example, to turn right and avoid the lady rather than slow down.

However, DeepMind does use scenarios, doesn't it?.. And Google has said something completely indigestible and inarticulate about its alleged "quantum" computer... So, Google presumably marks patterns (synonymous clusters?) with timestamps. It is impossible to work when you can only guess!

Il'Geller

Yes, it's possible.

It's AI: a personalized technology; it's you, with all your guts, in cyberspace. That is, over time the computer becomes more and more you, by remembering how and which synonymous patterns are associated with which timestamps in your life. And then the choice of advertising (what you see) is dictated by these timestamps: if there is a temporal correlation between your trips to the gym and your gluttony, then advertising for sports equipment gets priority and should come up on your computer's screen.

THIS is happening now as well, but today you are owned by the spying Google and FB. AI technology allows you to own yourself, as profiling becomes possible offline.
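A minimal Python sketch of the timestamp idea above, assuming an invented event log and a one-day correlation window:

# Toy sketch: if two kinds of events in a personal profile correlate
# in time, boost the related ad category. The event log and the
# one-day window are invented for illustration.
from datetime import date, timedelta

events = [
    ("gym", date(2019, 11, 1)), ("overeating", date(2019, 11, 1)),
    ("gym", date(2019, 11, 8)), ("overeating", date(2019, 11, 9)),
]

def correlated(log, kind_a, kind_b, window=timedelta(days=1)):
    """Count pairs of kind_a/kind_b events that fall within the window."""
    a_days = [d for k, d in log if k == kind_a]
    b_days = [d for k, d in log if k == kind_b]
    return sum(1 for a in a_days for b in b_days if abs(a - b) <= window)

# Two correlated pairs here, so sports-equipment ads would get priority.
if correlated(events, "gym", "overeating") >= 2:
    print("prioritize: sports equipment ads")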

This post has been deleted by a moderator

Il'Geller

4. Blockchain.

4. Establishing a time hierarchy between synonymous clusters, the so-called blockchain technology, to identify cause-and-effect relationships.

For example, for somebody who has already eaten a whole cake, it makes no sense to advertise another cake, but rather an exercise machine for rapid weight loss. This is a cause-and-effect relationship, a blockchain between synonymous clusters, which are aggregated together by meaning (into paragraphs).
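A minimal Python sketch of such a time hierarchy, assuming a hash-chained list stands in for the "blockchain" and the two clusters are invented:

# Sort synonymous clusters by timestamp and hash-chain each entry to
# its predecessor, so the cause (ate a cake) provably precedes the
# effect (searches for weight-loss equipment). Data and scheme invented.
import hashlib

clusters = [
    (2, "searches for weight-loss equipment"),
    (1, "ate a whole cake"),
]

prev_hash, chain = "0" * 64, []
for ts, cluster in sorted(clusters):  # chronological order
    h = hashlib.sha256((prev_hash + cluster).encode()).hexdigest()
    chain.append((ts, cluster, h))
    prev_hash = h

for ts, cluster, h in chain:
    print(ts, cluster, h[:8])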

Il'Geller

Re: The uniquenesses.

A unique action can be assigned to a group of patterns. For example, a group is received from Waymo's sensors (an old woman crosses the street), and this description is matched to the action "braking". So the whole problem boils down to the following (a sketch follows the list):

1. Descriptions of everything that happens (labels, annotations), in the form of texts, are needed.

2. Removal of lexical noise from the texts is a must; the noise is typically superfluous patterns that do not explain the central themes contained within the texts, so removing it improves the quality of the structured texts.

3. Creating synonymous clusters is needed as well.

5. Assigning synonymous clusters to certain mechanical actions.

The savings in this case come from the cost of programmers.
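Here is the promised sketch, in Python, of step 5; the clusters, the sensor description, and the actions are invented for illustration:

# Map synonymous clusters to mechanical actions, so that a sensor
# description triggers a command without hand-written program code.
ACTIONS = {
    frozenset({"a pedestrian crosses the street",
               "an old woman crosses the street"}): "brake",
    frozenset({"the road ahead is clear"}): "maintain speed",
}

def act(sensor_description):
    """Pick the action whose cluster contains the sensor description."""
    for cluster, action in ACTIONS.items():
        if sensor_description in cluster:
            return action
    return "slow down"  # safe default when nothing matches

print(act("an old woman crosses the street"))  # -> brake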

Il'Geller

The uniquenesses.

Google "quantum" computer, Deepmind's AlphaStar AI bot, Waymo are the direct result of AI-indexing and creating synonymous clusters.

1. "Extended", "prolonged" dictionary definitions on patterns' words are created. That is, most of words (in dictionary) have synonyms, and these definitions are chained one to another (as paragraphs), and thus chains (approximately 10 bytes per 2-5 words) are used, which are filtered through the surrounding contexts and subtexts. The idea is to create the true uniquenesses: after the formation of synonymous clusters these uniquenesses convey the true essence of what is said, and can be instantly found.

2. The uniquenesses are used to form synonymous clusters, which replace all programming-language commands.

3. Thus one does not need programmers and can immediately apply direct speech, as AI.

In NIST TREC I had to find one unique phrase in over 6 million texts, which I did. IBM Watson in Jeopardy! had to find one unique phrase in millions of texts, which was done.

How? By the uniquenesses.

Thought you were good at StarCraft? DeepMind's AI bot proves better than 99.8% of fleshy humans

Il'Geller

Re: Bit light on details

...it was as much a straight contest between human and AI players as you could hope for...

If the texts are structured into many synonymous clusters, they become a replacement for human-written programs. And then AI can really become a serious adversary, having them at hand. Especially if there is a set of texts, articles, comments, posts, etc., which contain additional information and can be used for further machine learning.

DeepMind and Google use the same technology for their driverless Waymo cars.

Il'Geller

Re: Textual retrieval.

I think everything is much simpler: first, the technology (used by DeepMind) does not belong to either Google or DeepMind. And secondly, this technology destroys Google's main business because, unlike what Google does now, it brings extraordinary accuracy in finding information, whereas Google today doesn't search but relies on espionage and theft. That is, Google is afraid of losing its own business by switching to the new technology at a time when the old one still brings it billions.

Do you see how the new technology works? Re-read the article. And Google continues to spy...

Il'Geller

So, you may suffer the same fate speaking with an AI.

1. It all depends on the bias on whose basis a particular AI is created. Since AI is texts, and texts always have some bias, an AI may well answer, "If you have to ask, I'm not going to tell you!"

For example, I tried to ask an AI created on the basis of Dostoevsky's books about Fyodor's participation in a revolutionary organization. Dostoevsky, very emotionally, refused to speak on the subject. So, you may suffer the same fate speaking with an AI.

2. AI answers questions; this is what it does, since it originated as a result of my involvement in NIST TREC QA. Your desire to learn how to play is a series of answers to your questions. So the answer is "Yes!" - AI can teach you to play.

Il'Geller

Searching for information

Searching for information - that's where there are "constantly changing variables that require the intuition of experience in order to not get blind by the sheer amount of data and cut to the right solution in as short a time as possible."

I have very good reason to suspect that DeepMind educates its computer using textual annotations; that is, using labels, DeepMind marks and comments on both successful and not-so-strong moves, finds what is required, and helps itself win. So far I have not once met a detailed description of DeepMind's technology, only the most general words and meaningful winks. For example, I feel that Google uses DeepMind as a cover, masking its developments in the field of AI, particularly text structuring and AI-indexing.

Il'Geller

Re: Textual retrieval.

Using text labels (to index its game strategies) - that is, marking and commenting on successful and not-so-strong moves - DeepMind must inevitably index not only the texts' whole patterns but also the words that make them up.

Indeed, time and accuracy are absolutely decisive in any game (not to mention a driverless car), and therefore the uniqueness of patterns, and how well they convey meanings, becomes an absolute imperative (I'd say, a must). Summarizing, I assume that DeepMind indexes by the unique dictionary definitions of the patterns' words, not by the whole patterns.

However, I found not a word on the actual technology DeepMind has... Thus I can only speculate.

Il'Geller

Textual retrieval.

"We've yet to see any research or evidence that the strategies learned from a domain like StarCraft can be applied in the real world, though."

- DeepMind machine learning technology uses a database, where DeepMind stores its strategies.

- Indeed, DeepMind strategies must be saved somewhere, mustn't they?

- But any database is a collection of data organized especially for rapid search and retrieval.

- Thus DeepMind strategies must inevitably be somehow indexed (in order for them to be found).

- So, how does DeepMind index its database?

- Google, the owner of DeepMind, indexes by textual patterns (for Waymo): "Those images with vehicles, pedestrians, cyclists, and signage have been carefully labeled, presenting a total of 12 million 3D labels and 1.2 million 2D labels"; where these "labels" are texts.

- Google also said: "Google Introduces Huge Universal Language Translation Model: 103 Languages Trained on Over 25 Billion Examples" - Google trains its data using texts.

- Therefore I can assume that DeepMind uses text-tagged strategies when playing its games, saving those strategies under textual labels.

- Then "the strategies learned from a domain like StarCraft can be applied in the real world" if they boil down to textual retrieval.

- Indeed, DeepMind is trying its hand at medical (textual) search.

Sticks and stones may break your bones but robot taunts will hurt you – in games at least

Il'Geller

Which means that if the robot really wants to offend seriously, it must have access to its victim's profile and know his patterns (also annotated with dictionary definitions). That is, having its standard set of insults, the robot must compare these insults with groups of patterns from the victim's profile based on a compatibility score, see the cause-and-effect relationships, select the most appropriate insult, and try it.

If the compatibility is low, the robot should search somewhere, find a fresh insult, and apply it - which is called "Machine Learning".
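A toy Python sketch of this compatibility-score idea; the profile, the insults, the threshold, and the word-overlap scoring are all my illustrative assumptions:

# Score each stock insult against the victim's profile patterns by
# word overlap; if even the best score is low, fall back to a "fresh"
# insult, standing in for the Machine Learning step. All data invented.
PROFILE = {"proud", "beard", "grooming", "chess"}

def compatibility(insult, profile):
    """Fraction of the insult's words that appear in the profile."""
    words = set(insult.lower().split())
    return len(words & profile) / len(words)

stock_insults = ["a disgrace to beard grooming", "you type slowly"]
fresh_insults = ["even a pawn plays chess better than you"]

best = max(stock_insults, key=lambda i: compatibility(i, PROFILE))
if compatibility(best, PROFILE) < 0.3:  # compatibility too low: go learn
    best = fresh_insults[0]
print(best)  # -> "a disgrace to beard grooming" (score 0.4)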

Il'Geller

For example, to bring out the emotions of Bernard Shaw, Plato or Dostoevsky and talk to them, I needed to annotate patterns from their books with a few layers of dictionary definitions. Otherwise I could not get connected: lexical noise went off the scale, and their emotionally and thematically accurate answers were lost in random noise. You can see what I'm talking about in Google Translate, which mixes nonsense with excellent translations: my patented methodology is employed only partially.

Emotions are hidden in the use of subtexts!

Il'Geller

Based on my experience, I highly recommend making "extended" dictionary definitions; namely, you should add other definitions to the given one using synonymous relationships. I advise you also to add definitions for all the words in the given definition. In this case, the contexts and subtexts of the given paragraph (and its surrounding paragraphs) should be used as filters, anchors to highlight the "correct" dictionary-definition tree. If you don't... the results can be damned unsatisfactory. If you do the above, you will get a real Artificial Intelligence, which understands you, thinks, and talks.

These definitions, added via synonyms, I call "layers". In my experience the optimum is at least two layers; four or five are very good.
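A toy Python sketch of two-layer annotation, assuming a three-entry mini-dictionary in place of a real lexicon:

import json

# Recursively annotate a word with its definition, then with definitions
# of the definition's words, down to the requested number of layers.
DICTIONARY = {
    "button": "a device pressed to operate a machine",
    "device": "a thing made for a particular purpose",
    "machine": "an apparatus using mechanical power",
}

def layers(word, depth):
    """Return the word's definition tree, `depth` layers deep."""
    definition = DICTIONARY.get(word)
    if definition is None or depth == 0:
        return {}
    sublayers = {w: layers(w, depth - 1)
                 for w in definition.split() if w in DICTIONARY}
    return {word: {"definition": definition, "sublayers": sublayers}}

print(json.dumps(layers("button", 2), indent=2))  # two layers, the minimum I advise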

Il'Geller

excessive accuracy

In fact, the "catching" of emotions is very simple, if an individual AI database exists. Emotions are primarily conveyed as subtexts of words and patterns, as structured dictionary definitions and paragraphs of somehow related (to the one under the consideration) contextually-and-subtextually texts. Such "chains". aggregates of patterns allow the capturing and computer understanding of emotions with excessive accuracy.

Il'Geller

..."Emotion is very powerful, and we're at the early days of knowing how to use it in design of real systems, including robots."...

Honestly, distilling subjective emotions is quite simple: you just need to remove all lexical noise and leave only the meaningful sets of patterns that convey these emotions. People do this by understanding the single correct dictionary definition of each word (for each pattern), and AI (a computer) can do the same by indexing by dictionary definitions. That is, in calling on you to train your data with a dictionary, I urge you to create uniquenesses, which covertly convey the emotions.

Google claims web search will be 10% better for English speakers – with the help of AI

This post has been deleted by a moderator

Il'Geller

Google search results will be disproportionately better soon.

I'm sorry; let me explain what the above has to do with yesterday's changes in Google's algorithm. The fact is that Google

- should either change its business model and allow us to own our profiles (creating them ourselves, on our own devices),

- and allow the owners of information (ads, websites, documents, posts, etc.) to profile their property themselves,

- or profile everything itself, using AI-indexing technology and Google's quantum computer.

If Google decides to allow us to have everything, Google instantly goes out of business - because why should we pay it?

Therefore Google has bet on its quantum computer and AI-indexing, because this is the only option left. And since AI-indexing is used, Google's search results will soon be disproportionately better.

Il'Geller

Re: 10%?

“A computation that would take 10,000 years on a classical supercomputer took 200 seconds on our quantum computer,” says Brooks Foxen, a graduate student researcher who works with John Martinis of the University of California, Santa Barbara and Google.

How?

Annotating by dictionary definitions - this is my speculation! I do not know, only guess: Google can reduce a pattern's context to a very small unique address (very few bytes - 1 to 3). That is, each pattern (and its synonymous cluster) is assigned a unique address and can be found instantly.

Then Google may well get rid of lexical noise and instantly give a very small number of correct results - for example 10-20, instead of tens and hundreds of millions. Indeed, the uniqueness of the annotated patterns allows THIS to be done.
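A toy Python sketch of that addressing idea, assuming SHA-256 truncated to 3 bytes stands in for whatever Google actually does (collisions are ignored here):

import hashlib

# Derive a short fixed-size address from a pattern plus its chosen
# dictionary definitions, then retrieve the pattern by address in O(1).
def address(pattern, definitions):
    payload = pattern + "|" + "|".join(definitions)
    return hashlib.sha256(payload.encode()).digest()[:3]

index = {}
pattern = "press the blue button"
defs = ["exert force on", "the colour of the clear sky", "a device pressed to operate"]
index[address(pattern, defs)] = pattern

print(index[address(pattern, defs)])  # instant lookup by the unique address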

Il'Geller

Re: 10%?

The search you want assumes that all patterns are indexed and can easily be found. Google has proven that with AI-indexing (by dictionary definitions) technology this is possible! See the results Google obtained with its "quantum" computer.

Such a search also becomes possible if search queries are expanded from one to three words into several hundreds and thousands of patterns. The same AI technology does this, thanks to annotation by dictionary definitions and the creation of synonymous clusters. Again, see the results Google obtained with its "quantum" computer.

Google: We've achieved quantum supremacy! IBM: Nope. And stop using that word, please

Il'Geller

Study philosophy!

There are two branches in philosophy. One branch ended with Hegel, and I am its only living representative. The second is Marx, Moore, Bertrand Russell and Wittgenstein.

The modern computer, SQL, and Google with Facebook came from Russell and Co.; this is the philosophy of constants, of 1 and 0.

The quantum computer and AI are descended from Hegel and me; they are based on becoming, on what lies between 1 and 0.

Put differently: Russell and Co. are arithmetic, geometry and algebra.

The quantum computer and AI are differential analysis and Lobachevskian geometry.

Il'Geller

IBM can do the same!

How does Google do THIS? Google works with patterns, each consisting of several (two to five) words. Next, Google makes them absolutely unique by annotating each of their words with a set of dictionary definitions that convey their unique meanings. This uniqueness allows Google to find the right patterns instantly, without going through millions of identical patterns in search of the right context: the context is literally "captured" in these unique dictionary definitions. So, "yes", Google can do the work of many thousands of years almost instantaneously.

Il'Geller

Re: AI indexing

"In RL, a software agent takes sequential actions aiming tocmaximize a reward function, or a negative cost function, that embodies the target problem. Successful training of an RL agent depends on balancing exploration of unknown territory with exploitation of existing knowledge." https://www.nature.com/articles/s41534-019-0141-3.pdf

Il'Geller

AI indexing

For some reason no one pays attention to the fact that Google said it applies artificial intelligence technology to what it calls quantum computing. Yes, AI-indexing actually allows it to accelerate billions and trillions of times.

The sound of silence is actually the sound of a malicious smart speaker app listening in on you

Il'Geller

I've been living out in the cold for 10 years, watching you all talk nonsense.

This post has been deleted by a moderator

Il'Geller

Any espionage in cyberspace should immediately become a criminal, punishable act! Indeed, Artificial Intelligence technology makes it possible to do without espionage and theft.
