In the red corner: Malware-breeding AI. And in the blue corner: The AI trying to stop it

The magic AI wand has been waved over language translation, and voice and image recognition, and now: computer security. Antivirus makers want you to believe they are adding artificial intelligence to their products: software that has learned how to catch malware on a device. There are two potential problems with that. Either …


127.0.0.1

Your device is protected.


AI/ML

Annoying Idiocy / Millennial Leechers


Bus and Ostrich

If you want to see the bus that looks like an ostrich (of course you do) then it's about three quarters of the way down this page:

http://www.popsci.com/byzantine-science-deceiving-artificial-intelligence

The rest of the page is quite interesting too.


Scary

Sounds like the equivalent of loading a bacterium on a petri dish with increasing doses of antibiotic.


Re: Scary

This is actually very good research. The next generation of AI malware producers will be using source code snippets. This is a lot like fuzzing, where data is munged in order to produce a crash or a hang.
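For the fuzzing comparison, something like this crude mutation loop (a hypothetical sketch; target stands in for whatever parser or engine is being poked):

```python
import random

def mutate(data: bytes, flips: int = 8) -> bytes:
    """Randomly overwrite a few bytes -- the crudest form of mutation fuzzing."""
    buf = bytearray(data)
    for _ in range(flips):
        pos = random.randrange(len(buf))
        buf[pos] = random.randrange(256)
    return bytes(buf)

def fuzz(seed: bytes, target, rounds: int = 10_000):
    """Feed munged variants of a seed input to a target until one makes it fall over."""
    for i in range(rounds):
        candidate = mutate(seed)
        try:
            target(candidate)          # any parser/scanner under test
        except Exception as exc:       # a crash (or hang/timeout) is the interesting result
            print(f"round {i}: target failed on mutated input: {exc!r}")
            return candidate
    return None
```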


Re: Scary

Sounds like the equivalent of loading a bacterium on a petri dish with increasing doses of antibiotic.

More like loading mutating bacteria on a petri dish with increasing doses of "mutating antibiotics"; you get an arms race - kind of what's happening in the real world with antibiotic-resistant bacteria (cf. the Red Queen effect).

Anonymous Coward

Maybe

People posting selfie images and profile photos should consider adding ostrich (etc.) masking to make snooping tricky.


Re: Maybe

But then scammers would devise an AI that would compare sets of adjacent pixels in a picture to look for signatures of Ostrichization. Where will it ever end?


Re: Maybe

It's not supposed to. The next step would be to create a less obvious Ostrichization, then to detect it, then to make it less detectable, and so on, until either they can't Ostrich it any better or the masking drops below the noise floor, at which point the detector would fail on account of false positives.
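As a toy version of that arms race (entirely made up: the "detector" just compares adjacent pixels, as suggested above, and the Ostrichization is plain additive noise), the loop below keeps shrinking the perturbation until it sinks under the image's own noise floor:

```python
import numpy as np

def adjacent_pixel_score(img: np.ndarray) -> float:
    """Crude 'detector': mean absolute difference between horizontally adjacent pixels."""
    return float(np.mean(np.abs(np.diff(img, axis=1))))

rng = np.random.default_rng(0)
clean = rng.normal(0.5, 0.05, size=(64, 64))     # stand-in for a photo
baseline = adjacent_pixel_score(clean)

strength = 0.5
while strength > 1e-4:
    masked = clean + rng.normal(0.0, strength, size=clean.shape)   # the "Ostrichization"
    # Detector flags the image if it looks noticeably noisier than the baseline.
    detected = adjacent_pixel_score(masked) > baseline * 1.1
    if not detected:
        print(f"masking of strength {strength:.4f} slips under the noise floor")
        break
    strength *= 0.5                               # attacker makes the masking subtler
```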


Hmmmm...

I see Prince Robot IV got an office job then...


Re: Hmmmm...

And it looks like he got the crack in his screen fixed.


So it's Core War played with "real" virtual processors between machines

Imagine that.

And it's only taken 33 years for someone to try it.

Core War here

The joker, of course, is whether you have developed a system perfectly adapted to finding only the malware that the attacking ML system produces.

BTW there is also a Linux GCC optimizer that builds optimally efficient assembler instruction sequences for very frequently executed code. IIRC it was limited to 5 instructions, but recent versions can do sequences up to 7 instructions long (this is one of those combinatoric explosion problems)


Re: So it's Core War played with "real" virtual processors between machines

I believe you mean combinatorial explosion. For a while, I was thinking Traveling Salesman when you mentioned it, but perhaps Sudoku, Chess, and maybe Go are better examples. Basically, the complexity increases on an extreme scale (geometric or factorial, say) for each step up. Easy to see why we probably won't see an 8-instruction optimizer except for maybe RISC instruction sets.
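A quick back-of-the-envelope on why each extra instruction hurts so much (the number of candidate encodings per slot is made up purely for illustration):

```python
# Rough size of a brute-force search space for an n-instruction sequence,
# assuming each slot can hold one of k candidate instruction/operand encodings.
# k is hypothetical; the growth pattern is the point.
k = 1000
for n in range(5, 9):
    print(f"{n} instructions: about {k**n:.1e} candidate sequences")
```

Each extra instruction multiplies the search by another factor of k, which is why going from 7 to 8 is a different world from going from 5 to 6.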


Re: So it's Core War played with "real" virtual processors between machines

The joker, of course, is whether you have developed a system perfectly adapted to finding only the malware that the attacking ML system produces.

That's an excellent point, and one which you can be sure is not lost on the designers of this system (or of adversarial ML in general). I could imagine ways of getting around this, though. First of all, you would have to ensure that the malware detector does not "forget" earlier attempts at evasion. This could be done, for example, by continually bombarding it with all thus-far generated malware attacks. That's the easy part. Getting the malware generator to diversify wildly is likely to be much harder. It probably needs to be "seeded" with exploits from the real world, not to mention the designer's imagination in full black-hat mode.
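A sketch of that "never forget" retraining idea (hypothetical: the feature arrays and the scikit-learn classifier are just stand-ins for whatever the real pipeline uses; the point is only that every generation of evasive samples stays in the training pool):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

detector = SGDClassifier(loss="log_loss")
replay_X, replay_y = [], []                 # everything the generator has ever produced

def retrain(new_malware_feats, benign_feats):
    """Fold newly generated evasive samples into a growing replay set,
    so the detector is always re-fitted on *all* past evasion attempts."""
    replay_X.extend(new_malware_feats)
    replay_y.extend([1] * len(new_malware_feats))
    replay_X.extend(benign_feats)
    replay_y.extend([0] * len(benign_feats))
    detector.fit(np.array(replay_X), np.array(replay_y))   # full refit, not incremental
```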


Pattern matching is dumb, thus anomaly detection, with history and rollback.

You can't train a detection system on patterns it hasn't seen yet, but you can put traps in place, like trip-wires and honey pots, along with other anomaly detection, and use a rolling audit of seemingly OK previous behaviour both for alerts and to dynamically re-train detection systems, so that later, similar malware is quarantined before it can do much (or any) damage. Having OS-enforced application-level permissions would also help, including faking access, honey-pot style, to trick malware into revealing itself.
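One way to picture the trip-wire part (a minimal sketch, assuming a decoy file that no legitimate program should ever touch; a real system would hook the OS audit layer rather than poll):

```python
import hashlib, os, time

DECOY = "/tmp/passwords_backup.txt"          # hypothetical bait file

def plant_decoy():
    """Create a file that only looks interesting and record its untouched state."""
    with open(DECOY, "wb") as f:
        f.write(os.urandom(256))
    st = os.stat(DECOY)
    digest = hashlib.sha256(open(DECOY, "rb").read()).hexdigest()
    return st.st_mtime, st.st_atime, digest

def watch(baseline, interval=5):
    """Poll the decoy; any read, write or replacement trips the alarm."""
    mtime, atime, digest = baseline
    while True:
        st = os.stat(DECOY)
        changed = hashlib.sha256(open(DECOY, "rb").read()).hexdigest() != digest
        if st.st_mtime != mtime or st.st_atime != atime or changed:
            print("trip-wire fired: something touched the decoy -- raise an alert")
            return
        time.sleep(interval)
```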

If I were writing malware, I'd probably use randomly salted, compressed and encrypted launch/payload sections, including deceptive "buggy" code/data and resource access, to defeat easy binary-pattern and behaviour detection.


Re: Pattern matching is dumb, thus anomaly detection, with history and rollback.

If I were writing malware, I'd probably use randomly salted, compressed and encrypted launch/payload sections, including deceptive "buggy" code/data and resource access, to defeat easy binary-pattern and behaviour detection.

So perhaps the malware generator could discover and deploy this strategy (with a bit of nudging, perhaps), and the malware detector could then attempt to mitigate it.


Waste of time

Why go to all the trouble? Write the code any way you like. Social engineering works really well.

Want a longer....? Just type your account number here....

Oh, wait a moment... The social engineering requires actual intelligence.

Never mind....


More of the same old nonsense is not a viable option for future delivery of surreal derivatives

The aim of the game is to fudge the file, changing bytes here and there, in a way so that it hoodwinks an antivirus engine into thinking the harmful file is safe. The poisonous file slips through – like the ball carving a path through the brick wall in Breakout – and the bot gets a point.

In just exactly the same way that mainstream media presents both state and non state actor scripts for virtual realisation and program reaction to create a chaotic future for "experts" to stabilise?

Yes, it sure is, bubba. But that stated secret is always best kept safe and secure and away from and widely unknown by the masses, because of the very real live danger to elite executive systems administration that such knowledge delivers.

Now that it is out there in spaces and places which cannot be commanded or controlled by formerly convenient and/or conventional means and memes, is the Great Game changed with novel leading players with authorisations to either create new future projects and more magical systems and protect old legacy systems leaders or simply destroy perverse and corrupted old regimes if they/it chooses to remain disengaged and silent whilst peddling its arms to the ignorant slaves which be identified in this enlightening tale ........ Silent Weapons for Quiet Wars

Anonymous Coward

Sounds like Charles Stross was on the right track with the near-future he outlined in "Rule 34".


The only winning move is not to play.

Seriously, stop relying on A/V.

We need more sophisticated and accessible rights-dropping. We need applications to drop rights to disk access outside designated subdirectories.

Give me ultra-light jails where I've dropped rights to all sorts of things like disk areas, opening of listening ports etc.

Reduce the impact of a compromise and the incentive to compromise rapidly diminishes.
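Something along these lines, perhaps (a minimal sketch, assuming a Unix-like system and a parent process privileged enough to call chroot/setuid; a real jail would add namespaces, seccomp filters and socket restrictions on top):

```python
import os

def run_jailed(target, jail_dir, uid=65534, gid=65534):
    """Run target() confined to jail_dir as an unprivileged user.
    chroot limits filesystem visibility; dropping to an unprivileged
    uid/gid removes most other rights. Ultra-light version only."""
    pid = os.fork()
    if pid == 0:                      # child process becomes the jail
        os.chroot(jail_dir)           # no disk access outside jail_dir
        os.chdir("/")
        os.setgid(gid)                # drop group privileges first...
        os.setuid(uid)                # ...then user privileges (irreversibly)
        target()
        os._exit(0)
    os.waitpid(pid, 0)                # parent just waits for the jailed child
```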


Re: The only winning move is not to play.

You forget things like Return-Oriented Programming, where malware can simply use other programs (which ARE meant to access the places it needs) to do its dirty work FOR it.


This is must-read research by Endgame. I would wait to see the conclusion of this cat-and-mouse game in the context of antivirus. Since some malware doesn't work to a pattern, how would the AI respond to that?

