Yet more drivel from the Department of the Bleedin' Obvious. Like who'd have thunk it? Sound? Evolution could sure take a leaf out of these guys' book. Oh, wait...
Programmers trying to teach AI bots how to play video games may be missing one vital component in their models: sound. Noise is an important feature in video games. Music helps create certain moods. A string of dissonant sounds, for example, makes spooky soundtracks that are appropriate for zombie games. The sound effects of …
There are good reasons why evolution went with ears or similar organs on most higher life forms.
Light is perceived in a mostly linear, line-of-sight fashion, whereas sound goes round corners and through things, so being able to hear as well as see increases survivability. IIRC up to 60% of a cat's processing power is used in interpreting the world as a compound audiovisual map. That's why cats are curious and poke around any change in their environment.
I guess it depends on what the AI has to do. It was only yesterday I was playing Fallout 76 (yes, one of the ever-dwindling numbers it seems, but I like it) and the music that signifies an arse kicking (one way or another) was playing before any visual threat appeared. So I had enough of a heads-up to expect a visit from something that was not going to be friendly.
So if the AI bot just has to get from A to B, then mood music is not so important. But if it has to expect nasties and deal with them on a (semi-)random basis then the sound could give an inkling of warning. Though that's a bit of a fudge which could be game-specific. At least it should listen out for nasties stomping around like a pregnant concrete elephant whilst also muttering/groaning/etc...
Massive blinkers in software development and programming. OK, sometimes you need to focus. But I'd be rich if I got a penny for every response to a feature request being "but why would you want that/want to do that?"
Well, I got it once when asking if an AI had any ability to read the world around it. "Why would you want that?" was the reply. Guess what: 15 years later, we have personal assistants that can (rudimentarily) search wiki/Google for answers to questions... that's "external" data to them (even more so with calendar searching and editing, sound lookups etc).
Now they are realising some need camera or screen space vision. Because without context or ability to see what a user is talking about, they are left in the... dark.
How else would you know whether the BFG comes from left or right? Specific door and lift timings tell you "Ah, my deathmatch target is in that room, and he's got the BFG now". When you heard a plasma gun you knew the fun had begun.
The same goes for RTS games: Even the old C&C Generals obeyed the "play sound from left or right" rule so you knew which direction to scroll.
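That left/right cue boils down to plain stereo panning. A minimal sketch of the idea in Python (function names, the constant-power scheme, and `max_range` are my assumptions, not anything from C&C Generals or Doom):

```python
import math

def stereo_gains(source_x, listener_x, max_range=100.0):
    """Constant-power pan: map a sound source's horizontal offset from
    the listener onto left/right channel gains, so the listener (human
    or bot) can tell which way to scroll. Offset is clamped to [-1, 1]."""
    offset = max(-1.0, min(1.0, (source_x - listener_x) / max_range))
    # Map offset in [-1, 1] to an angle in [0, pi/2]:
    # fully left -> angle 0 (all left channel), centre -> pi/4 (equal power).
    angle = (offset + 1.0) * math.pi / 4.0
    return math.cos(angle), math.sin(angle)
```

A unit far off to the left gives roughly `(1.0, 0.0)`; dead centre gives equal gains of about `0.707` each, which is what keeps perceived loudness constant as the source sweeps across.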
I still remember the first time I heard the grunts of a Pink Gorilla demon in the dark corridor where it first makes its appearance in Doom (level 3 or 4). Listening through my Gravis Ultrasound card and loudspeakers, the noise made me nervous. So nervous that when I actually glimpsed the demon I panicked and shot it point blank with the rocket launcher, killing myself before I could get a good look at it. Scary!
Is it just me, or does the bit about "changing pitch depending on 'hotter' or 'colder'" really just mean "we gave it a distance-to-goal value, but via the audio channel instead of just directly feeding it the number"?
Doesn't it kind of defeat the point of making it work off sound if the sound itself changes directly in response to how well the AI is doing, instead of just the volume and location, or as direct feedback for an event?
Exactly - they could have given the cue in many different ways. That they happened to use what we perceive as sound is irrelevant - to the bot it's just another set of numbers. Feeding the bot the usual in-game sounds might make a difference, but that isn't proved by this experiment.
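To make the point concrete: if the "hotter/colder" pitch is an invertible function of distance, the bot can recover the raw number exactly, so the audio channel adds nothing. A sketch under my own assumed pitch range (the linear mapping and the 220–880 Hz bounds are hypothetical, not from the experiment):

```python
# Hypothetical "hotter/colder" cue: pitch rises linearly as the agent
# nears the goal. Because the mapping is invertible, the "sound" is
# informationally identical to feeding the bot the distance directly.

BASE_HZ = 220.0    # assumed pitch at maximum distance ("coldest")
TOP_HZ = 880.0     # assumed pitch at the goal ("hottest")
MAX_DIST = 50.0    # assumed maximum distance to the goal

def distance_to_pitch(dist):
    """Map a distance in [0, MAX_DIST] to a frequency in Hz."""
    t = max(0.0, min(1.0, dist / MAX_DIST))
    return TOP_HZ - t * (TOP_HZ - BASE_HZ)

def pitch_to_distance(hz):
    """Invert the mapping: the 'audio' observation yields the scalar back."""
    return (TOP_HZ - hz) / (TOP_HZ - BASE_HZ) * MAX_DIST
```

Round-tripping any distance through `distance_to_pitch` and back returns the original value, which is the commenter's point: to the bot it's just another set of numbers.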
Biting the hand that feeds IT © 1998–2019