The difference between the LLM that Microsoft is using and a traditional search engine is that a search engine finds pages and leaves you to judge how relevant each result is to your query. The LLM obscures that somewhat by boiling everything down into a plausible description - unless it provides a valid list of references (and ChatGPT has been known to make them up), you can't tell what sources it used to provide that information, or whether its prediction method has validly arrived at the answer.
An example - in a 6502 forum, one member asked ChatGPT what the best method was for breadboarding a 1MHz clock oscillator for a 6502.
ChatGPT suggested using a 555 - now, that's a classic timer that has been used for decades as an astable oscillator, but 1MHz is pushing it to its limit (and possibly past it). So ChatGPT had linked the concept of an oscillator to the 555, which in many cases would be a reasonable association.
It then went on to suggest passive component values, and what to connect to each pin of the 555. This uncovered two more problems. First, the suggested components gave a frequency nowhere near 1MHz. Second, while some pin connections were correct, others were completely wrong. It's as if it had learned a pattern for how to describe component values, presumably from a range of online examples, and either the training data was wrong or it simply didn't have strong enough network connections to associate the correct components with the desired frequency. As for the pin descriptions, it had presumably learned how to describe connections in general, but the prediction mechanism couldn't make sense of the specifics. The whole thing, however, was framed as a plausibly written set of instructions, and if you followed them (rather than looking at the readily available data sheet, easily found with any search engine, that contains the actual circuit), you'd be scratching your head trying to work out why it didn't work.
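The "nowhere near 1MHz" claim is the kind of thing anyone can verify in seconds, because the 555 datasheet gives a simple astable formula: f ≈ 1.44 / ((R1 + 2·R2) · C). A quick sketch of that check - note the component values below are hypothetical examples chosen to illustrate the point, not the values ChatGPT actually suggested:

```python
# Sanity-check a 555 astable design against the datasheet formula
# f ≈ 1.44 / ((R1 + 2*R2) * C).
# All component values here are hypothetical illustrations, not
# the ones ChatGPT proposed in the forum thread.

def astable_frequency(r1_ohms: float, r2_ohms: float, c_farads: float) -> float:
    """Approximate output frequency of a 555 wired in astable mode."""
    return 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)

# A combination that does land near 1 MHz: R1 = 1k, R2 = 1k, C = 480 pF
print(astable_frequency(1_000, 1_000, 480e-12))   # ~1.0 MHz

# A plausible-looking combination that is nowhere near 1 MHz:
# R1 = 10k, R2 = 47k, C = 10 nF gives roughly 1.4 kHz
print(astable_frequency(10_000, 47_000, 10e-9))
```

Plugging any suggested values into that one-line formula before reaching for the breadboard would have exposed the error immediately - which is rather the point: the check takes less effort than reading ChatGPT's instructions.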
If you didn't know it was bollocks, you would probably think it was a reasonable answer. And when you do know a subject and can see the answer is bollocks, you start to worry about how poor the answers might be on subjects you don't know much about.