if we continue our research into advancing 'machine learning' towards real artificial intelligence (whatever that means) and on towards sentience (again, whatever that means)
Very little active ML research, and not much AI research, is aimed at "advancing ... towards sentience" (or what you're calling "real artificial intelligence", which I suppose we can gloss as what some people call "Strong AI" or "human-like AI"). At improving ML so it can take on more tasks normally delegated to humans, sure; but making a human-like machine intelligence has largely fallen out of favor in the research community. Where it persists, it seems to mostly be attempts to better understand human cognition by creating ever-more-complex machine analogs.
And "sentience" is probably not a useful term here. Etymologically and traditionally it simply means "feeling" or "capable of perceiving sensation", and as such it applies to a vast range of entities - arguably including any cybernetic system, so we're surrounded by sentient machines already. More narrowly and recently it's been used to mean "having a sense of self". That's trickier, because our models of self (philosophical, psychological, and neurological) are conflicting and unsatisfactory; but again it very likely applies to lots of more-complex organisms, and arguably to some machines as well, since they contain logical state information that represents the functioning of their material incarnations.
In some quarters, for a century or so, "sentient" has been used to mean something like "a human-like sense of self and capacity for cognition", but that's a strained usage at best, and seems to come largely from SF writers trying to sound impressive.
A better term is probably "sapience", which etymologically means "wisdom" and is used as a term of art in philosophy to distinguish thinking beings - Dasein in Heidegger's sense - from all other entities. Even with sapience, though, it's really not clear what we mean when we apply it to machines - and it's particularly unclear which of its attributes people like Musk are concerned with. Are they worried about machines that can desire? That can imagine? That can emote?
I think the starting point to any reasonable discussion must be an agreed-upon definition of the terms - especially 'intelligence'.
Good luck with that. European-derived philosophy - a relatively homogeneous school of thought, compared with the entire range of human intellectual endeavor - hasn't come to any consensus there, and computer science shows no signs of doing better. Since people are still arguing over the Turing Test (and generally completely failing to understand it in the first place), I wouldn't look to the tech disciplines to settle the matter either.