Basically, client-side code execution is always a risk, because it provides a mechanism for a server you have no control over to run code on your system.
The problem is that without it, a lot of our interactions on the web would look like they did back in HTML 2.0 and earlier, where interaction was limited to static pages and form submissions, and any complicated task had to be rendered into a pixmap on the server side and sent to the browser to display.
My view is that Javascript provides too much control. AFAICT, as originally specified, it was supposed to be interpreted. An interpreter executes only its own machine code and treats the script as data, so it should be quite difficult to make the processor run a stream of machine instructions that the interpreter (or, these days, the JIT compiler) did not generate itself. And if you can coax it into generating specific vulnerable code, fixing the interpreter or JIT compiler to prevent this is much easier than fixing the processor (it is interesting that the BPF packet-filter JIT compiler in the Linux kernel could be manipulated into generating code to demonstrate Spectre!)
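To make the interpreter point concrete, here is a toy bytecode interpreter in C (the opcodes and the `run` function are invented for illustration, not any real engine): the "program" is plain data that a dispatch loop inspects, and nothing in it is ever jumped to as native machine code, so it can only ever do what the interpreter's own cases allow.

```c
#include <stddef.h>

/* Hypothetical opcodes for a tiny stack machine. */
enum { OP_PUSH, OP_ADD, OP_HALT };

/* The bytecode is data: each byte selects a case in this loop.
 * The processor only ever executes the interpreter's own compiled
 * code, never the attacker-supplied bytes themselves. */
int run(const unsigned char *code) {
    int stack[16];
    int sp = 0;
    for (size_t pc = 0;;) {
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = code[pc++]; break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        default:      return -1;  /* unknown byte: rejected, not executed */
        }
    }
}
```

A malformed or hostile byte sequence falls into the `default` case and is refused; there is no path by which the input bytes become instructions.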
Of course, injecting executable machine code directly into a machine, via buffer overruns, or smuggled inside images and other binary blobs handled by poorly written client-side code, would still be a vector for executing malicious code. And if an attacker has direct control to import and execute code through a user session on a system, then there is nothing much you can do to protect yourself from processor flaws. Running a non-x86 architecture would provide some mitigation, but only against vulnerabilities specific to x86 processors.
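As a minimal sketch of the buffer-overrun vector (the fixed-size field and the `copy_name` helper are hypothetical), the classic mistake is an unchecked copy of attacker-supplied input into a fixed buffer; a bounded copy cannot write past the destination:

```c
#include <stdio.h>

/* Hypothetical parser for an 8-character name field plus NUL.
 * An unchecked strcpy(dst, untrusted) here would write past dst
 * whenever the input is longer, overwriting adjacent memory:
 * the classic buffer-overrun code-injection vector.
 * snprintf() bounds the write and truncates instead. */
void copy_name(char dst[9], const char *untrusted) {
    snprintf(dst, 9, "%s", untrusted);
}
```

The fix is mechanical, which is the point: these are bugs in the client-side process, not in the processor, so they can be patched in software.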
It is at this point that having trusted executables, preventing you from running imported code, could be a help, but that would not work with anything that uses self-modifying or JIT-compiled code, or on a system that is used for development (if you can compile code on a system, it is extremely difficult to defend against processor flaws).
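A toy version of such a trusted-executable check (the FNV-1a hash and the allowlist are purely illustrative; real systems use cryptographic code signing): hash any imported blob and refuse to run it unless the hash is on a locally stored allowlist. This also shows exactly why the scheme breaks for JIT-compiled or self-modifying code: those bytes do not exist ahead of time to be put on the list.

```c
#include <stddef.h>
#include <stdint.h>

/* FNV-1a, a simple non-cryptographic hash, standing in for a real
 * code-signing digest in this sketch. */
uint64_t fnv1a(const unsigned char *p, size_t n) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < n; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Return 1 only if the blob's hash appears on the allowlist;
 * anything unknown (or modified after listing) is refused. */
int is_trusted(const unsigned char *blob, size_t n,
               const uint64_t *allow, size_t m) {
    uint64_t h = fnv1a(blob, n);
    for (size_t i = 0; i < m; i++)
        if (allow[i] == h) return 1;
    return 0;  /* unknown code: do not execute */
}
```

A JIT or a compiler run by a developer produces fresh, unlisted bytes on every invocation, so either the check must be disabled for them, reopening the hole, or they cannot run at all.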