It's all about the big numbers
The address space is not wasted, it is just vast. At my previous place, I had a fixed IP: a /64 allocated out of the /32 (I think) that my provider got assigned, out of the, say, /16 assigned to Canada. This way, anyone in the world just needs to know about the /16 to route packets for me to "roughly Canada"; the (say) Toronto Internet Exchange just needs to know that, 16 bits further down, the prefix belongs to my old provider, etcetera. It keeps routing tables down to a manageable number of entries, and you can still have tons and tons of top-level ISPs.
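That nesting is easy to see with Python's stdlib `ipaddress` module. The prefixes below are made up (documentation ranges, not real allocations), purely to illustrate how each coarser prefix fully contains the more specific ones, so distant routers only need the coarse entry:

```python
import ipaddress

# Hypothetical prefixes, just to show the containment hierarchy:
country = ipaddress.ip_network("2001::/16")              # say, "roughly Canada"
provider = ipaddress.ip_network("2001:db8::/32")         # my old provider's block
home = ipaddress.ip_network("2001:db8:1234:5678::/64")   # my /64 at home

# Each prefix is a strict subnet of the one above it, so a router
# far away can aggregate everything behind the single /16 entry:
print(provider.subnet_of(country))  # True
print(home.subnet_of(provider))     # True
```

Longest-prefix matching at each hop does the rest: the closer a router is to me, the more specific the prefix it needs to know about.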
The /64 in my house gets subdivided as well: privacy IPv6 addresses, MAC-based IPv6 addresses, and then a couple of DHCP ranges that, for example, got routed between my regular network and a bunch of Docker Swarm networks on Raspberry Pis (don't ask ;-)). I wasn't using my whole address space, but I certainly had another subdivision going on.
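Carving ranges out of a /64 is the same trick one level down. A sketch with a made-up documentation prefix (not my actual layout), splitting it into /80 pools:

```python
import ipaddress

# Hypothetical home /64, split into a few /80 pools you might hand
# to different networks (regular LAN, container networks, etc.):
home = ipaddress.ip_network("2001:db8:1234:5678::/64")
pools = list(home.subnets(new_prefix=80))[:3]
for p in pools:
    print(p)
# 2001:db8:1234:5678::/80
# 2001:db8:1234:5678:1::/80
# 2001:db8:1234:5678:2::/80
```

Even a single /80 still holds 2**48 addresses, so "wasting" most of the /64 on structure costs nothing in practice.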
Having 340282366920938463463374607431768211456 addresses makes large-scale routing really efficient by purposely going sparse. It's a bit of a twist from IPv4, but it makes a ton of sense. 128 bits is so mindbogglingly big (picture all of space and a sign "you are here") that it enables these sorts of strategies while still being future-proof, even though the allocation strategies seem wasteful at first sight.
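The arithmetic behind that big number, for anyone who wants to check it:

```python
# 128 bits of address space:
total = 2 ** 128
print(total)  # 340282366920938463463374607431768211456

# Even handing every single site a full /64 (2**64 addresses),
# there are still 2**64 such sites to give out:
sites = 2 ** (128 - 64)
print(sites)  # 18446744073709551616
```

So the "wasteful" convention of a /64 per site still leaves roughly 18 quintillion sites, which is the whole point of going sparse.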
(Glad you asked. I'm on a rural PtP LTE connection now, behind five hundred layers of "Carrier-Grade NAT". Which is where we will all end up on IPv4, with no option to, say, run a webserver on your home router.)