Re: How does this speed refactoring?
Hello, I’m one of the creators of Unison. It’s true that 512 bits (64 bytes) is a bit larger than what today’s CPUs typically use for pointers (8 times larger to be exact), but this alone is not going to contribute significantly to the memory footprint of the typical Unison program (user data is going to do that). We think this is a fair price to pay for the abilities we get from content-addressing code.
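To make the idea concrete, here's a tiny sketch of content-addressing in Python. Unison actually hashes a normalized syntax tree (so formatting and local variable names don't matter), not the raw source text as done here; this is just to show where a 512-bit identifier comes from.

```python
import hashlib

def content_hash(definition: str) -> str:
    # A 512-bit (64-byte) hash of the definition, analogous in size to
    # Unison's hashes. Real Unison hashes a normalized AST, not source text.
    return hashlib.sha512(definition.encode("utf-8")).hexdigest()

h = content_hash("increment n = n + 1")
print(len(bytes.fromhex(h)))  # 64 bytes = 512 bits
```

The same definition always produces the same hash, so the hash can serve as the definition's permanent identity.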
Regarding renaming... if you have e.g. a Java library where you’ve named something x, and lots of user code refers to it as x, then if you rename it to y and republish your library you’re going to break everyone’s code. Names are really important in traditional languages. But in Unison, the name is just metadata. You can rename a function from x to y, republish your code, and everyone else’s code still works! Because they weren’t referring to the name anyway. Their code was referencing the hash.
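You can model this with two maps: one from hash to definition (the actual code), and one from name to hash (the metadata layer). The identifiers and definitions below are made up for illustration; the point is that a rename only touches the metadata map, so references held by hash are unaffected.

```python
import hashlib

def content_hash(definition: str) -> str:
    return hashlib.sha512(definition.encode("utf-8")).hexdigest()

code = {}   # hash -> definition: the codebase itself
names = {}  # name -> hash: just metadata

defn = "x n = n + 1"  # hypothetical library function
h = content_hash(defn)
code[h] = defn
names["x"] = h

# User code resolves the name once and thereafter refers to the hash.
user_ref = names["x"]

# The library author renames x to y: only the metadata changes.
del names["x"]
names["y"] = h

# The user's reference still resolves, because it never depended on the name.
assert code[user_ref] == defn
```

In real Unison the dependency graph stores hashes directly in compiled code, so there isn't even a "resolve once" step at use sites; names exist purely for humans.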
Because of the hashes, Unison knows a lot more about the structure of your codebase than a typical IDE, so we can make refactoring a very controlled experience. The typical workflow in most languages is that you make a change and your codebase is broken until you finish propagating that change (manually) throughout. But a Unison codebase is never broken that way, even in the middle of a refactoring.