@Alan Sharkey: As with all things IT - it depends, and as you hang around here you know that already! Workload, quality of drivers, etc etc and of course the inevitable "Act of $DEITY".
I'm late to the party: although I've been impressed with other people's use, they have been bloody expensive, and I totally agree with TP's assessment of those who stick low-grade flash into a production server with an incompatible workload and wonder why it dies.
I removed one of the two 1TB Tosh spinning rusts from my laptop and popped in a Crucial_CT512MX100SSD1. Creative use of a sysrescuecd, cp -a, gparted and a horrific fstab got me back up and running.
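For the record, the clone itself is only a handful of commands once you're booted off the rescue CD. A sketch along these lines (mount paths here are hypothetical, not my exact commands - partitioning and formatting of the SSD already done in gparted):

```shell
#!/bin/sh
# Sketch of the sysrescuecd migration; adjust paths to taste.
SRC=/mnt/rust   # old spinning-rust root, mounted read-only
DST=/mnt/ssd    # freshly formatted SSD root

plan() {
    # cp -a preserves ownership, permissions, timestamps and symlinks
    echo "cp -a ${SRC}/. ${DST}/"
    # then rewrite the UUIDs in the SSD's /etc/fstab with the new values
    echo "blkid"
}
plan
```

The function just prints the plan rather than running it - swap the echoes for the real commands once you've triple-checked which disk is which.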
So far the "erase count" is embarrassingly low after four months. This is on a laptop that runs a shitload of stuff (including MariaDB, PostgreSQL, Apache et al) and does source-based installs: something like 1GB of source gets compiled into the latest updates in a monthly session (mmm, Gentoo). I'll start moving stuff back onto the SSD and see what effect it has over time.
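For anyone wanting to check their own drive: the erase count lives in the SMART attributes (on Crucial/Micron kit it's usually attribute 173, Ave_Block-Erase_Count - names vary by vendor). The sample line below is made up; on a real box you'd feed in the output of smartctl -A against your actual device:

```shell
# Pull the wear attribute out of smartctl -A style output.
# Sample line is illustrative only; really you'd run: smartctl -A /dev/sda
sample='173 Ave_Block-Erase_Count 0x0032 099 099 000 Old_age Always - 15'
erases=$(echo "$sample" | awk '/Erase_Count/ {print $NF}')
echo "average block-erase count: $erases"
```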
That fstab in full with truncated UUIDs:
# <fs>      <mountpoint>      <type> <opts>            <dump> <pass>
UUID="f1"   /boot             ext2   relatime,discard  1 2
UUID="16"   /                 ext4   relatime,discard  0 1
UUID="0b"   none              swap   sw                0 0
UUID="1f"   /var/lib/libvirt  ext4   defaults,relatime 0 0
UUID="14"   /var/lib/docker   btrfs  defaults,relatime 0 0
UUID="50"   /var/log          ext4   defaults,relatime 0 0
UUID="f6"   /portage          ext4   defaults,relatime 0 0
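One caveat on those "discard" options: inline TRIM on every delete can stall some drives, and the usual alternative is a periodic fstrim from cron or a systemd timer. Either way the device has to advertise TRIM at all - DISC-GRAN comes out non-zero in "lsblk --discard" output. A wee helper to read such a line (the sample lines are made up, pipe in real lsblk output yourself):

```shell
# lsblk --discard columns: NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO.
# A non-zero DISC-GRAN means the device advertises TRIM support.
supports_trim() {
    [ "$(echo "$1" | awk '{print $3}')" != "0" ]
}
supports_trim "sda 0 512B 2G 0" && echo "sda advertises TRIM"
```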