As far as I am aware there's no limit on concurrent reads, but there is a limit on concurrent writes: one writer at a time.
I use SQLite for APIs (with the Slim framework) and it's blazing fast when loaded into RAM. It saves buckets of cash on disk IO costs when used this way, even for much larger databases, since RAM is usually cheaper than disk IO (especially on Azure).
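One way to do the "load it into RAM" step (a sketch, not the exact setup used here; the table and data are placeholders) is SQLite's online backup API, which Python exposes as `Connection.backup`. It copies the on-disk database into an in-memory connection, after which every read hits RAM:

```python
import os
import sqlite3
import tempfile

# Build a small placeholder database on disk, standing in for the real API DB.
path = os.path.join(tempfile.mkdtemp(), "api.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
disk.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("lin",)])
disk.commit()

# Copy the entire database into an in-memory connection; reads now avoid disk IO.
mem = sqlite3.connect(":memory:")
disk.backup(mem)
disk.close()

rows = mem.execute("SELECT name FROM users ORDER BY id").fetchall()
print(rows)  # [('ada',), ('lin',)]
```

Alternatives with the same effect include putting the database file on a tmpfs mount, or simply letting the OS page cache hold it.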
Obviously if you have terabytes of data this isn't practical, but most people don't.
The biggest database I have loaded into RAM is 51GB, and the API it serves handles 200k requests a day with ease.
Using a combination of write queuing and read caching, the concurrent write limit generally isn't an issue though.
Nginx is very useful for caching, which helps prevent problems when the database is locked for writing.
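A minimal sketch of that nginx caching layer (paths, zone name, cache times, and the upstream port are all placeholder assumptions, not the actual config): cache successful API responses briefly, and fall back to stale cache entries if the backend errors out, e.g. while the database is locked for a write.

```nginx
# Paths, zone name, and upstream port are placeholders.
proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:10m
                 max_size=1g inactive=10m;

server {
    listen 80;

    location /api/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache api_cache;
        proxy_cache_valid 200 10s;   # serve cached responses for 10 seconds
        # If the backend errors or times out (e.g. the DB is locked for
        # writing), keep serving the last good cached response.
        proxy_cache_use_stale error timeout updating http_500 http_503;
    }
}
```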
An AMQP solution helps with setting up a jury-rigged write queuing system.
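The core of that pattern can be sketched in-process: all writes funnel through a queue with a single consumer, so SQLite's one-writer limit is never contended. In the setup described above, an AMQP broker such as RabbitMQ would sit where this `queue.Queue` does (the queue name, table, and messages are placeholders):

```python
import queue
import sqlite3
import threading

# In-memory database standing in for the real one; check_same_thread=False
# lets the writer thread use a connection created here.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE events (msg TEXT)")
writes: queue.Queue = queue.Queue()

def writer() -> None:
    # Sole consumer: the only code path that ever writes to the database.
    while True:
        msg = writes.get()
        if msg is None:  # sentinel: shut down
            break
        db.execute("INSERT INTO events (msg) VALUES (?)", (msg,))
        db.commit()

t = threading.Thread(target=writer)
t.start()

# API handlers just enqueue; they never contend for the write lock themselves.
for i in range(5):
    writes.put(f"event-{i}")
writes.put(None)
t.join()

count = db.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 5
```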
That said, I generally don't subscribe to the "more boxes is more scale" way of thinking when it comes to databases.
This particular setup is a hard sell for shops used to the Microsoft mentality that calls for more boxes (and therefore more licenses...hmm, part of the plan, MS?) to increase scale...but I see that as a waste of resources, since it's unlikely you are actually maxing a box out before you throw in another. What you really have is a bottleneck, something that can usually be resolved with more (or faster) NICs and faster storage.