nvarchar(max) is not as bad as you might think in terms of performance. If the data is small (for example 'Y'/'N') then it is stored in row, not off-page.
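You can check where values actually live by looking at the table's allocation units. A sketch, with `dbo.MyTable` standing in for your table name:

```sql
-- Page counts per storage type: IN_ROW_DATA, ROW_OVERFLOW_DATA,
-- LOB_DATA. Small (max) values show up under IN_ROW_DATA.
SELECT au.type_desc, au.total_pages
FROM sys.partitions AS p
JOIN sys.allocation_units AS au
    ON au.container_id = p.partition_id
WHERE p.[object_id] = OBJECT_ID('dbo.MyTable');
```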
If only things were so simple... SQL Server does try to do some clever things with the (max) data type columns, but as soon as there are too many of them in a table definition, the optimiser seems to go "I give up, you made this mess, you deal with the consequences".
There are table options that can be used either to force large-value columns out of row ("large value types out of row") or to ask SQL Server to store large column values in-row where they fit ("text in row", which applies to the legacy text/ntext/image types). The "text in row" option is deprecated and set to be removed in a later version of SQL Server, so relying on it is very much not a good idea.
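Both options are set via `sp_tableoption`; a minimal sketch, with `dbo.MyTable` as a placeholder table name:

```sql
-- Push (max) columns off-page even for small values
-- (applies to varchar(max), nvarchar(max), varbinary(max), xml).
EXEC sp_tableoption 'dbo.MyTable', 'large value types out of row', 1;

-- Deprecated: keep legacy text/ntext/image values in-row when
-- they fit within the given byte limit (valid range 24 to 7000).
EXEC sp_tableoption 'dbo.MyTable', 'text in row', '1000';
```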
Testing these options is a little tiresome, because the storage method only changes when the column data is next updated. The update itself is a simple enough query, but it is not quick and it locks the entire table, just in case anyone tries this and wants to continue with other testing while the update is applied.
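The rewrite is usually done with a self-assignment, which forces each value to be re-stored under the current table option (names here are placeholders):

```sql
-- Re-store existing values under the newly set table option.
-- This rewrites every row and takes a table lock, so on a
-- large table expect it to run for a while and block others.
UPDATE dbo.MyTable SET BigColumn = BigColumn;
```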
From my observations and testing on Microsoft SQL Server 2016, as soon as a column is specified as nvarchar(max), query performance goes to hell. Changing the column datatype to something sensible like nvarchar(30) gives a considerable performance improvement.
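Shrinking the declared type is a plain ALTER, assuming no existing value exceeds the new length (table and column names are illustrative):

```sql
-- Fails if any existing value is longer than 30 characters;
-- check first with: SELECT MAX(LEN(SomeColumn)) FROM dbo.MyTable;
-- Restate NULL / NOT NULL to match the existing definition,
-- otherwise nullability can silently change.
ALTER TABLE dbo.MyTable
    ALTER COLUMN SomeColumn nvarchar(30) NOT NULL;
```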
In short, nvarchar(max) and friends are not evil as such, since there are good and genuine uses for them, but performance-wise it's like using variants all the time.