SQL Server and nvarchar(max)
Solution 1
If you're happy for J. Random Developer, 6 months down the line, to insert a work by Shakespeare into each column, then fine.
For me, a big part of data modelling is seriously thinking about what data I do want to allow in each column, and which data I wish to prohibit. I then apply appropriate CHECK
constraints to achieve those restrictions (as best SQL Server allows). Having a sensible length check available "for free" has always seemed like a bonus.
You're also not doing much "future proofing" - changing the length of a (n)varchar column to a larger value at a later date is, I believe, purely a meta-data operation. So I'd say size the columns appropriately for the data you're expecting to deal with today (and okay, for the next year or so). If you need to expand them later, it takes seconds to do.
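As a sketch of both points, here is a hypothetical Customers table (the table and constraint names are made up for illustration): a CHECK constraint tightens the rule beyond the declared length, and SQL Server still allows you to widen a variable-length column later even when a CHECK constraint references it.

```sql
CREATE TABLE dbo.Customers
(
    CustomerID  int IDENTITY PRIMARY KEY,
    PhoneNumber nvarchar(30) NULL,
    -- Tighten the rule beyond the declared length if you want:
    CONSTRAINT CK_Customers_PhoneNumber_Len
        CHECK (LEN(PhoneNumber) BETWEEN 7 AND 20)
);

-- Widening the column later is a quick change; growing the declared
-- length of an (n)varchar column does not rewrite the stored data:
ALTER TABLE dbo.Customers ALTER COLUMN PhoneNumber nvarchar(50) NULL;
```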
Solution 2
Let's hope you don't use the column for searching or have unique values...
Index keys cannot be over 900 bytes wide, so you can probably never create an index on an nvarchar(max) column. This is one downside, because it gives you:
- really bad search performance
- no unique constraints
It can be worked around with a computed column, but then why not just store what you need?
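A minimal sketch of that computed-column workaround, assuming a hypothetical Documents table with an nvarchar(max) Body column: the max column itself cannot be an index key, but a persisted prefix of it can.

```sql
CREATE TABLE dbo.Documents
(
    DocumentID int IDENTITY PRIMARY KEY,
    Body       nvarchar(max) NULL,
    -- 450 nvarchar characters = 900 bytes, the classic index key
    -- size limit; the computed column has a non-max type, so it
    -- is indexable even though Body is not:
    BodyPrefix AS CAST(LEFT(Body, 450) AS nvarchar(450)) PERSISTED
);

CREATE INDEX IX_Documents_BodyPrefix ON dbo.Documents (BodyPrefix);
```

Note that an index on the prefix only helps searches that can be answered by the first 450 characters; a unique constraint on the prefix is likewise only an approximation of uniqueness over the full value.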
Solution 3
Switching from the in-row types to the BLOB types is always a big decision. You have to internalize that the BLOB types (varchar(max), nvarchar(max) and varbinary(max)) are completely different types internally from the in-row types:
- They go into a different allocation unit, see Table and Index Organization
- They cannot be indexed
- They are subject to different storage options in sp_tableoption
- They use a completely different handling code path, see Performance comparison of varchar(max) vs. varchar(N)
- Online operations are not allowed on tables with BLOBs (see Exception at ALTER INDEX ... REBUILD ... WITH (ONLINE = ON)). I reckon that since your table already has BLOBs, this won't be an issue
So switching all columns to BLOB types might bring in a lot of side effects you have not considered: the inability to index the BLOB columns, the lack of online operations, general performance degradation due to the inherently slower BLOB code path, and so on. The most serious hurdle may be the fact that you won't be able to index the columns after making them BLOBs. If this is not a show-stopper, then you'll have to test and measure the performance impact.
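The storage-option point above can be illustrated with sp_tableoption (using the asker's SAMPLETABLE as the example): by default, max-type values up to roughly 8,000 bytes are stored in-row when there is space; this option pushes them into the separate LOB allocation unit instead.

```sql
-- Force varchar(max)/nvarchar(max)/varbinary(max) values out of the
-- data row, leaving only a 16-byte pointer in-row. This keeps the
-- row pages denser for scans that don't touch the BLOB columns.
EXEC sp_tableoption
    @TableNamePattern = N'dbo.SAMPLETABLE',
    @OptionName      = 'large value types out of row',
    @OptionValue     = 1;
```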
The data modeling concerns others have raised are in general valid, but I understand that in the real world the theory often works only in theory...
Solution 4
The answer is the same as the answer to "Why do I need to specify an int when I can store all numbers as strings?" - because it aids:
- speed efficiency
- storage efficiency
- communicating the author/architect's intent
- data integrity, since only data of the expected shape will fit
But it won't cause any obvious "problems" immediately because nvarchar(10) is a subset of nvarchar(max).
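The data-integrity point is easy to demonstrate with a throwaway table (the names here are hypothetical): the declared length acts as a free check that nvarchar(max) gives up.

```sql
CREATE TABLE dbo.Demo
(
    ShortCode    nvarchar(10)  NULL,
    AnythingGoes nvarchar(max) NULL
);

-- Fails with "String or binary data would be truncated":
INSERT INTO dbo.Demo (ShortCode)
VALUES (REPLICATE(N'x', 11));

-- Succeeds, however long the value is (the CAST to nvarchar(max)
-- stops REPLICATE from truncating its own result at 8,000 bytes):
INSERT INTO dbo.Demo (AnythingGoes)
VALUES (REPLICATE(CAST(N'x' AS nvarchar(max)), 5000));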
user455095
Updated on July 21, 2022
Comments
-
user455095 almost 2 years
We are currently looking at setting our string columns to nvarchar(max), rather than specifying a specific length, to prevent any problems where there could be not enough room in the database to store the string. I'm just wondering if this is a good thing, or could it cause any problems? If it is OK to do, then why specify a length like nvarchar(10) rather than nvarchar(max)? We also use varbinary(max) a lot, since we don't know how much binary data we will need, so I'm not sure how much of an effect this has either, given that our inserts are not as fast as I think they should be. This is an example table:
CREATE TABLE [dbo].[SAMPLETABLE] (
    [ID] [uniqueidentifier] NOT NULL,
    [FIELD1] [int] NOT NULL,
    [FIELD2] [nvarchar] (2000) NULL,
    [FIELD3] [nvarchar] (max) NULL,
    [FIELD4] [uniqueidentifier] NULL,
    [FIELD5] [int] NULL,
    [FIELD6] [nvarchar] (2000) NULL,
    [FIELD7] [varbinary] (max) NULL,
    [FIELD8] [varbinary] (max) NULL,
    [FIELD9] [varbinary] (max) NULL,
    [FIELD10] [uniqueidentifier] NULL,
    [FIELD11] [nvarchar] (2000) NULL,
    [FIELD12] [varbinary] (max) NULL,
    [FIELD13] [varbinary] (max) NULL,
    [FIELD14] [bit] NULL,
    [FIELD15] [uniqueidentifier] NULL,
    [FIELD16] [varbinary] (max) NULL,
    [FIELD17] [bit] NULL,
    [FIELD18] [tinyint] NULL,
    [FIELD19] [datetime] NULL,
    [FIELD20] [nvarchar] (2000) NULL,
    PRIMARY KEY CLUSTERED ([ID] ASC)
) ON [PRIMARY]
GO
Given a table design like that, would changing the nvarchar(2000) columns to nvarchar(max) make things any worse (or better)? Does SQL Server frown upon designs like this?
-
dburges about 13 years: Excellent points. Allowing any length for a column is such a poor idea. You will have data integrity issues for things that should be a particular length. An insert into the phone number column won't fail if someone tries to put in a note of 100 characters, even though no phone number is ever that long. And on and on. Plus nvarchar(max) cannot be indexed, which causes performance problems if you need to search on those columns.
-
KM. about 13 years@HLGEM, but my phone number is longer than normal, it is:
1';drop table users;drop table orders;drop table accounts;--
-
Lionet Chen over 6 years: Thanks for the "ONLINE" tip. Sometimes it is critical to add an index with the ONLINE option. Worth pointing out that this limitation only applies to "SQL Database prior to V12, and SQL Server prior to SQL Server 2012", according to the link.