Optimal way to concatenate/aggregate strings
Solution 1
The definition of optimal can vary, but here's how to concatenate strings from different rows using regular Transact-SQL, which should work fine in Azure.
;WITH Partitioned AS
(
SELECT
ID,
Name,
ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Name) AS NameNumber,
COUNT(*) OVER (PARTITION BY ID) AS NameCount
FROM dbo.SourceTable
),
Concatenated AS
(
SELECT
ID,
CAST(Name AS nvarchar(max)) AS FullName,
Name,
NameNumber,
NameCount
FROM Partitioned
WHERE NameNumber = 1
UNION ALL
SELECT
P.ID,
CAST(C.FullName + ', ' + P.Name AS nvarchar(max)),
P.Name,
P.NameNumber,
P.NameCount
FROM Partitioned AS P
INNER JOIN Concatenated AS C
ON P.ID = C.ID
AND P.NameNumber = C.NameNumber + 1
)
SELECT
ID,
FullName
FROM Concatenated
WHERE NameNumber = NameCount
EXPLANATION
The approach boils down to three steps:
1. Number the rows using ROW_NUMBER() OVER (PARTITION BY ...), grouping and ordering them as needed for the concatenation. The result is the Partitioned CTE. We keep a count of the rows in each partition to filter the results later.
2. Using the recursive CTE (Concatenated), iterate through the row numbers (the NameNumber column), adding Name values to the FullName column.
3. Filter out all results but the ones with the highest NameNumber.
Please keep in mind that in order to make this query predictable one has to define both the grouping (for example, in your scenario rows with the same ID are concatenated) and the sorting (I assumed that you simply sort the strings alphabetically before concatenation).
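One caveat with the recursive approach, also raised in the comments below: SQL Server caps recursion at 100 levels by default, so any ID with more than 100 rows will make the query fail. A sketch of the fix, assuming the hint is appended to the final SELECT of the query above (OPTION (MAXRECURSION n) is a standard T-SQL query hint; 0 removes the cap entirely):
SELECT
    ID,
    FullName
FROM Concatenated
WHERE NameNumber = NameCount
OPTION (MAXRECURSION 0)  -- accepts 0..32767; 0 means no recursion limit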
I've quickly tested the solution on SQL Server 2012 with the following data:
INSERT dbo.SourceTable (ID, Name)
VALUES
(1, 'Matt'),
(1, 'Rocks'),
(2, 'Stylus'),
(3, 'Foo'),
(3, 'Bar'),
(3, 'Baz')
The query result:
ID FullName
----------- ------------------------------
2 Stylus
3 Bar, Baz, Foo
1 Matt, Rocks
Solution 2
Are methods using FOR XML PATH like below really that slow? Itzik Ben-Gan writes that this method has good performance in his T-SQL Querying book (Mr. Ben-Gan is a trustworthy source, in my view).
create table #t (id int, name varchar(20))
insert into #t
values (1, 'Matt'), (1, 'Rocks'), (2, 'Stylus')
select id
,Names = stuff((select ', ' + name as [text()]
from #t xt
where xt.id = t.id
for xml path('')), 1, 2, '')
from #t t
group by id
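One caveat raised in the comments: the plain FOR XML PATH('') form entitizes special characters (& comes back as &amp;, and so on). A sketch of the commonly used workaround, adding TYPE to the subquery and reading the raw text back out with .value(); this is the same query as above with only the inner tail changed:
select id
      ,Names = stuff((select ', ' + name as [text()]
                      from #t xt
                      where xt.id = t.id
                      for xml path(''), type         -- TYPE keeps real xml
                     ).value('./text()[1]', 'varchar(max)')  -- raw, un-encoded text
                     , 1, 2, '')
from #t t
group by id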
Solution 3
STRING_AGG() in SQL Server 2017, Azure SQL, and PostgreSQL:
https://www.postgresql.org/docs/current/static/functions-aggregate.html
https://docs.microsoft.com/en-us/sql/t-sql/functions/string-agg-transact-sql
GROUP_CONCAT() in MySQL:
http://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_group-concat
(Thanks to @Brianjorden and @milanio for Azure update)
Example Code:
select Id
, STRING_AGG(Name, ', ') Names
from Demo
group by Id
SQL Fiddle: http://sqlfiddle.com/#!18/89251/1
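If the order of the aggregated values matters, STRING_AGG also supports an ordering clause (SQL Server 2017+ / Azure SQL). A sketch against the same Demo table as above:
select Id
     , STRING_AGG(Name, ', ') WITHIN GROUP (ORDER BY Name) Names  -- deterministic order
from Demo
group by Id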
Solution 4
Although @serge's answer is correct, I compared the time consumption of his approach against the XML PATH approach and found XML PATH to be much faster. I'll include the comparison code so you can check it yourself. This is @serge's way:
DECLARE @startTime datetime2;
DECLARE @endTime datetime2;
DECLARE @counter INT;
SET @counter = 1;
set nocount on;
declare @YourTable table (ID int, Name nvarchar(50))
WHILE @counter < 1000
BEGIN
insert into @YourTable VALUES (ROUND(@counter/10,0), CONVERT(NVARCHAR(50), @counter) + 'CC')
SET @counter = @counter + 1;
END
SET @startTime = GETDATE()
;WITH Partitioned AS
(
SELECT
ID,
Name,
ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Name) AS NameNumber,
COUNT(*) OVER (PARTITION BY ID) AS NameCount
FROM @YourTable
),
Concatenated AS
(
SELECT ID, CAST(Name AS nvarchar(max)) AS FullName, Name, NameNumber, NameCount FROM Partitioned WHERE NameNumber = 1
UNION ALL
SELECT
P.ID, CAST(C.FullName + ', ' + P.Name AS nvarchar(max)), P.Name, P.NameNumber, P.NameCount
FROM Partitioned AS P
INNER JOIN Concatenated AS C ON P.ID = C.ID AND P.NameNumber = C.NameNumber + 1
)
SELECT
ID,
FullName
FROM Concatenated
WHERE NameNumber = NameCount
SET @endTime = GETDATE();
SELECT DATEDIFF(millisecond,@startTime, @endTime)
-- Takes about 54 milliseconds
And this is the XML PATH way:
DECLARE @startTime datetime2;
DECLARE @endTime datetime2;
DECLARE @counter INT;
SET @counter = 1;
set nocount on;
declare @YourTable table (RowID int, HeaderValue int, ChildValue varchar(5))
WHILE @counter < 1000
BEGIN
insert into @YourTable VALUES (@counter, ROUND(@counter/10,0), CONVERT(NVARCHAR(50), @counter) + 'CC')
SET @counter = @counter + 1;
END
SET @startTime = GETDATE();
set nocount off
SELECT
t1.HeaderValue
,STUFF(
(SELECT
', ' + t2.ChildValue
FROM @YourTable t2
WHERE t1.HeaderValue=t2.HeaderValue
ORDER BY t2.ChildValue
FOR XML PATH(''), TYPE
).value('.','varchar(max)')
,1,2, ''
) AS ChildValues
FROM @YourTable t1
GROUP BY t1.HeaderValue
SET @endTime = GETDATE();
SELECT DATEDIFF(millisecond,@startTime, @endTime)
-- Takes about 4 milliseconds
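A caveat on the timing methodology above: GETDATE() only has roughly 3 ms precision, which is coarse when the measured difference is itself a few milliseconds. A sketch of a higher-resolution variant using SYSDATETIME(), which matches the datetime2 variables already declared in both scripts:
-- SYSDATETIME() returns datetime2(7), so sub-millisecond timing is possible
SET @startTime = SYSDATETIME();
-- ... query under test goes here ...
SET @endTime = SYSDATETIME();
SELECT DATEDIFF(microsecond, @startTime, @endTime) / 1000.0 AS ElapsedMs;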
Solution 5
Update: MS SQL Server 2017+, Azure SQL Database
You can use STRING_AGG. Usage is pretty simple for the OP's request:
SELECT id, STRING_AGG(name, ', ') AS names
FROM some_table
GROUP BY id
Well, my old non-answer got rightfully deleted (left intact below), but if anyone happens to land here in the future, there is good news: they have implemented STRING_AGG() in Azure SQL Database as well. That should provide the exact functionality originally requested in this post with native, built-in support. @hrobky mentioned this previously as a SQL Server 2016 feature at the time.
--- Old post: Not enough reputation here to reply to @hrobky directly, but STRING_AGG looks great; however, it is only available in SQL Server 2016 vNext currently. Hopefully it will follow to Azure SQL Database soon as well.
Comments
-
matt about 2 years
I'm finding a way to aggregate strings from different rows into a single row. I'm looking to do this in many different places, so having a function to facilitate this would be nice. I've tried solutions using COALESCE and FOR XML, but they just don't cut it for me. String aggregation would do something like this:
id | Name        Result:  id | Names
-- - ----                 -- - -----
1  | Matt                 1  | Matt, Rocks
1  | Rocks                2  | Stylus
2  | Stylus
I've taken a look at CLR-defined aggregate functions as a replacement for COALESCE and FOR XML, but apparently SQL Azure does not support CLR-defined stuff, which is a pain for me because I know being able to use it would solve a whole lot of problems for me. Is there any possible workaround, or similarly optimal method (which might not be as optimal as CLR, but hey, I'll take what I can get) that I can use to aggregate my stuff?
-
Mikael Eriksson over 11 years
In what way does for xml not work for you?
-
matt over 11 years
It does work, but I took a look at the execution plan, and each for xml shows a 25% usage in terms of query performance (a bulk of the query!)
-
Mikael Eriksson over 11 years
There are different ways of doing the for xml path query, some faster than others. It could depend on your data, but the ones using distinct are in my experience slower than the ones using group by. And if you are using .value('.', 'nvarchar(max)') to get the concatenated values, you should change that to .value('./text()[1]', 'nvarchar(max)').
-
Alexandre Leites over 11 years
Your accepted answer resembles my answer on stackoverflow.com/questions/11137075/… which I thought was faster than XML. Don't get fooled by the query cost; you need ample data to see which is faster. XML is faster, which happens to be @MikaelEriksson's answer on the same question. Opt for the XML approach.
-
JohnLBevan over 9 years
Please vote for a native solution for this here: connect.microsoft.com/SQLServer/feedback/details/1026336
-
devinbost about 7 years
There's a useful performance analysis here: sqlperformance.com/2014/08/t-sql-queries/… BTW, the XML approach blows up when your data has emojis or surrogate characters unless you jump through a number of hoops.
-
devinbost about 7 years
So, @MikaelEriksson, in my comment above, I hope I answered your question -- at least for myself personally.
-
-
milivojeviCH over 11 years
Don't forget to put an index on that id column once the size of the table becomes a problem.
-
QMaster about 10 years
I checked the time consumption of this approach against XML PATH and got about 4 milliseconds vs. about 54 milliseconds, so the XML PATH way is better, especially in large cases. I'll write the comparison code in a separate answer.
-
Romano Zumbé over 9 years
It is far better, since this approach only works for 100 values maximum.
-
Serge Belov over 9 years
@romano-zumbé Use MAXRECURSION to set the CTE limit to whatever you need.
-
Solomon Duskis over 7 years
Surprisingly, CTE was way slower for me. sqlperformance.com/2014/08/t-sql-queries/… compares a bunch of techniques, and seems to agree with my results.
-
Solomon Duskis over 7 years
And after reading how stuff/for xml path work (stackoverflow.com/a/31212160/1026), I'm confident that it's a good solution despite XML in its name :)
-
Ardalan Shahgholi over 7 years
This solution doesn't work for a table with more than 1 million records. Also, we have a limit on recursion depth.
-
milanio about 7 years
I've just tested it and it works like a charm in Azure SQL Database
-
milanio about 7 years
I've just tested it and now it works fine with Azure SQL Database.
-
Morgan Thrapp about 7 years
STRING_AGG got pushed back to 2017. It's not available in 2016.
-
user about 7 years
STRING_AGG() is stated to become available in SQL Server 2017, in any compatibility level. docs.microsoft.com/en-us/sql/t-sql/functions/…
-
GoldBishop about 7 years
@ArdalanShahgholi you may want to limit your range of data to be operated on. Otherwise, you will need to use an XML form for the concatenation. Recursive CTEs are layered evaluations, so a 1M record action would result in roughly 2M records before you do the final limitation.
-
GoldBishop about 7 years
@slackterman It depends on the number of records to be operated on. I think XML is deficient at low counts compared to CTE, but at upper volume counts it alleviates the recursion depth limitation and is easier to navigate, if done correctly and succinctly.
-
devinbost about 7 years
FOR XML PATH methods blow up if you have emojis or special / surrogate characters in your data!!!
-
Hrobky almost 7 years
Thank you, Aamir and Morgan Thrapp, for the SQL Server version change. Updated. (At the time of writing it was claimed to be supported in version 2016.)
-
nurettin over 6 years
mssql in azure now has string_agg
-
Magne over 6 years
Yes. STRING_AGG is not available in SQL Server 2016.
-
Tom about 6 years
+1, you QMaster (of the Dark Arts) you! I got an even more dramatic diff. (~3000 msec CTE vs. ~70 msec XML on SQL Server 2008 R2 on Windows Server 2008 R2 on Intel Xeon E5-2630 v4 @2.20 GHZ x2 w/ ~1 GB free). Only suggestions are: 1) Either use OP's or (preferably) generic terms for both versions, 2) Since OP's Q. is how to "concatenate/aggregate strings" and this is only needed for strings (vs. a numeric value), generic terms are too generic. Just use "GroupNumber" and "StringValue", 3) Declare and use a "Delimiter" variable and use "Len(Delimiter)" vs. "2".
-
Tom about 6 years
While I'm awestruck that you (or anyone) could come up with this solution, the XML version (first by "slachterman" and then much more readably by "QMaster" below) still (barring having SS 2017 and therefore String_Agg), IMHO, beats it hands down: 1) It's ~40X faster (for me SQL Server 2008 R2 on Windows Server 2008 R2 on Intel Xeon E5-2630 v4 @2.20 GHZ x2 w/ ~1 GB free w/ my sample), 2) (Perhaps more importantly) It avoids 2 extra copies of almost the same code (in the sub-Selects).
-
Hrobky almost 6 years
Please specify the SQL dialect, or the version since when it is supported.
-
Art Schmidt over 5 years
This works in SQL Server 2012. Note that a comma-separated list can be created with select @test += name + ', ' from names
-
Frédéric about 5 years
This code results in xml-encoded text (& switched to &amp;, and so on). A more correct for xml solution is provided here.
Reversed Engineer over 4 years
+1 for not expanding special characters to XML encoding (e.g. '&' doesn't get expanded to '&amp;' like in so many other inferior solutions)
-
Dan Oberlam over 4 years
This uses undefined behavior and is not safe. It is especially likely to give a strange/incorrect result if you have an ORDER BY in your query. You should use one of the listed alternatives.
-
TheEsnSiavashi over 4 years
Can you do this with JSON?
-
phili_b almost 4 years
I've tried it without a temporary table: it's fast for a simple query but very slow with a complex query. Conversely, if I put the result of my complex query into a temporary table, as in this answer, it's very fast.
-
Matthew Rodatus almost 4 years
This type of query was never defined behavior, and in SQL Server 2019 we found it to have the incorrect behavior more consistently than in prior versions. Don't use this approach.
-
phili_b over 2 years
When I wrote about "temporary table", I meant "intermediate table". I didn't test with a #temporary table.
-
Traderhut Games almost 2 years
I must say... SCORE! In my tests, this reduced the time my query took from 2.3 seconds to 25 ms (2600 rows returned from 80K or so source rows). Thanks! SQL Server 2016, Enterprise...