Why does precision decrease when multiplying a SUM by another number?


Solution 1

Aggregating a numeric(18, 8) with SUM results in the datatype numeric(38, 8).

How the resulting datatype is calculated when multiplying numeric values is described here: Precision, Scale, and Length (Transact-SQL)

The datatype of your constant -1 is numeric(1, 0), so for the product (-1) * SUM(Qnty):

Precision is p1 + p2 + 1 = 40
Scale is s1 + s2 = 8

Max precision is 38, so the two overflow digits (40 - 38 = 2) are taken from the scale: 8 - 2 = 6, which leaves you with numeric(38, 6).

Read more about why it is numeric(38, 6) here: Multiplication and Division with Numerics
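
You can watch this happen with SQL_VARIANT_PROPERTY. A minimal sketch, assuming the #temp table from the question below has been created and populated:

    SELECT
        SQL_VARIANT_PROPERTY(SUM(Qnty), 'Precision')      AS SumPrecision, -- 38
        SQL_VARIANT_PROPERTY(SUM(Qnty), 'Scale')          AS SumScale,     -- 8
        SQL_VARIANT_PROPERTY(-1 * SUM(Qnty), 'Precision') AS MulPrecision, -- 38
        SQL_VARIANT_PROPERTY(-1 * SUM(Qnty), 'Scale')     AS MulScale      -- 6
    FROM #temp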

Solution 2

If you read SUM's reference page, you'll see that on a decimal(p, s) column it yields the type decimal(38, s). Multiplying that result then overflows the maximum precision, and the scale is cut to 6 as described above. You need to cast the result of the SUM back to NUMERIC(18,8) for it to work the way you want.

Executing SELECT CAST(SUM(Qnty) as numeric(18,8)) * 2.234 FROM #temp yields 0.00000013404 as you'd expect.
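
A runnable sketch of the fix, reusing the script from the question:

    CREATE TABLE #temp (Qnty numeric(18,8))

    INSERT INTO #temp (Qnty) VALUES (0.00000001)
    INSERT INTO #temp (Qnty) VALUES (0.00000002)
    INSERT INTO #temp (Qnty) VALUES (0.00000003)

    -- Casting the sum back to numeric(18,8) keeps the multiplication's
    -- precision at 18 + 4 + 1 = 23 <= 38, so no scale is sacrificed
    SELECT CAST(SUM(Qnty) AS numeric(18,8)) * 2.234 FROM #temp

    DROP TABLE #temp

Any cast that brings the summed operand's precision down far enough that p1 + p2 + 1 stays within 38 will avoid the scale truncation.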


Author: Victor Chekalin

Updated on September 15, 2022

Comments

  • Victor Chekalin over 1 year

    I have encountered the following bug (or feature) in SQL Server.

    When I use SUM(*column*), where the column has type numeric(18, 8), and multiply the sum by any other number (integer or decimal), the scale of the result drops to 6.

    Here is an example script to demonstrate:

    CREATE TABLE #temp (Qnty numeric(18,8))
    
    INSERT INTO #temp (Qnty) VALUES (0.00000001)
    INSERT INTO #temp (Qnty) VALUES (0.00000002)
    INSERT INTO #temp (Qnty) VALUES (0.00000003)
    
    -- multiplying the column directly keeps all 8 decimal places
    SELECT Qnty, 1*Qnty
    FROM #temp
    
    -- multiplying the SUM loses two decimal places
    SELECT (-1)*SUM(Qnty), SUM(Qnty), -SUM(Qnty), SUM(Qnty) * CAST(2.234 as numeric(18,8))
    FROM #temp
    
    DROP TABLE #temp
    
    

    The result of the second SELECT query:

    0.000000    0.00000006  -0.00000006 0.000000
    

    As you can see, when I multiply the SUM the result is 0.000000.

    Could anyone explain this strange behavior?

    UPD. I executed this query in SQL Server Management Studio against SQL Server 2000, 2005, and 2008.

  • Gabe over 11 years
    I read that whole thread and didn't find anything in there that would explain the observed problem.
  • Victor Chekalin over 11 years
    Thanks. But why is the max scale exactly 6? Why not 4 or 8?
  • Mikael Eriksson over 11 years
    @VictorChekalin According to the link I added: "we decided to preserve a minimum scale of 6 in both multiplication and division".
  • Ingo over 11 years
    then you haven't read the explanations from Mat Henaire: ..., the result should be a numeric(77, 20). However, this is above SQL Server's maximum precision, so the result would need to be truncated. An implementation decision was made to sacrifice some scale to gain more digits on the left side of the decimal point, up to a minimum of scale 6. So the numeric(77, 20) is actually converted to numeric(38, 6). The same behavior happens for division. That decision is based off the assumption that digits on the left side of the decimal point are usually more important than the ones on the right side. (A sketch reproducing this conversion appears after these comments.)
  • Ingo over 11 years
    which again is what Mikael Eriksson explains in his answer above
  • Gabe over 11 years
    Why didn't you just post that in the first place (with a link to the source)? Unfortunately it still doesn't apply to the OP because his numbers are all numeric(18,8) rather than numeric(38,10).
  • jezza101 about 7 years
    It does apply; the maximum is still exceeded when the operations occur.
  • David over 5 years
    Thank you. Found the following article useful (codeproject.com/Articles/769823/…)
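
To illustrate the numeric(77, 20) to numeric(38, 6) conversion Ingo describes, here is a minimal sketch (the variable names are just for illustration):

    DECLARE @a numeric(38,10) = 1.5
    DECLARE @b numeric(38,10) = 2.0

    -- the raw result type would be numeric(77, 20), which exceeds the
    -- maximum precision of 38, so the scale falls back to the minimum of 6
    SELECT
        SQL_VARIANT_PROPERTY(@a * @b, 'Precision') AS Prec,  -- 38
        SQL_VARIANT_PROPERTY(@a * @b, 'Scale')     AS Scale  -- 6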