Aggregating negative decimal values produces incorrect results.
table:

| charge_item_name (string) | charge_type (string) | charge_amount (decimal(16,2)) |
| --- | --- | --- |
| WHT | tax | -300.00 |
| WHT | tax | -300.00 |
| WHT | tax | -300.00 |
| WHT | tax | -240.00 |
| WHT | tax | -300.00 |
| WHT | tax | -300.00 |
| WHT | tax | -300.00 |
| WHT | tax | -300.00 |
| WHT | tax | -300.00 |
| WHT | tax | -300.00 |
| WHT | tax | -300.00 |
| WHT | tax | -200.00 |
| WHT | tax | -300.00 |
| WHT | tax | -300.00 |
Spark SQL query (column names aligned with the table above):

```sql
SELECT charge_type, charge_item_name, count(*) AS `count`, sum(charge_amount) AS `sum`
FROM `transaction_items`
WHERE charge_item_name = 'WHT'
GROUP BY charge_item_name, charge_type
```
result (incorrect):

| charge_type | charge_item_name | count | sum |
| --- | --- | --- | --- |
| tax | WHT | 14 | 1,202.88 |
After disabling the optimized hash aggregation by setting:

```
snappydata.sql.hashAggregateSize=-1
snappydata.sql.useOptimizedHashAggregateForSingleKey=false
```

the query produces the correct values.
result (correct):

| charge_type | charge_item_name | count | sum |
| --- | --- | --- | --- |
| tax | WHT | 14 | -4040.00 |
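As a sanity check, the corrected aggregate can be reproduced outside the engine by summing the 14 listed amounts with exact decimal arithmetic (a minimal Python sketch, independent of SnappyData):

```python
from decimal import Decimal

# charge_amount values copied from the 14-row table above:
# twelve rows of -300.00, one of -240.00, one of -200.00
amounts = (
    [Decimal("-300.00")] * 3 + [Decimal("-240.00")]
    + [Decimal("-300.00")] * 7 + [Decimal("-200.00")]
    + [Decimal("-300.00")] * 2
)

print(len(amounts))   # 14
print(sum(amounts))   # -4040.00
```

This matches the `count` and `sum` returned once the optimized hash aggregate is disabled, confirming that -4040.00 is the expected result.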