Concat of values from Same table

Hi great to see the board is still going and Merry Xmas to all!

I’m guessing this will need to be done at Universe level, if it’s actually possible.

I have a transactional schema with a unique transaction ID. A transaction can contain individual items, e.g. a sale total of £1000 made up of 2 transaction items: £800 shoes and £200 socks.

So my data currently returns as follows…

Unique ID / Total Amount / Description
123 / £1000 / Shoes
123 / £1000 / Socks

Because there are 2 items, the row duplicates. This IS intended functionality, as we also want to be able to split out the individual items separately. However, in this case I want to show the £1000 total and have a single object that tells you what it is made up of, i.e. £1000.00 is made up of Shoes, Socks:

Unique ID / Total Amount / Description
123 / £1000 / Shoes, Socks*

*Even better would be to include the amounts i.e. £800 Shoes, £200 Socks

Is this possible to do? I think you would use GROUP BY in SQL. The Description is in its own table.

TIA

Think about a cross table, with ID in the rows, item description in the columns, and sum of amount as the measure.

What’s your backend database?

Thanks for the response, it’s a SQL Server DB.

A cross table will currently show the ID twice (row 1 and row 2), the total amount of £1000 twice, and the item description as Shoes on row 1 and Socks on row 2.

What I’m after is to show the item description concatenated on a single row: 1 ID, 1 total amount, and the description as Shoes, Socks on the same row.

If the construct of your cross table is correct…


You can do it either in WebI or in a custom/FHSQL query.

In WebI, you need a few variables, something along the lines of:

v_Field_Max_In_BreakField: =Max([Field Name]) In ([Break Field Name])

v_Field_Concat: =[Field Name] + ", " + Previous(Self; [Break Field Name])

v_Field_Concat_Max: =[v_Field_Concat] Where ([Field Name] = [v_Field_Max_In_BreakField])

Use [Break Field Name] and [v_Field_Concat_Max] in your data block.
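Not WebI syntax, but the running-concatenation idea behind those variables can be sketched in Python with made-up transaction rows (all names here are illustrative):

```python
# Sketch (not WebI syntax) of the running concatenation behind v_Field_Concat.
# Made-up rows: (break field = transaction ID, field = item description).
items = [(123, "Shoes"), (123, "Socks"), (456, "Hat")]

concat = {}  # break value -> concatenation accumulated so far
for break_val, field in items:
    prev = concat.get(break_val)          # like Previous(Self) within the break
    concat[break_val] = field if prev is None else prev + ", " + field

# Keeping the value from the last ("max") row per break gives the full list,
# which is what the v_Field_Concat_Max filter achieves in WebI.
print(concat)  # {123: 'Shoes, Socks', 456: 'Hat'}
```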

In SQL, it depends on which version.

SQL Server 2017+: you can use the STRING_AGG function:
select [Break Field Name], string_agg( [Field Name], ', ' ) as Field_Concat
from [Table Name]
group by [Break Field Name]
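If you want to try the aggregation idea without a SQL Server instance handy, SQLite’s GROUP_CONCAT does the same job as STRING_AGG; a small Python sketch with made-up table and column names:

```python
import sqlite3

# Made-up sketch of the STRING_AGG idea; SQLite's GROUP_CONCAT plays the
# same role since SQL Server isn't available in this snippet.
con = sqlite3.connect(":memory:")
con.execute("create table items (trans_id int, total int, descr text)")
con.executemany("insert into items values (?, ?, ?)",
                [(123, 1000, "Shoes"), (123, 1000, "Socks")])

rows = con.execute(
    "select trans_id, total, group_concat(descr, ', ') as descr_concat "
    "from items group by trans_id, total").fetchall()
print(rows)  # one row per transaction, e.g. [(123, 1000, 'Shoes, Socks')]
```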

Prior versions: you can use STUFF with FOR XML PATH:
select [Break Field Name],
  stuff( ( select cast( ', ' + [Field Name] as varchar(250) )
           from ( select distinct [Break Field Name], [Field Name]
                  from [Table Name] ) a
           where a.[Break Field Name] = t.[Break Field Name]
           for xml path('') ),
         1, 2, '' ) as Field_Concat
from [Table Name] t
group by [Break Field Name]

Thanks for this, but because there could be a varied number of different item types across the transactions, that would give me a varied number of columns to accommodate them. I really need to be able to include all the detail for each transaction in one cell.

Thank you for this. In this case, am I correct in thinking that the break field would be the unique transaction ID and the field name the item description?

Yes, that should work. You could also create a variable to concatenate your description and amount, if you want to display both together.
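As a sketch of that description-plus-amount variant (again with made-up names, and SQLite’s GROUP_CONCAT standing in for SQL Server’s STRING_AGG):

```python
import sqlite3

# Made-up sketch: build a "£amount description" string per item, then
# aggregate, giving e.g. "£800 Shoes, £200 Socks" per transaction.
con = sqlite3.connect(":memory:")
con.execute(
    "create table items (trans_id int, total int, amount int, descr text)")
con.executemany("insert into items values (?, ?, ?, ?)",
                [(123, 1000, 800, "Shoes"), (123, 1000, 200, "Socks")])

rows = con.execute(
    "select trans_id, total, "
    "group_concat('£' || amount || ' ' || descr, ', ') as detail "
    "from items group by trans_id, total").fetchall()
print(rows)
```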

This works great until I bring the Total Transaction Amount in, which splits them into 2 rows again. I’ve put a Max() around the Total Transaction Amount and I think that’s done it.

I need to do some more testing and consider whether it’s worth trying to achieve this at Universe level, as I feel it’s possibly safer that way. Unfortunately we are still running SQL Server 2012, but we are in the process of upgrading.

Thank you for your guidance, it’s been a massive help!

A few things to try:
Add the transaction amount to the break in the variables you created

Create a concatenated variable of your id and amount and use that as the break field

Create a variable =Max([trans amt]) In ([break field]) and use it in your table with the ID and the concatenated field.

Unless you use this a lot, it’s probably not something I would build at the universe level; one-off builds in the universe clutter it up. Maybe try free-hand SQL (FHSQL) instead? You’ll also need to test performance to see which is more efficient, report level vs query.