Great responses from Anton, Navneeth, and Martin here. I'd like to add some practical nuance on when to use each approach, especially around performance.
Martin's point about the VIEW approach is spot-on — it's the architecturally correct solution for batch operations. But I think it's worth breaking down all three with a clear decision framework:
Why insert_recordset Doesn't Support Arithmetic:
`insert_recordset` supports only direct field-to-field mapping and constants; computed expressions are not allowed in the field list. This is by design: the statement is optimized as a pure set-based operation at the database level. `update_recordset` works differently, since its `setting` clause evaluates expressions in the update context.
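To make the limitation concrete, here is a minimal sketch (reusing the table and field names from the thread) of what compiles and what doesn't:

```x++
// Compiles: field-to-field mapping only
insert_recordset impBankAccCustVendTransSettle (SettleAmountCur)
    select SettleAmountCur from vendSettlement;

// Does NOT compile: insert_recordset rejects computed expressions
// insert_recordset impBankAccCustVendTransSettle (SettleAmountIncDisc)
//     select SettleAmountCur - UtilizedCashDisc from vendSettlement;

// Compiles: update_recordset evaluates expressions in its setting clause
update_recordset impBankAccCustVendTransSettle
    setting SettleAmountIncDisc = SettleAmountCur - UtilizedCashDisc;
```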
Three Approaches — Pick Based on Your Scenario:
1. POST-INSERT UPDATE (Simplest, Good for Most Cases)
Best when: moderate data volumes, code clarity matters, the view doesn't exist yet
```x++
insert_recordset impBankAccCustVendTransSettle
    (BankTransDataAreaId, BankTransRecId, Voucher, SettleAmountCur, UtilizedCashDisc)
    select crossCompany DataAreaId, RecId, Voucher, SettleAmountCur, UtilizedCashDisc
        from bankAccTrans
        // ... all your joins
        ;

// Calculate the derived field immediately after the insert
update_recordset impBankAccCustVendTransSettle
    setting SettleAmountIncDisc = SettleAmountCur - UtilizedCashDisc;
```
✓ Fast batch insert + update
✓ No temp tables or views
✓ Easy to debug and maintain
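One caveat for this approach: wrap the two statements in a transaction so other readers never see rows with the derived field still empty, and scope the update if the table isn't fully reloaded each run. A sketch reusing the thread's names:

```x++
ttsbegin;

insert_recordset impBankAccCustVendTransSettle
    (BankTransDataAreaId, BankTransRecId, Voucher, SettleAmountCur, UtilizedCashDisc)
    select crossCompany DataAreaId, RecId, Voucher, SettleAmountCur, UtilizedCashDisc
        from bankAccTrans;

// Without a where clause this recalculates every row in the table,
// which is fine for a staging table cleared before each load; otherwise
// add a where clause limiting it to the rows just inserted.
update_recordset impBankAccCustVendTransSettle
    setting SettleAmountIncDisc = SettleAmountCur - UtilizedCashDisc;

ttscommit;
```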
2. VIEW WITH COMPUTED COLUMN (Best Practice for Performance)
Best when: large data volumes, the calculation is reusable, you need true batch efficiency
Create a view with the computed column. (Shown conceptually below; in D365FO you define the view in the AOT and add a computed column rather than writing SQL directly.)

```x++
// Conceptual shape of the view definition
select DataAreaId, RecId, Voucher, SettleAmountCur, UtilizedCashDisc,
    (SettleAmountCur - UtilizedCashDisc) as SettleAmountIncDisc
from vendSettlement;
```
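For completeness: in D365FO metadata, a computed column is backed by a static server method on the view that returns a T-SQL fragment, referenced from the computed column's View Method property. A hedged sketch using `SysComputedColumn` (the view, data source, and method names here are illustrative, not from the thread):

```x++
// Static server method on the view VendSettlementView; the computed
// column SettleAmountIncDisc points at it via its View Method property.
private static server str compSettleAmountIncDisc()
{
    // Build the T-SQL expression "SettleAmountCur - UtilizedCashDisc"
    return SysComputedColumn::returnField(
               tableStr(VendSettlementView),
               identifierStr(VendSettlement),
               fieldStr(VendSettlement, SettleAmountCur))
        + ' - '
        + SysComputedColumn::returnField(
               tableStr(VendSettlementView),
               identifierStr(VendSettlement),
               fieldStr(VendSettlement, UtilizedCashDisc));
}
```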
Then use `insert_recordset` directly from the view:
```x++
insert_recordset impBankAccCustVendTransSettle
    (BankTransDataAreaId, BankTransRecId, Voucher, SettleAmountCur, UtilizedCashDisc, SettleAmountIncDisc)
    select crossCompany DataAreaId, RecId, Voucher, SettleAmountCur, UtilizedCashDisc, SettleAmountIncDisc
        from vendSettlementView; // View handles the arithmetic
```
✓ True batch operation (database-level optimization)
✓ Calculation is part of the data model (reusable)
✓ Best performance for large volumes
✗ Requires creating/maintaining a view
3. LOOP WITH INSERT (Row-by-Row — Use Only When Necessary)
Best when: complex conditional logic, you need record-level validation/transformations
```x++
while select crossCompany DataAreaId, RecId, Voucher from bankAccTrans
    // ... your joins
{
    impBankAccCustVendTransSettle.clear(); // reset the buffer each iteration
    impBankAccCustVendTransSettle.BankTransDataAreaId = bankAccTrans.DataAreaId;
    impBankAccCustVendTransSettle.BankTransRecId      = bankAccTrans.RecId;
    impBankAccCustVendTransSettle.Voucher             = bankAccTrans.Voucher;
    impBankAccCustVendTransSettle.SettleAmountCur     = vendSettlement.SettleAmountCur;
    impBankAccCustVendTransSettle.UtilizedCashDisc    = vendSettlement.UtilizedCashDisc;
    impBankAccCustVendTransSettle.SettleAmountIncDisc =
        vendSettlement.SettleAmountCur - vendSettlement.UtilizedCashDisc;
    impBankAccCustVendTransSettle.insert();
}
```
✓ Full flexibility for complex logic
✗ Slowest option — row-by-row processing
⚠ Use RecordInsertList if you go this route to improve efficiency
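If you do stay row-by-row, `RecordInsertList` buffers records and flushes them to SQL in sets instead of one round trip per `insert()` call. A sketch based on the loop above (the joins are elided the same way as in the thread):

```x++
// Buffer inserts for the staging table; rows are sent to the database
// in batches as the list fills up
RecordInsertList insertList = new RecordInsertList(
    tableNum(ImpBankAccCustVendTransSettle));

while select crossCompany DataAreaId, RecId, Voucher from bankAccTrans
    // ... your joins
{
    impBankAccCustVendTransSettle.clear();
    impBankAccCustVendTransSettle.BankTransDataAreaId = bankAccTrans.DataAreaId;
    impBankAccCustVendTransSettle.SettleAmountIncDisc =
        vendSettlement.SettleAmountCur - vendSettlement.UtilizedCashDisc;
    // ... remaining field assignments as in the loop above
    insertList.add(impBankAccCustVendTransSettle);
}

insertList.insertDatabase(); // flush any remaining buffered rows
```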
Quick Decision Tree:
- Large volume + reusable calculation? → Use VIEW (Martin's approach)
- Moderate volume + one-time need? → Use POST-INSERT UPDATE (simplest)
- Need complex logic per row? → Use LOOP (but minimize usage)
For your specific scenario with the vendSettlement joins, either approach 1 or 2 will work well. If this becomes a recurring load process, I'd lean toward the
VIEW approach for true batch efficiency.
Hope this helps! Feel free to reach out if you hit any issues with the implementation.