Explorer78

Member Since 23 Sep 2004
Offline Last Active Aug 07 2009 01:50 PM

Topics I've Started

SEL-ONLY & DELETE

12 April 2006 - 08:04 AM

I have come across something that seems pretty odd to me.
The environment is: ProIV Kernel 4.6 on Unix, connecting to Oracle 9i.
The question is, has anyone else come across a similar situation?

The problem is in a Screen Function.
An LS has been defined with the primary file in Delete mode.
The primary file has 4 key fields.
Values in 3 of those fields are supplied/populated in the Default logic of this LS.

Then a SEL-ONLY is done on the 3rd key field.

E.g. the key fields are CLNO, ACT, CSEQ & ASSNO.
Values in CLNO, ACT & CSEQ are specified and a SEL-ONLY is done on CSEQ.

The problem is, I am getting a read on a different CSEQ value in the ARNE (After Read No Error) logic of the 2nd file in the LS,
after going through all the records for the specified CSEQ.

For a solution, I can replace the SEL-ONLY with SQL, but I would like to know if this is a known concern experienced by other people as well.
If yes, then I am looking at a system-wide scan for SEL-ONLY with the primary file in Delete mode, replacing each with SQL.
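For reference, the kind of SQL replacement I have in mind is just a keyed read with the populated key fields constrained explicitly (a sketch only; the table name MYFILE is made up, the columns are from my example above, and the :host variables stand in for the values supplied in the Default logic):

```sql
-- Hypothetical table MYFILE with key columns CLNO, ACT, CSEQ, ASSNO.
-- Constraining CSEQ in the WHERE clause guarantees the read set stops
-- at the specified CSEQ, instead of running on to a different value.
SELECT CLNO, ACT, CSEQ, ASSNO
  FROM MYFILE
 WHERE CLNO = :CLNO
   AND ACT  = :ACT
   AND CSEQ = :CSEQ
 ORDER BY CLNO, ACT, CSEQ, ASSNO
```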

LSCALL & Cursor - Performance

13 February 2006 - 03:45 AM

Hi All,

I intended to keep this short, but to clarify what's happening I had to write as much as I did. I appreciate the effort it will take to go through this.

I have recently been trying to tune a Month End batch processing Function, and testing came up with some eye-opening results for me.

Here is the setup.
Scenario 1:
--------------
The Function has 2 LSs, one nested in the other.
The main driving LS selects about 500,000 rows.
Within that LS it does some 7 other file reads/updates and validations, during which it DSELs almost 95% of the rows.
The nested LS does reads/updates on some 8 other files.
The result: some 2,000-odd rows changed and another 2,000 rows added, across two files.

Stats for executing this Function on two instances (under similar load) show very disproportionate results.
One run took about 55 minutes; the other took only 10.
These are run times from the live environment, where it is ensured that nothing else is running during Month End processing.
It baffles me why they would be so drastically different.

Scenario 2:
--------------
I broke the Function in Scenario 1 down into 3 LSs.
Obviously, DSELing 95% of 500,000 rows is a lot of wasted effort.
So I put down proper SQL to select only the relevant rows.

I inserted an LS to select the rows that are to be investigated/changed (about 12,000).
This LS has only 1 file with a single column, which gives me the primary key to drive the other 2 LSs.
In the After Read No Error of this file in LS01, I do an LSCALL to the other 2 LSs.

These two LSs (2 & 3) are the same as they were in Scenario 1, except with a WHERE clause (on the primary key) in the driving SQL.

I expected this to run and finish a lot faster than Scenario 1.
What I found is that this new setup was processing 1 primary key per second. At this rate, doing the same calculations/updates as in Scenario 1 would take 2.5 hours!!!

I did this in the test environment, which is obviously slower than live. But to get the stats right, I ran it when there was no other activity on the server.
To get a correct benchmark, I ran the Scenario 1 Function there as well. It finished in approximately 10 minutes, whereas Scenario 2 is obviously not going to speed things up as expected.
-------------------------

What this makes me think about is the way ProIV processes rows and the cost of the database operations behind the scenes.
It seems that:
reading through 500,000 rows (with an inefficient SQL) and doing (7 * 500,000) file reads plus (8 * 12,000) file reads/writes
IS FASTER THAN
reading through 12,000 rows and doing (7 * 12,000) + (8 * 12,000) file reads/writes.
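To put rough numbers on that comparison (a back-of-the-envelope sketch using the figures above; it counts logical file operations only, and deliberately ignores per-operation cost, which is presumably where the real difference lies):

```python
# Rough logical-operation counts from the two scenarios described above.
# These say nothing about per-operation cost (cursor opens, LSCALL
# overhead, etc.), which is presumably where the time actually goes.

# Scenario 1: drive through 500,000 rows with 7 reads/updates each,
# plus 8 reads/writes in the nested LS for the ~12,000 surviving rows.
scenario1 = 500_000 + 7 * 500_000 + 8 * 12_000

# Scenario 2: drive through 12,000 pre-selected keys, with the same
# per-key work (7 + 8 file operations) done via LSCALL.
scenario2 = 12_000 + 7 * 12_000 + 8 * 12_000

print(scenario1)  # 4096000
print(scenario2)  # 192000
```

So Scenario 2 does roughly 20 times fewer logical operations, yet runs far slower, which is why I suspect some fixed per-LSCALL cost (cursor open/close, perhaps) rather than the row processing itself.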

Anyone faced a similar situation before?
Why are the run times so far apart in live for the same Function doing the same job on a similar number of rows?
Why is Scenario 2 slower? Or have I done something wrong?
Or does anyone have ideas to help me speed things up?

Any suggestions/help is appreciated.
Regards