Hi guys,
Here's my first question to the forum, so please be gentle.
We're running a Glovia (ERP) system, written in the PROIV superlayer on a Unix/Oracle platform.
We have a suite of 4 functions which we need to loop continually, i.e.:
Function 1 is called from an external unix script; it then sets @LFUNCT to function 2 and terminates.
Function 2 performs its tasks, then sets @LFUNCT to function 3 and terminates.
Function 3 performs its tasks, then sets @LFUNCT to function 4 and terminates.
Function 4 performs its tasks, then sets @LFUNCT back to function 2 and terminates.
We noticed slow response as the month progressed, and our system admin guys said that the memory used by this process was ever increasing.
Does this mean that once a function terminates, its memory allocation is not made available until the actual process is terminated?
If so, is there any known method to free up the memory without terminating the looping functions?
Any advice would be appreciated
Thanks
Nick

@LFUNCT & memory usage
Started by NickPartridge, Mar 09 2006 02:59 PM
#4
Posted 09 March 2006 - 05:45 PM
You have a number of options.
1. Report the fault to your software supplier (Glovia I assume). On the understanding that you have a maintenance contract, they may fix it (I very much doubt it) or upgrade you to the latest version of the kernel.
2. Amend the functions so that they process all the records (for their particular feature) before moving on to the next function. Therefore each function would only execute once.
3. Amend the primary function to include all the functionality of the other 3, thus creating a super function. The other 3 functions would then be redundant.
Memory leaks are in some cases caused when file handles are closed and the memory is not released. I've seen this once before, and it actually turned out to be a fault with the underlying database, not PROIV, so you cannot rule out there being an issue with Oracle itself. I would also check this.
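Before doing any of that, it may be worth confirming exactly which process is growing. Here is a minimal sketch (assuming a Unix ps that supports -o; the exact flags vary by platform, and the PID is just whatever your external script recorded when it started function 1) that logs the footprint of the looping process every few minutes; repeat it against the Oracle shadow process PID for comparison:

#!/bin/sh
# Sketch: log the resident (RSS) and virtual (VSZ) size of one process
# every 5 minutes until it exits, so the growth can be plotted over a day.
# The PID is passed as the first argument - an assumption for illustration.
PID=$1
LOG=/tmp/lfunct_mem.log

while kill -0 "$PID" 2>/dev/null; do
    echo "`date` `ps -o pid= -o rss= -o vsz= -p $PID`" >> "$LOG"
    sleep 300
done

If the PROIV kernel process stays flat and the shadow process grows, that points at the database side; if the kernel process itself grows, it's PROIV or something linked into it.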
Things should be made as simple as possible, but not simpler
#5
Posted 10 March 2006 - 12:00 AM
Is this external script being called from a cron, and if so, is it called multiple times (i.e. once a day)?
If so, and an 'OFF' is never performed, you could have multiple processes running at the same time.
If you know the duration of the runtime of the suite of functions, you could use a crontab to run, say, hourly (if they take an hour): the external script calls function 1..2..3..4 and then does an @LFUNCT='OFF' to terminate the process.
The next hour would kick off function 1 again and repeat the process. I'm not sure if this would need to be function 2, but you could have a 2-line crontab to accommodate this, i.e.:
0 0 * * * <script> Function1
0 1-23 * * * <script> Function2
I'm thinking the 'memory' being used might just come down to the process itself never being released: since the process never dies, it keeps running and running and running. If you let the process exit, the OS can maybe get a breather and do some internal housekeeping.
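If you do go the hourly-cron route, a simple lock file in the calling script would also stop runs from stacking up if an 'OFF' is ever missed and the previous suite is still running. This is only a sketch; the command that actually starts the PROIV suite is whatever your existing external script does, shown here as a placeholder:

#!/bin/sh
# Sketch of the hourly cron wrapper: skip this run if the previous one
# (which should end when the last function sets @LFUNCT='OFF') is still active.
LOCK=/tmp/lfunct_suite.lock
LOG=/tmp/lfunct_suite.log

if [ -f "$LOCK" ] && kill -0 "`cat $LOCK`" 2>/dev/null; then
    echo "`date` previous run still active, skipping" >> "$LOG"
    exit 0
fi

echo $$ > "$LOCK"
# Placeholder: however your site currently starts the suite, with
# Function1 or Function2 passed through from the crontab entry.
/path/to/existing_script "$1"
rm -f "$LOCK"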
#7
Posted 13 March 2006 - 11:08 PM
Nick
Do you have anything else compiled into the kernel which is being called via a LINK command?
The only other time I've seen this was when we were calling a 3rd party API. After a couple of thousand calls the kernel size would increase dramatically and the whole process would just core dump. Turned out they had some serious memory leaks in their APIs.
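If you want to see where the growth is before it gets to the core dump stage, one rough way (on Linux or Solaris, where pmap is available; the output format differs between them, and the PID and file paths below are just placeholders) is to snapshot the kernel process's address space twice and compare. Growth concentrated in the heap/anon segments is consistent with something allocating and never freeing, whether that's in the kernel itself or in a LINK'd API:

#!/bin/sh
# Sketch: take two pmap snapshots an hour apart and diff them to see
# which parts of the process's address space are growing.
PID=$1
pmap -x "$PID" > /tmp/pmap_before.txt
sleep 3600
pmap -x "$PID" > /tmp/pmap_after.txt
diff /tmp/pmap_before.txt /tmp/pmap_after.txt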
Cheers
Mike