
Insufficient memory for file area
#1
Posted 25 March 2009 - 11:37 AM
A specific Function is being genned (under 4.6 / Oracle if relevant).
The gen continues for a long time, then the message above is given.
It appears that the gen is "successful" (i.e. the Gen records are created) but the Function does not execute properly.
I'm guessing that the message is a problem with the gen process, rather than a gen error in the Function. From the length of time, I would guess that the gen process is getting itself into a loop.
Any ideas or previous experience would be useful at this stage.
I'm going to request a support call, so will update if there is any official response.
#3
Posted 26 March 2009 - 10:19 AM
However there's not a lot of point in making random changes just to see if we can get it to gen, when there might be something specific we can focus on.
Given the complexity and size of some of our Functions we have a good deal of experience on workspace gen errors!
For example, we know from previous experience that you can get workspace errors with a single logic that exceeds around 800 lines of reasonably complex code. However much you simplify the rest of the Function, it will never gen unless you deal with that particular logic.
In this case we have been trying to simplify some of the tables, in case it is the total number of fields (which normally shows up in the Symbol Table Size).
No good news so far!
#5
Posted 27 March 2009 - 04:22 PM
ProIV Support response: "The message is produced when a call to malloc fails when trying to acquire memory from the OS for file control structures."
If malloc is failing, then PRO-IV is either leaking memory or trashing memory during the gen process. A simple ps will show if memory is leaking. Finding out if it is trashed probably can only be done with PRO-IV support help. Good luck.
--
Kevin English
#6
Posted 27 March 2009 - 05:52 PM

Any way you can compare with the most recent version of this function that was OK and see what the differences are?
#7
Posted 30 March 2009 - 11:41 AM
"A simple ps will show if memory is leaking."
-- Kevin English

Extracted from ps -l results, running every couple of seconds during the gen:
S   C  PRI  NI  ADDR      SZ      WCHAN     TIME
S   0  154  20  6a15e300  498     8351d868  0:00
R 247  239  20  6a15e300  6874    -         0:05
R 237  237  20  6a15e300  23618   -
R 248  240  20  6a15e300  49570   -
R 255  241  20  6a15e300  65954   -
R 239  237  20  6a15e300  88786   -
R 255  241  20  6a15e300  122438  -         1:08
R 255  241  20  6a15e300  134998  -         1:15
R 255  241  20  6a15e300  154310  -
R 239  237  20  6a15e300  163374  -
R 247  239  20  6a15e300  177538  -         1:39
R 246  239  20  6a15e300  191866  -
R 255  241  20  6a15e300  209170  -         1:56
R 255  241  20  6a15e300  219314  -         2:02
R 250  240  20  6a15e300  232750  -         2:09
R 255  241  20  6a15e300  241974  -         2:14
S 149  154  20  6a15e300  260042  8351d868  2:25
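(For anyone wanting to collect similar samples, a minimal sketch of the sort of loop that could be used, assuming the gen process's PID is known - the PID, interval and log file name below are placeholders, not anything from the actual setup:)

# Sample the gen process's memory usage every 2 seconds until it exits.
# 12345 is a placeholder PID; adjust the interval and log file as needed.
PID=12345
while kill -0 "$PID" 2>/dev/null; do
    ps -l -p "$PID" >> gen_memory.log
    sleep 2
done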
#8
Posted 30 March 2009 - 01:59 PM
I'm not sure what platform you're on, but I believe the memory occupied (SZ) is normally in pages, and on Linux a VM page is 4 KB, IIRC.
So, assuming that applies, your one PROIV process has consumed about 1 GB of memory, which may well be some configured limit. It does seem to me pretty likely that, as Kevin said, you've hit some bug that directly or indirectly causes runaway acquisition of memory.
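(The arithmetic, assuming SZ really is counted in 4 KB pages - the pagesize command reports the actual page size on the box:)

pagesize                           # prints the page size in bytes, e.g. 4096
echo $((260042 * 4096 / 1048576))  # last SZ sample in MB: 1015, i.e. roughly 1 GB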

#10
Posted 30 March 2009 - 05:15 PM
The Solaris box is 64-bit, but it is only running 32-bit ProIV.
My theory (for what it's worth) is that some internal pointer or value in the gen process temporarily hits a value that is fine with the larger limits allowed in a 64-bit version but not in a 32-bit one.
The only way to test that would be to get a 64-bit 4.6 kernel for Solaris and try that, but I don't think such a beast exists. If it did, and the resulting code ran on another 32-bit machine, then it would be proof that the issue is a limitation in the gen process.
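(One quick sanity check, for what it's worth: whether the kernel and the ProIV executable are 32-bit or 64-bit can be confirmed from the shell. The path below is a placeholder, not the actual install location.)

isainfo -kv              # shows whether the Solaris kernel is running 32-bit or 64-bit
file /opt/proiv/bin/pro  # reports whether the executable is a 32-bit or 64-bit ELF binary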
(Sorry for boring the rest of you with this! - thanks to Kevin & Richard for their inputs - they were very useful).
#11
Posted 31 March 2009 - 10:49 AM
Pro-IV ver 5.5, ORACLE 9i on Windows environment
We found that if we added fields to tables, some functions that didn't even reference the new fields could fail to gen due to size errors.
We regularly had problems with a function that had 16 LUs and 90+ files referenced - we were encountering gen and sometimes run-time errors related to function size.
One of the solutions we've used to reduce function size - where too many files/variables are present - is to move some file lookups into a global function, which is then called to retrieve the relevant fields for use in the main function. This has proven successful.
We've also removed almost all usage of the Sort/Select from LUs (we had fewer than 10 of these) - we now build a sort/select table within the DB. This has also reduced run-time errors/crashes in large functions.
Rgds
George
#12
Posted 01 April 2009 - 01:33 PM
What else could I have done to get the function to FGEN?
I was using Ver 5.5 and proisam
Andy
Edited by strider, 01 April 2009 - 01:34 PM.
#13
Posted 02 April 2009 - 03:41 PM
While genning on a 64-bit version seems to get around the limit, when you start hitting limits like this all a technical work-around will do is postpone the refactoring to a later point. You never know when that straw will be added to the camel's back. So a quick bit of refactoring has been done now, while we have some time, to consolidate some of the file accesses, and the thing now gens.
As for advice to others, all I would say is: if the function is looking very big, at some point something bad will happen!
The sad thing about this error is that it does not generate a gen error, so my automatic gen system did not pick it up (just be warned if you are genning lists of Functions).