
Timeout and incomplete PRO ISAM data update
#1
Posted 10 June 2004 - 09:34 AM
My customer is using a PROIV 5.5 application with PRO ISAM files and wants to use the timeout feature. However, if a user is creating a purchase order and is interrupted while entering an ordered item, the screen function is terminated and exits. This causes the detail file to be updated without its corresponding header record. How can I prevent the child process from updating the file, or force the parent process to complete its update, when a timeout occurs? The customer wants idle processes to be terminated so the licence seat becomes available to other users.
Thanks,
Surajit
#2
Posted 10 June 2004 - 10:04 AM
There may be other solutions, or perhaps something present in later versions of PROIV/PRO-ISAM, but this is the solution I used back in version 1.5 and it's still a valid work-around for your problem in PRO-ISAM.
Copy your P.O. Header and P.O. Detail files to make workfiles - POHW and PODW.
P.O. process
1) Call Update-1 to clear the P.O. Header workfile and P.O. Detail workfile of all records keyed on the current terminal.
2) P.O. processing screen - replace the current files with the workfiles, and write the P.O. to the workfiles with the terminal ID as the key.
3) Once the user confirms the P.O., execute Update-2 to copy the P.O. from the workfiles to the real P.O. files, assigning the key etc., then pass the P.O. number to Update-3 - a simple function that UMSGs the P.O. number of the transaction created.
This way the screen function only ever accesses workfiles, not true transactions - if the connection is broken, it's only workfile data that is out of sync, and the next time the P.O. process is executed the workfiles are cleaned beforehand by Update-1.
Additionally, this means that any keys/counters for the P.O. process are only assigned once the screen is confirmed, so if the connection is broken no keys/counters are lost or left unused.
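Roughly, the shape of it is this (sketched in Python purely as pseudo-code - the real thing is three PROIV updates around the screen, and all the file/field names here are invented for illustration):

POHW, PODW = {}, {}      # workfiles, keyed on terminal ID
POH, POD = {}, {}        # live P.O. files, keyed on P.O. number
next_po_no = 1000

def update_1(term):
    # 1) clear any half-finished P.O. this terminal left behind
    POHW.pop(term, None)
    PODW.pop(term, None)

def po_screen(term, header, lines):
    # 2) the screen writes only to the workfiles, keyed on terminal ID
    POHW[term] = header
    PODW[term] = lines

def update_2(term):
    # 3) only on confirmation is a real key assigned and the P.O.
    #    copied from the workfiles to the live files
    global next_po_no
    po_no, next_po_no = next_po_no, next_po_no + 1
    POH[po_no] = POHW.pop(term)
    POD[po_no] = PODW.pop(term)
    return po_no         # Update-3 just UMSGs this number to the user

update_1("T01")
po_screen("T01", {"supplier": "ACME"}, [("WIDGET", 5)])
print("P.O. created:", update_2("T01"))

If the connection dies mid-screen, only the workfile entries for that terminal are orphaned, and update_1 throws them away on the next run.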
There may be other historical postings dealing with PRO-ISAM commit processing - use the search.
hope this helps
Rgds
George
#4
Posted 10 June 2004 - 09:14 PM
Here's a simple solution.
Turn off timeout processing on the handful of functions that will be a problem.
Coming into the function:
#TIMEOUT = @TIMEOUT
@TIMEOUT = 0
Going out:
@TIMEOUT = #TIMEOUT
Long-term solution: switch to a SQL database and use transactional processing.
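For what it's worth, here's what transactional processing buys you, sketched with Python's built-in sqlite3 (table names are invented; any SQL database behaves the same way). Header and detail are written inside one transaction, so a timeout or dropped session before the commit rolls back both - no orphaned detail rows:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE po_header (po_no INTEGER PRIMARY KEY, supplier TEXT)")
con.execute("CREATE TABLE po_detail (po_no INTEGER, item TEXT, qty INTEGER)")

try:
    with con:  # begins a transaction; commits on success, rolls back on error
        con.execute("INSERT INTO po_header VALUES (1001, 'ACME')")
        con.execute("INSERT INTO po_detail VALUES (1001, 'WIDGET', 5)")
        # a disconnect here would leave neither row behind
except sqlite3.Error:
    pass  # nothing was committed, so header and detail stay consistent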
hth,
Joseph
#5
Posted 11 June 2004 - 12:55 PM
I agree completely with Joseph. My initial experience with Pro IV was on Chess ver 1.10 (the Pro ISAM precursor to Glovia), and we converted to Glovia ver 4.2 (and now 5.2) on Oracle 8i. So many major problems simply no longer exist. I no longer have to perform the index file rebuilds. I don't have to worry about what state the database is in when the power goes out, or when a user drops off the network - Oracle rolls back the uncommitted transactions. If I want to root through a table for data that isn't in the key, I can do it using SQLPLUS, and I don't have to create an entire new key file and function to do it.
We've switched to using Cognos Impromptu for reporting, so the users write the vast majority of their custom reports - and publish them on our intranet (saving on printing). Aside from maintaining the joins in the data catalogs (and an occasional "where do I find this data?" question), I'm not involved.
Yes, Oracle comes with its own set of problems (it's costly, it's a resource pig, and the conversion was painful), but at this point I wouldn't hit a dead dog in the posterior with a Pro ISAM database.
Andy
#7
Posted 14 June 2004 - 12:58 PM

The one downside to memory files is that they stay resident in memory for each user until the user logs out; only then is the memory released. Think of memory files as @$COM or @#COM variables - each pertains only to the user's session.
HTH Bill
#8
Posted 14 June 2004 - 01:13 PM
Although the memory file does stay resident, the space allocated can be released by clearing the file when you're finished using it. Defining a blocksize per file can also be useful if managing memory will be a big issue - the default is enough space for 1000 records.
#10
Posted 15 June 2004 - 03:57 AM
Yes, when you log out the memory is released.
But you have to be careful about the amount of memory on your hardware and the number of users, since an extra XX MB per user on a system that has 500+ users could cripple you.
ProIV should be releasing the memory, or at least provide a command to release it, since this is effectively a 'memory leak'.
I reported this when version 5.0 was being beta tested, but it looks like it has not been addressed.
Rob.
#11
Posted 15 June 2004 - 03:58 AM
Chris, you wrote: "Although the memory file does stay resident, the space allocated can be released by clearing the file when you're finished using it. Defining a blocksize per file can also be useful if managing memory will be a big issue - the default is enough space for 1000 records."
How do you release the space??
Setting the clear flag or deleting the records does not release the memory used...
Thanks,
Rob D.
#12
Posted 15 June 2004 - 01:12 PM
Follow up question:
How serious a threat is writing too many records to a MYK table? For instance, on a 100-concurrent-user system, would you shy away from using an MYK table if 5-10 users might potentially use the same table at the same time, each writing a couple of thousand rows?
Regards,
Joseph
#13
Posted 15 June 2004 - 03:46 PM
A memory file is entirely local to a ProIV process/thread; it cannot be shared by multiple users/processes/threads, and there is (to my understanding) no concurrency control that applies to memory file access.
I think most people's concern is that if many ProIV processes use memory files AND write extensive data to them, the 'memory footprint' of the average ProIV process in your application may increase significantly, leading to excessive overall use of virtual memory on the machine.
So the question is how much more virtual memory you are happy to see acquired by your processes - and that could vary greatly from one machine configuration to another.
A guess that you'll need twice as much virtual memory per process as the 'peak' total size of the memory-file records you create/update is probably not a bad place to start. E.g. if a process writes 1000 records of 512 bytes on average, assume that adds 1 MB to the process's memory footprint.
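Back-of-envelope, in Python, using those numbers (every input here is an assumption - plug in your own figures):

records = 1000        # peak memory-file records per process
rec_bytes = 512       # average record size
users = 100           # concurrent ProIV processes

per_process_mb = records * rec_bytes * 2 / 2**20   # 2x safety factor
print(f"~{per_process_mb:.0f} MB per process, "
      f"~{per_process_mb * users:.0f} MB across {users} users")

That works out to roughly 1 MB per process and 100 MB across 100 users - and on Rob's 500+ user system that's already about half a gigabyte of virtual memory tied up.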
As Rob says, don't at this stage assume that any of this memory is being reclaimed or released unless the process terminates.
I'm sure others actually using this have more accurate guidelines by now.

#14
Posted 15 June 2004 - 04:20 PM
Yes - you've explained my question better than I did. The concern I foresee (even more so given that the memory is not cleaned up) is: could excessive use of memory files ultimately crash the server?
All of our use of temp files is currently based on clearing out the file before using it, as opposed to after using it. If we were to move all of our temp files to memory files, there could be several different instances of memory files per client, and each memory file might have hundreds or thousands of stray records in it.
As this gets multiplied out by dozens of users, on servers already running against a memory-hungry database like Oracle or SQL Server, I would think that too much memory could be used...
Anyway, it would be helpful to hear from folks who have used memory files extensively.
Regards,
Joseph
#15
Posted 15 June 2004 - 04:32 PM

HTH
Bill