
Timeout and incomplete PRO ISAM data update


21 replies to this topic

#1 Surajit

Surajit

    Expert

  • Members
  • 132 posts
  • Gender:Male
  • Location:Land of Smiles :-)

Posted 10 June 2004 - 09:34 AM

Hi all,

My customer, who is using a PROIV 5.5 application with PRO ISAM files, wants to use the timeout feature. However, if a user is creating a purchase order and is interrupted while entering an ordered item, the screen is terminated and exits. This causes the detail file to be updated without its corresponding header record. How can I prevent the child process from updating the file, or force the parent process to update the file, when a timeout is encountered? The customer wants idle processes to be terminated so that the licence seat is returned for other users.

Thanks,
Surajit

#2 George Macken

George Macken

    ProIV Guru

  • Members
  • 248 posts
  • Gender:Male
  • Location:Co. Wicklow, Ireland

Posted 10 June 2004 - 10:04 AM

I've had experience of this in the past using pro-isam - you've got to design your system/functions to cope with how pro-isam files perform commit processing. If you are using an ORACLE DB you will generally not encounter the problem you are currently having.

There may be other solutions, or perhaps something present in later versions of pro-iv pro-isam - this was the solution I used in Version 1.5 and it's still valid.

To work around your current problem in pro-isam:

Copy your PO Header and PO Detail file definitions to make workfiles - POHW and PODW.

P.O. process:

1) Call Update-1 to clear the PO Header workfile and PO Detail workfile - deleting all records keyed on the current terminal.

2) P.O. processing screen - replace the current files with the workfiles, and write the P.O. to the workfiles with the Terminal ID as the key.

3) Once the user confirms the P.O., execute Update-2 to copy the P.O. from the workfiles to the real P.O. files, assigning the key etc., then send the P.O. number to Update-3 - a simple function which uses UMSG to display the P.O. number of the transaction created.

This way the screen function will only be accessing workfiles, not true transactions - if the connection is broken then it's only workfile data which is out of sync, and when the P.O. process is next executed the workfiles are cleared beforehand by Update-1.

Additionally, this means that any keys/counters for the P.O. process are only assigned once the screen is confirmed, so if the connection is broken then no keys/counters will be lost/unused.
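For anyone following along, the flow above can be sketched roughly like this (pseudocode only - the function and workfile names follow George's description, but the logic outline is an assumption, not actual ProIV code):

```
Update-1 (clear workfiles):
    for each POHW record where key = @TERM : delete
    for each PODW record where key = @TERM : delete

P.O. screen (workfiles only, no real transaction yet):
    write header  -> POHW, keyed on Terminal ID
    write each line -> PODW, keyed on Terminal ID + line number

Update-2 (runs only on user confirmation):
    assign the next real P.O. number from the counter
    copy POHW record  -> real PO Header file with the new key
    copy PODW records -> real PO Detail file with the new key
    pass the new P.O. number to Update-3 (UMSG display)
```

If the session times out mid-entry, only POHW/PODW hold orphaned rows, and Update-1 removes them the next time the P.O. process is started.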

There may be other historical postings dealing with pro-isam commit processing - use the search.

hope this helps

Rgds

George

#3 Bill Loven

Bill Loven

    Expert

  • Members
  • 147 posts
  • Gender:Male
  • Location:Coppell, United States

Posted 10 June 2004 - 12:40 PM

;) Since you are on 5.5, think about using memory files. They work the same way as Pro-Isam files. When your transactions are complete, write them to your permanent files.

We use them a lot.

HTH

Bill.

#4 Joseph Bove

Joseph Bove

    ProIV Guru

  • Members
  • 756 posts
  • Gender:Male
  • Location:Ramsey, United States

Posted 10 June 2004 - 09:14 PM

Surajit,

Here's a simple solution.

Turn off timeout processing on the handful of functions that will be a problem.

Coming into the function:

#TIMEOUT = @TIMEOUT
@TIMEOUT = 0

Going out:

@TIMEOUT = #TIMEOUT

Long term solution - Switch to a SQL database and use transactional processing.

hth,

Joseph

#5 ashumway

ashumway

    Newbie

  • Members
  • 8 posts
  • Gender:Male

Posted 11 June 2004 - 12:55 PM

Surajit,

I agree completely with Joseph. My initial experience with ProIV was on Chess ver 1.10 (the Pro ISAM precursor to Glovia); we converted to Glovia ver 4.2 (and now 5.2) on Oracle 8i. So many major problems simply no longer exist. I no longer have to perform index file rebuilds. I don't have to worry about what state the database is in when the power goes out, or when a user drops off the network - Oracle rolls back the uncommitted transactions. If I want to root through a table using data that isn't in the key, I can do it with SQLPLUS, and I don't have to create an entire new key file and function to do it.
We've switched to using Cognos Impromptu for reporting, so the users write the vast majority of their custom reports - and publish them on our intranet (saving on printing). Aside from maintaining the joins in the data catalogs (and an occasional "where do I find this data?" question), I'm not involved.
Yes, Oracle comes with its own set of problems (it's costly, it's a resource pig, the conversion was painful), but at this point I wouldn't hit a dead dog in the posterior with a Pro ISAM database.

Andy

#6 Surajit

Surajit

    Expert

  • Members
  • 132 posts
  • Gender:Male
  • Location:Land of Smiles :-)

Posted 14 June 2004 - 02:04 AM

Thanks for all responses,

Budget is a major constraint for my customer; last year they considered converting to an RDBMS but decided not to go ahead.

Bill, can you give me more information about memory files ?

Surajit

#7 Bill Loven

Bill Loven

    Expert

  • Members
  • 147 posts
  • Gender:Male
  • Location:Coppell, United States

Posted 14 June 2004 - 12:58 PM

;) Surajit, copy your file definition to a new file name - we normally use W or WK as the first two characters of the copied file - then change the file type from Pro-Isam to MKY. From that point it functions just like a Pro-Isam file. We clear each memory file before the start of a transaction. When all is complete, changes, additions and deletions are written to the main tables or files. For deletions, we add a delete flag to the end of the memory file record and check it on the writes.
The one down side to memory files is that they stay resident in memory for each user until the user logs out; the memory is released then. Think of memory files as @$COM or @#COM variables - each only pertains to the user's session.
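A rough sketch of that write-back step, for anyone new to the technique (pseudocode only - the WK prefix and the delete flag follow Bill's description; the file names WKORDL/ORDL and the flag field are illustrative assumptions):

```
Before the transaction:
    clear WKORDL (memory file, type MKY)

During the screen (memory file only, nothing permanent touched):
    add/change -> write WKORDL record
    delete     -> write WKORDL record with DELFLAG = "Y"

On confirmation (update function):
    for each WKORDL record:
        if DELFLAG = "Y" : delete the matching ORDL record
        else             : write/overwrite the ORDL record
    clear WKORDL again
```

If the user times out mid-transaction, only the memory file is dirty, and it is cleared before the next transaction starts.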


HTH Bill

#8 Chris Mackenzie

Chris Mackenzie

    ProIV Guru

  • Members
  • 368 posts
  • Gender:Male
  • Location:Bristol, United Kingdom

Posted 14 June 2004 - 01:13 PM

Although the memory file does stay resident, the space allocated can be released by clearing the file when you're finished using it. Defining a blocksize per file can also be useful if managing memory will be a big issue - the default is enough space for 1000 records.
The content and views expressed in this message are those
of the poster and do not represent those of any organisation.

#9 Surajit

Surajit

    Expert

  • Members
  • 132 posts
  • Gender:Male
  • Location:Land of Smiles :-)

Posted 15 June 2004 - 02:01 AM

;) Thanks guys,

That looks quite easy. I think the memory allocation issue may not be serious if the user is forced to log out by the timeout - the memory is returned then, am I right?

Surajit

#10 Rob Donovan

Rob Donovan

    rob@proivrc.com

  • Admin
  • 1,640 posts
  • Gender:Male
  • Location:Spain

Posted 15 June 2004 - 03:57 AM

Hi,

Yes, when you log out the memory is released.

But you have to be careful about the amount of memory on your hardware and the number of users.

An extra XXmb per user, on a system that has 500+ users, could cripple you.

ProIV should be releasing the memory, or at least have a command to release it, since this is effectively a 'memory leak'.

I reported this when version 5.0 was being beta tested, but it looks like it has not been addressed.

Rob.

#11 Rob Donovan

Rob Donovan

    rob@proivrc.com

  • Admin
  • 1,640 posts
  • Gender:Male
  • Location:Spain

Posted 15 June 2004 - 03:58 AM

Although the memory file does stay resident, the space allocated can be released by clearing the file when you're finished using it. Defining a blocksize per file can also be useful if managing memory will be a big issue - the default is enough space for 1000 records.

Chris,

How do you release the space??

Setting the clear flag or deleting the records does not release the memory used...

Thanks,

Rob D.

#12 Joseph Bove

Joseph Bove

    ProIV Guru

  • Members
  • 756 posts
  • Gender:Male
  • Location:Ramsey, United States

Posted 15 June 2004 - 01:12 PM

Chris,

Follow up question:

How serious a threat is writing too many records to an MKY table? For instance, on a 100-concurrent-user system, would you shy away from using an MKY table if there were a potential that 5-10 users might use it at the same time, each writing a couple of thousand rows?

Regards,

Joseph

#13 Richard Bassett

Richard Bassett

    ProIV Guru

  • Members
  • 696 posts
  • Location:Rural France

Posted 15 June 2004 - 03:46 PM

I'm sure Joseph knows this, but in case anyone has misunderstood slightly..

A memory file is entirely local to a ProIV process/thread; it cannot be shared by multiple users/processes/threads, and there is (to my understanding) no concurrency control that applies to memory file access.

I think most people's concern is that if many ProIV processes use memory files AND write extensive data to them, the 'memory footprint' of the average ProIV process in your application may increase significantly, leading to excessive overall use of virtual memory on the machine.

So the question is how much more virtual memory you are happy to see acquired by your processes - and that could vary greatly from one machine configuration to another.

A guess that you'll need twice as much virtual memory per process as the 'peak' total size of the memory-file records you create/update is probably not a bad place to start. E.g. if some process writes 1000 records of 512 bytes on average, assume that adds 1MB to the process's memory footprint.

As Rob says, don't at this stage assume that any of this memory is being reclaimed or released unless the process terminates.

I'm sure others actually using this have more accurate guidelines by now ;)
Nothing's as simple as you think

#14 Joseph Bove

Joseph Bove

    ProIV Guru

  • Members
  • 756 posts
  • Gender:Male
  • Location:Ramsey, United States

Posted 15 June 2004 - 04:20 PM

Richard,

Yes - you've explained my question better. The concern that I foresee (all the more so given that the memory is not cleaned up) is: could excessive use of memory files ultimately crash the server?

All of our use of temp files currently is based on clearing out the file before using it - as opposed to after using it. If we were to look at moving all of our temp files to memory files, there could be several different instances of memory files per client. Additionally, each memory file might have hundreds or thousands of stray records in it.

As this gets multiplied out by dozens of users, on servers running against a memory-hungry database like Oracle or SQL Server, I would think that too much memory could be used...

Anyway, it would be helpful to hear from folks who have used memory files extensively.

Regards,

Joseph

#15 Bill Loven

Bill Loven

    Expert

  • Members
  • 147 posts
  • Gender:Male
  • Location:Coppell, United States

Posted 15 June 2004 - 04:32 PM

;) Joseph, I use memory files in every application. Our main app is Oracle and our remote app is pro-isam. I may have as many as 15 memory files per user. A technique I use is to clear each memory file before use and after use. I have completely abandoned Pro-Isam and Oracle temporary work files. Yes, I know that this creates memory leaks, as Rob said, but in my opinion the benefits outweigh the risks. We have been on 5.5 since it came out. We were on 5.0 before that and started using memory files then. Our Oracle app with memory files has never crashed our server, and we have a bunch of power users.

HTH
Bill


