ProIV + Postgres = Extremely slow


16 replies to this topic

#1 mikelandl

mikelandl

    Expert

  • Members
  • PipPipPipPip
  • 101 posts

Posted 20 November 2013 - 09:38 PM

We are in the middle of changing our application from using ProIsam files to using Postgres. We have been successful in exporting data from ProIsam and importing it into Postgres. We are now starting to run various functions and make the necessary changes (i.e. remove sort/select, dsel, etc.). 

The speed we have been experiencing is terrible. A simple test report, which reads an order header table and then reads a customer master table for each order header record, takes approximately 10 seconds to run against Postgres versus approximately 1-2 seconds against ProIsam. We are issuing our own SQL select statement to drive the cycle and filter out unneeded records, and ProIV is taking care of reading the secondary (customer) table (a direct mapping from order header to customer). I have enabled logging in Postgres and noticed that a savepoint is created each time the customer table is read. This doesn't make sense to me: this is a report function, all files/tables are in look mode, and there is no chance of data being modified, inserted, or deleted, so why is ProIV doing this? I am not absolutely sure this is the one and only cause of the slow performance, but it certainly can't be helping.

Has anyone come across this before? Is this normal? ProIV's "solution" is to create a view which does result in much faster performance but there is no way we can do this for 5000+ functions.
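The report's access pattern, a driving SELECT over the order header plus one keyed read of the customer table per row, can be sketched like this (all table and column names are hypothetical, and sqlite3 stands in for Postgres):

```python
import sqlite3

# In-memory stand-in for the two tables (names are invented for illustration).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE order_header (order_no INTEGER PRIMARY KEY, cust_no TEXT);
    CREATE TABLE customer (cust_no TEXT PRIMARY KEY, name TEXT);
    INSERT INTO customer VALUES ('C1', 'Acme'), ('C2', 'Globex');
    INSERT INTO order_header VALUES (1, 'C1'), (2, 'C2'), (3, 'C1');
""")

# The driving SELECT, plus one secondary read per row: N header rows cost
# N+1 round trips, which is where the time goes on large reports.
rows = []
for order_no, cust_no in con.execute("SELECT order_no, cust_no FROM order_header"):
    (name,) = con.execute(
        "SELECT name FROM customer WHERE cust_no = ?", (cust_no,)
    ).fetchone()
    rows.append((order_no, name))

print(rows)  # [(1, 'Acme'), (2, 'Globex'), (3, 'Acme')]
```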



#2 mikelandl

mikelandl

    Expert

  • Members
  • PipPipPipPip
  • 101 posts

Posted 21 November 2013 - 06:01 PM

I found the issue or at least part of it. In odbc.ini on the server we had the wrong protocol version setup. Changing it to the correct one has made a big improvement. I am still curious about all of the savepoints though. If anyone has any thoughts on that I would be interested to hear them.
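For reference, the relevant odbc.ini entry would look something like this. This assumes the psqlODBC driver, which the thread doesn't name explicitly; the DSN name, paths, and database are invented, and the Protocol value is only an example, so check it against your server version:

```ini
[ProIVPostgres]
# Hypothetical DSN; the driver path varies by distribution.
Driver     = /usr/lib/psqlodbcw.so
Servername = localhost
Database   = proivdata
# psqlODBC's protocol setting. If memory serves, the suffix after the
# dash selects the rollback-on-error level (0 = none, 1 = transaction,
# 2 = statement); statement level wraps every statement in a savepoint.
Protocol   = 7.4-1
```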



#3 mikelandl

mikelandl

    Expert

  • Members
  • PipPipPipPip
  • 101 posts

Posted 21 November 2013 - 08:40 PM

After more testing it looks like changing the protocol version did help quite a bit but not enough. 

When comparing execution speed for ProIsam versus a database (using ProIV) what would people consider a reasonable difference? 20%? 50%? 100%?

In a few smaller test reports the speed appears to be almost the same, but I am sure this is because each takes less than a few seconds to run, and technically the ProIsam report is still finishing sooner. When I started testing with "real" reports, the ProIV environment running Postgres took on average 4-5 times longer, even after removing any sorts and DSELs and building them into my selection statement instead. This is much better than before but still unusable as far as I am concerned.



#4 Ross Bevin

Ross Bevin

    ProIV Guru

  • Members
  • PipPipPipPipPip
  • 226 posts
  • Gender:Male
  • Location:Manzanillo, Colima, Mexico

Posted 30 November 2013 - 01:39 PM

Hi Mike,

 

We migrated from Pro-ISAM to MS SQL Server back in 2003. If I remember correctly, my benchmark testing at the time showed SQL to be 50% slower. However, because we were upgrading to the latest hardware too, the system ran faster than it did before, so the users were happy. This is probably not the cause of your issue, but another thing we have done a fair bit of to speed things up is add multiple indices to our tables. In the past we may have been DSELing on a non-key field; when we first changed to SQL we just moved this into the SQL DYNAMIC statement. We noticed that on large SELECTs this was slower than, or the same as, just DSELing afterwards. So we created an alternate index and made the field part of the key. This dramatically improved reporting performance for us.
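A minimal sketch of the alternate-index idea, with invented table and column names (sqlite3 standing in for SQL Server): once the filter column is indexed, the SELECT no longer scans the whole table.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE order_header (
        order_no INTEGER PRIMARY KEY,
        status   TEXT,      -- the non-key field we used to DSEL on
        cust_no  TEXT
    );
    INSERT INTO order_header VALUES
        (1, 'OPEN', 'C1'), (2, 'CLOSED', 'C2'), (3, 'OPEN', 'C3');

    -- The alternate index: filtering on status becomes an index lookup
    -- instead of a full-table scan.
    CREATE INDEX idx_order_header_status ON order_header (status, order_no);
""")

open_orders = con.execute(
    "SELECT order_no FROM order_header WHERE status = 'OPEN' ORDER BY order_no"
).fetchall()
print(open_orders)  # [(1,), (3,)]
```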

 

I know you are probably committed to Postgres, but how about running the same tests against SQL Server? MS has a free trial option. At least you would have another database to compare with. If SQL Server proves to be 2 to 3 times faster than Postgres, then either Postgres still requires more tweaking or it's not a good solution.

 

Good luck!

 

Regards

Ross



#5 mikelandl

mikelandl

    Expert

  • Members
  • PipPipPipPip
  • 101 posts

Posted 02 December 2013 - 05:07 PM

Hi Ross,

 

Thanks for the suggestions. 

We fully intend to make use of indexes wherever possible. Unfortunately, in this simple test report we aren't doing any filtering at all: just a sequential read of the primary table, then reading one record of a secondary table for each row returned from the initial select statement. We have an index set up on the secondary table, so each read should be almost a straight read (minus the time required to traverse the index tree). I have a PostgreSQL consulting company taking a look at our setup this afternoon to see if they can spot any issues. We mentioned to them how ProIV issues separate reads for all secondary tables in a function, and they didn't sound too excited about that, to say the least.

 

Unfortunately SQL Server wouldn't be an option for us as all of our customers are running on Linux.

 

One thing I've noticed in the PostgreSQL logs is that each time a select statement is issued, a savepoint is created, followed by the select statement, followed by the savepoint being deleted. Also, for each select statement a second select statement is automatically issued that queries PostgreSQL's table definition catalog. The consulting company said this is not something PostgreSQL would do on its own, so ProIV must be telling it to. If you have the time, would you be able to try something for me? Could you enable logging in SQL Server and run a basic report that reads at least two tables? I would assume all of the tables in the report would be in look mode. Could you then review the SQL Server log file and let me know if any strange select statements show up before the select statements for the secondary tables? Also, whether any mention of savepoints or begin/commit shows up for each select statement. Thanks!
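The pattern described above would look something like this in the PostgreSQL statement log. The statement text and savepoint name are illustrative, and the suggested cause is an assumption rather than something confirmed in this thread: the psqlODBC driver has a statement-level rollback-on-error mode that wraps each statement in a savepoint (so a failed statement doesn't abort the whole transaction), and ODBC drivers commonly query the system catalogs for column metadata, which would account for the extra select. If that is the source, the savepoints come from the driver rather than from ProIV or PostgreSQL itself.

```
LOG:  statement: SAVEPOINT _EXEC_SVP_0ABC1234
LOG:  statement: SELECT name FROM customer WHERE cust_no = 'C1'
LOG:  statement: RELEASE _EXEC_SVP_0ABC1234
```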



#6 Ross Bevin

Ross Bevin

    ProIV Guru

  • Members
  • PipPipPipPipPip
  • 226 posts
  • Gender:Male
  • Location:Manzanillo, Colima, Mexico

Posted 03 December 2013 - 06:12 PM

Hi Mike,

 

Sorry for the late reply; I wasn't notified of this post. I have now set this topic to "follow".

 

I will do your test in the morning when no other developers are logged on.

 

Regards

Ross



#7 Ross Bevin

Ross Bevin

    ProIV Guru

  • Members
  • PipPipPipPipPip
  • 226 posts
  • Gender:Male
  • Location:Manzanillo, Colima, Mexico

Posted 04 December 2013 - 01:40 PM

Hi Mike,

 

I ran a small Statement Report for customer PFL 0000002 and attached is my SQL Server trace file, Ross.trc. There are a number of secondary files read after the primary.

 

Regards

Ross



#8 mikelandl

mikelandl

    Expert

  • Members
  • PipPipPipPip
  • 101 posts

Posted 04 December 2013 - 01:44 PM

Hi Mike,

 

I ran a small Statement Report for customer PFL 0000002 and attached is my SQL Server trace file, Ross.trc. There are a number of secondary files read after the primary.

 

Regards

Ross

 

Hi Ross,

 

Thanks, I appreciate it but I don't see the file.



#9 Ross Bevin

Ross Bevin

    ProIV Guru

  • Members
  • PipPipPipPipPip
  • 226 posts
  • Gender:Male
  • Location:Manzanillo, Colima, Mexico

Posted 04 December 2013 - 01:58 PM

maybe this time ...

Attached Files

  • Attached File  Ross.zip   35.23KB   21 downloads


#10 mikelandl

mikelandl

    Expert

  • Members
  • PipPipPipPip
  • 101 posts

Posted 04 December 2013 - 02:31 PM

Thanks! Much appreciated.



#11 Ross Bevin

Ross Bevin

    ProIV Guru

  • Members
  • PipPipPipPipPip
  • 226 posts
  • Gender:Male
  • Location:Manzanillo, Colima, Mexico

Posted 04 December 2013 - 03:19 PM

You're welcome. If you need me to dump it to a table let me know. I can then paste it into a spreadsheet.



#12 DARREN

DARREN

    ProIV Guru

  • Members
  • PipPipPipPipPip
  • 396 posts
  • Gender:Male
  • Location:Florida,USA

Posted 04 December 2013 - 08:31 PM

Where we know we will ALWAYS be reading a given secondary file, we have sometimes created a view that combines the primary with the secondary (or secondaries). There is obviously a slight maintenance overhead, but we had one report that was running for several hours that we got down to several minutes using this technique.
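Darren's view technique, as a minimal sketch with hypothetical names (sqlite3 standing in for the real database): the view lets the server perform the join once, so the per-row secondary reads disappear.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE order_header (order_no INTEGER PRIMARY KEY, cust_no TEXT);
    CREATE TABLE customer (cust_no TEXT PRIMARY KEY, name TEXT);
    INSERT INTO customer VALUES ('C1', 'Acme'), ('C2', 'Globex');
    INSERT INTO order_header VALUES (1, 'C1'), (2, 'C2'), (3, 'C1');

    -- One view combining primary and secondary: the report then drives
    -- off the view, and the join happens server-side in a single cycle.
    CREATE VIEW order_with_customer AS
        SELECT h.order_no, h.cust_no, c.name
        FROM order_header h JOIN customer c ON c.cust_no = h.cust_no;
""")

rows = con.execute(
    "SELECT order_no, name FROM order_with_customer ORDER BY order_no"
).fetchall()
print(rows)  # [(1, 'Acme'), (2, 'Globex'), (3, 'Acme')]
```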


Things should be made as simple as possible, but not simpler

#13 Neil Barber

Neil Barber

    Member

  • Members
  • PipPip
  • 11 posts
  • Gender:Male
  • Location:Milton Keynes, United Kingdom

Posted 05 December 2013 - 01:18 AM

I'm going to make an assumption that your PROIV/Postgres install is sandboxed, so you can (a) exclude pollution of your results by a third-party user, application, or network, (b) make changes to configuration files without having to throw users off the system in order to reboot, and (c) back up and restore to/from a known point in time.

 

If you haven't, put PROIV into debug mode. You'll find all the information you need at http://support.proiv...cumentation.htm, just search for topic id 750024, but if that doesn't work PROIV Documentation > Administrator > Windows Environment Guide > Debugging Facilities > General Trace System Options will get you there.

 

Start with TRACEALL=3, TRACEFILE=5 and TRACESQL=8 and see if that helps, then adjust as necessary. The log file can get large very quickly, so ensure it's getting written to a partition that can handle it.
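On Linux these are typically set as environment variables before starting PROIV; a minimal sketch, with the values taken from the post and everything else hypothetical:

```shell
# Trace levels from the suggestion above; tune down once you've seen the output.
export TRACEALL=3
export TRACEFILE=5
export TRACESQL=8
echo "tracing: ALL=$TRACEALL FILE=$TRACEFILE SQL=$TRACESQL"
```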

 

You will, no doubt, have already analysed your application to get an idea of its transaction boundaries, but PROIV's default behavior is autocommit on, and that might not be helping here, based on its "committable LS" rules, so try running your test with it turned off for the function in question by enabling &#@SUPP-COMM in function entry logic, and disabling it in function exit logic.

 

 

Regards



#14 mikelandl

mikelandl

    Expert

  • Members
  • PipPipPipPip
  • 101 posts

Posted 05 December 2013 - 02:17 PM

Where we know we will ALWAYS be reading a given secondary file, we have sometimes created a view that combines the primary with the secondary (or secondaries). There is obviously a slight maintenance overhead, but we had one report that was running for several hours that we got down to several minutes using this technique.

 

Hi Darren,

 

Thanks. ProIV also suggested creating views. We tried it and it made a noticeable difference. Unfortunately we have 5000+ functions in our system. I'm not sure how many read multiple tables in a cycle but I have a feeling it would be in the 1000's and I don't think we have the resources to tackle a project of that magnitude.



#15 mikelandl

mikelandl

    Expert

  • Members
  • PipPipPipPip
  • 101 posts

Posted 05 December 2013 - 02:19 PM

I'm going to make an assumption that your PROIV/Postgres install is sandboxed, so you can (a) exclude pollution of your results by a third-party user, application, or network, (b) make changes to configuration files without having to throw users off the system in order to reboot, and (c) back up and restore to/from a known point in time.

 

If you haven't, put PROIV into debug mode. You'll find all the information you need at http://support.proiv...cumentation.htm, just search for topic id 750024, but if that doesn't work PROIV Documentation > Administrator > Windows Environment Guide > Debugging Facilities > General Trace System Options will get you there.

 

Start with TRACEALL=3, TRACEFILE=5 and TRACESQL=8 and see if that helps, then adjust as necessary. The log file can get large very quickly, so ensure it's getting written to a partition that can handle it.

 

You will, no doubt, have already analysed your application to get an idea of its transaction boundaries, but PROIV's default behavior is autocommit on, and that might not be helping here, based on its "committable LS" rules, so try running your test with it turned off for the function in question by enabling &#@SUPP-COMM in function entry logic, and disabling it in function exit logic.

 

 

Regards

 

Hi Neil,

 

Correct, PostgreSQL and ProIV are on the same machine. I tried enabling &#@SUPP-COMM in the entry logic of the report and disabling it in the exit logic, but the PostgreSQL logs still show savepoints being created before each read. I did enable ProIV logging a while ago but can't remember what I saw, so I will re-enable it and review it.




