ProIV paging
Started by siobi, Sep 21 2004 10:38 AM
26 replies to this topic
#16
Posted 23 September 2004 - 11:02 AM
You will never get any official ProIV person to comment on this "on the record". The reason is obvious from some of the comments above.
The change to 'C' was made back in the 1980's.
There have been some major changes to the product since then, but you'll notice that they're all to sub-systems (e.g. all the GUI engine, the file/database system, Bus & Tasks etc.) The Document Function type is entirely new. The Global Function stuff is new, but effectively a wrapper around the standard execution engine.
Let me state also that I have never seen any of the ProIV source code, so none of this can be taken as first hand experience. (However I have been working with ProIV since the early 1980's, including a stint at MDIS).
If the code was easily changeable then the conversion of single byte to double byte pointers would be trivial.
#17
Posted 29 September 2004 - 06:15 AM
Statement from PROIV
The posting from Chris Pepper contains a number of inaccuracies.
1. PROIV was NOT run through an “assembler to ‘C’ convertor”.
For those interested in the history, PROIV Version 1.2 was written in assembler, as was the early 1.3 Version on VAX/VMS, which had been machine translated from PDP11 assembler. The later 1.3 Version was a “human written” ‘C’ implementation, based on the VAX/VMS assembler version and original flow charts.
The current source for PROIV non-mainframe platforms has its roots in the Version 1.3 ‘C’ implementation.
2. The PROIV kernel IS documented.
3. It is true that some of the limits in the product are not easily changed, but that does not mean that they cannot be. Following the previous EMEA & US Executive User Group meetings we have been asked to investigate this area, which we are currently doing.
James
On Behalf of PROIV / Northgate-IS
#18
Posted 29 September 2004 - 07:18 AM
Thank you James. I'm glad I've provoked a formal response from ProIV at last.
As I pointed out, I had no direct knowledge - but I thought that it was important to document some of the rumours that have been circulating in the ProIV world for the last 12 years or more.
This is the single most important factor for many users (including the company I work for).
The owners of ProIV have repeatedly refused to do anything about it.
It is certainly true from my own experience that when Sushil was negotiating with MDIS about Superlayer, one of the hopes of senior management was that he would put some effort into explaining how the bl**dy thing worked.
One of the most difficult problems is that the amount of workspace varies on different platforms / compilers / versions.
#19
Posted 29 September 2004 - 10:06 AM
I am not sure whether I should open a separate topic for this. One other problem is that PROIV / SL seems to behave differently on different platforms; as noted above, the amount of workspace varies across platforms / compilers / versions.
I have an SL screen function which works perfectly on my Windows platform. However, when I export/import it into an AIX environment, it just core dumps with a segmentation error. I have to decrease the number of local /US/ variables and change them to Global LS calls in order to make it run. Not to mention that sometimes a function hits the workspace limitation on AIX while the same function is fine on Windows.
#20
Posted 29 September 2004 - 10:59 AM
Functions start to run strangely once you get near the maximum workspace.
I have spent plenty of time in the past trying to find bugs that turned out to be because the function was too big, but not big enough to give a gen error.
Rob D.
#23
Posted 29 September 2004 - 07:01 PM
Tony,
It would be nice to get a response from ProIV on this.
In native, in @FUN, you can see stats on the logic size, global logic size, etc. of any function. This has always, in my mind, been background info of little to no use. However, if there were some guidelines about which values are borderline, it would be very easy to quickly check all functions.
However, this does not necessarily tell the full story either... Sometimes, if too many global functions are called in succession, things can start to act funny too. That said, the later 5.5 kernels are a lot more stable with global functions than the early 4.6 ones were.
Regards,
Joseph
#24
Posted 29 September 2004 - 09:13 PM
Yes, the stats in @FUN give you a rough idea.
But the problem is, as Joseph says, that during runtime the function uses up an unspecified amount of workspace for Global functions, Table columns and Arrays.
Unfortunately, there is nothing more to say than 'the function starts to run strangely', because the things that start to happen seem totally random...
Rob D.
#25
Posted 30 September 2004 - 02:00 PM
I, too, have run into the same issue several times, Rob.
I suspect it is related to the same issue that plagues Windows 98 systems: you must not run more than 512MB of RAM or you will get "unusual" results. I have a machine that I have been running for several years with 768MB of RAM, and it occasionally gave me the "Blue Screen of Death". Now Microsoft says that it is TOO much memory and the OS cannot reliably use it. They say that it has to do with the compilers used on different functions.
As a matter of practice I try VERY hard to break larger functions up into smaller modules. The code is easier to troubleshoot, more portable, and it RUNS better.
My $.1705 worth (adjusted for inflation!)
Glenn
#26
Posted 01 October 2004 - 07:27 AM
The other issue is that there are several completely different situations where an EXCEEDED WORKSPACE message can appear:
When genning:
If a single logic or global logic is too large (this normally fails very quickly in the gen)
If the total size of the function is too large (this could be the total size of defined variables, or too much logic, or too many files). No indication is given as to which of these has failed. (If it is marginal, then genning the Function on a different system that allows a larger workspace can let you see the sizes of the particular components in the @FUN screen.)
At run time:
Effectively when there is too much recursion and presumably it runs out of stack space.
When it is marginal then a stack dump seems to appear rather than the EXCEEDED WORKSPACE message.
In certain cases it appears that if the ProIV option to trace workspace is on, then Functions that previously would not gen, will gen!