FORGE(R) 90's Shared Memory Parallelizer (SMP) performs interprocedural analysis. Unlike parallelizing compilers, which typically fail on the most important DO loops in a program, SMP can handle loops that call subroutines. Its strategy is to parallelize for high granularity by analyzing outermost loops first. It resolves array and scalar dependencies across subprogram boundaries by tracing references through the database up and down the call tree. The result is parallelized source code with compiler-specific directives inserted to scope variables and to identify Critical and Ordered regions of code.
John M. Levesque
President, Applied Parallel Research, Inc.
550 Main Street, Suite I, Placerville, CA 95667 USA
Tel: 916-621-1600  Fax: 916-621-0593
levesque@apri.com
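
To illustrate the kind of annotated output described in the abstract, the sketch below shows an outermost DO loop that calls a subroutine, with OpenMP-style directives standing in for the compiler-specific directives SMP would insert. The directive syntax, routine names, and the particular scoping shown are illustrative assumptions, not actual SMP output.

C     Illustrative sketch only: OpenMP-style directives stand in for the
C     compiler-specific directives SMP inserts; names and scoping here
C     are assumptions, not actual SMP output.
      SUBROUTINE DRIVER(A, B, N, TOTAL)
      INTEGER N, I
      REAL A(N), B(N), TOTAL
      TOTAL = 0.0
C     The outermost loop is parallelized even though it calls WORK;
C     interprocedural analysis determines that I is private and that
C     A, B, and TOTAL are shared across subprogram boundaries.
C$OMP PARALLEL DO SHARED(A, B, N, TOTAL) PRIVATE(I)
      DO 10 I = 1, N
         CALL WORK(A(I), B(I), TOTAL)
   10 CONTINUE
C$OMP END PARALLEL DO
      END

C     TOTAL is updated inside the called routine, so the update is
C     guarded as a Critical region to keep the outer loop parallel.
      SUBROUTINE WORK(X, Y, TOTAL)
      REAL X, Y, TOTAL
      X = X + 2.0 * Y
C$OMP CRITICAL
      TOTAL = TOTAL + X
C$OMP END CRITICAL
      END

Parallelizing the outer loop rather than any loop inside WORK keeps the granularity high: each parallel iteration does an entire call's worth of work, and only the shared scalar update is serialized.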