double res[MAX]; int i;
#pragma omp parallel
{
    #pragma omp for
    for (i = 0; i < MAX; i++) {
        res[i] = huge();
    }
}
The for loop is executed in parallel: the iterations are divided among the threads of the team. Here huge() stands for some function that takes a long time to execute. OpenMP provides a shortcut that combines the two directives into one:
double res[MAX]; int i;
#pragma omp parallel for
for (i = 0; i < MAX; i++) {
    res[i] = huge();
}
We can also add a schedule clause, which affects how loop iterations are mapped to threads. For example:
#pragma omp parallel
#pragma omp for schedule(static)
for (i = 0; i < N; i++) {
    a[i] = a[i] + b[i];
}
The different styles of scheduling are:
schedule(static [,chunk])
Deals out blocks of iterations of size “chunk” to each thread.
If chunk is not specified, the iterations are divided as evenly as possible among the available threads.
schedule(dynamic [,chunk])
Each thread grabs “chunk” iterations off a queue until all iterations have been handled.
schedule(guided [,chunk])
Threads dynamically grab blocks of iterations. The size of the block starts large and shrinks down to size “chunk” as the calculation proceeds.
schedule(runtime)
Schedule and chunk size are taken from the OMP_SCHEDULE environment variable at run time.