Once you have compiled and loaded the code from the three previous examples (ODEs in compiled languages - definition in R, ODEs in compiled languages - definition in C and ODEs in compiled languages - definition in fortran), you can run a benchmark test.
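Everything below assumes that the objects created in those sections are still available in the workspace. As a reminder, a rough sketch of the assumed setup (the names yini, times, parameter, caraxis_R, dllname_C and dllname_fortran come from the earlier examples; the concrete values belong to the caraxis model defined there):
library(deSolve)
## assumed from the previous sections:
## yini            - named vector of initial state values
## times           - vector of output times
## parameter       - named vector of model parameters
## caraxis_R       - plain R function with the model equations
## dllname_C       - name of the shared library built from the C source
## dllname_fortran - name of the shared library built from the Fortran source
## the compiled models must be loaded, e.g.:
## dyn.load(paste0(dllname_C, .Platform$dynlib.ext))
## dyn.load(paste0(dllname_fortran, .Platform$dynlib.ext))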
library(microbenchmark)

# Wrapper functions for the three implementations; each returns the
# integration result so the outputs can be compared below.
R <- function() {
  ode(y = yini, times = times, func = caraxis_R,
      parms = parameter)
}
C <- function() {
  ode(y = yini, times = times, func = "caraxis_C",
      initfunc = "init_C", parms = parameter,
      dllname = dllname_C)
}
fortran <- function() {
  ode(y = yini, times = times, func = "caraxis_fortran",
      initfunc = "init_fortran", parms = parameter,
      dllname = dllname_fortran)
}
Check whether the results are equal:
all.equal(tail(R()), tail(fortran()))
all.equal(R()[,2], fortran()[,2])
all.equal(R()[,2], C()[,2])
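all.equal() compares values up to a small numerical tolerance (about 1.5e-8 by default), which is what we want here, because the three implementations cannot be expected to agree to the last digit. To inspect the actual deviation directly, one can for instance compute:
max(abs(R()[, 2] - fortran()[, 2]))  # largest absolute difference in the first state variable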
Run the benchmark (note: the timings will of course differ on your machine):
bench <- microbenchmark::microbenchmark(
R(),
fortran(),
C(),
times = 1000
)
summary(bench)
      expr       min        lq       mean     median        uq        max neval cld
       R() 31508.928 33651.541 36747.8733 36062.2475 37546.8025 132996.564  1000   b
 fortran()   570.674   596.700   686.1084   637.4605   730.1775   4256.555  1000  a
       C()   562.163   590.377   673.6124   625.0700   723.8460   5914.347  1000  a
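The microbenchmark package also provides plotting methods for the benchmark object, which make the spread of the timings easy to see (autoplot() additionally requires ggplot2 to be installed):
boxplot(bench)            # base graphics boxplots of the timings
ggplot2::autoplot(bench)  # ggplot2 alternative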
We see clearly that the version defined in R is much slower than the versions defined in C and Fortran; with the timings above, the compiled versions are faster by roughly a factor of 50 (compare the medians). For big models it is therefore worthwhile to translate the problem into a compiled language.
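This factor can be computed directly from the benchmark object (summary() returns one row per expression, in the order in which they were passed to microbenchmark()):
med <- summary(bench)$median
med[1] / med[2]  # R compared to Fortran
med[1] / med[3]  # R compared to C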
The package cOde is one option for translating ODEs from R to C automatically.
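With cOde, a system can be written as a named character vector of equations, translated to C and compiled with funC(), and then integrated with odeC(). A minimal sketch following the package documentation (the two-state toy system and all names in it are made up for illustration):
library(cOde)
# a toy two-state system, written as named character equations
f <- c(A = "-k1*A + k2*B",
       B = " k1*A - k2*B")
func  <- funC(f)                       # generate, compile and load the C code
yini  <- c(A = 1, B = 0)
parms <- c(k1 = 0.2, k2 = 0.1)
out   <- odeC(yini, times = seq(0, 10, by = 0.1), func, parms)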