Functional composition is slower than multiple iterations?



I was working through a basic example of functional composition versus chained maps and was quite puzzled by the performance results:

const doubleIt = (x) => x * 2
const render = (x) => `<li>${x}</li>`
const composeRenderAndDouble = function (x) {
    return render(doubleIt(x));
}

const arr = Array(1000).fill(0)

// two passes: one map to double, a second map to render
const mergeArraysAndFunctionalManyMaps = (arr) =>
    arr.map(doubleIt).map(render)

// one pass: a single map with the composed function
const mergeArraysAndFunctionalOneMap = (arr) =>
    arr.map(composeRenderAndDouble)

mergeArraysAndFunctionalManyMaps(arr) // executes in 0.788ms
mergeArraysAndFunctionalOneMap(arr)   // executes in 1.184ms

I have tried running the code several times, and every single time multiple maps is faster than a single map with functional composition inside.

Running in the Node environment (v10).
I would have presumed that multiple maps would be slower, since we are performing multiple iterations over the same array?
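One way to sanity-check that presumption is to add a plain single-loop baseline that builds the result in one pass, with no intermediate array and no composed closure. This is a sketch, not part of the original benchmark; the function name is mine:

```javascript
const doubleIt = (x) => x * 2
const render = (x) => `<li>${x}</li>`

// Single explicit pass: no intermediate array, no extra closure call.
const singleLoop = (a) => {
  const out = new Array(a.length)
  for (let i = 0; i < a.length; i++) {
    out[i] = render(doubleIt(a[i]))
  }
  return out
}

console.log(singleLoop([1, 2, 3]))
// ['<li>2</li>', '<li>4</li>', '<li>6</li>']
```

If both map-based versions lose to this loop, the overhead is in the `map` machinery itself rather than in how many passes are made.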


In my tests, for 1000 elements, one-map is dramatically faster (so the opposite of your result), but for 10,000 elements, many-maps is faster (like for you).

Once I get to 100,000 elements, many-maps seems to be reliably faster all the way up to 10M elements (at 100M, Node crashes).

So I’m only seeing the reversal at one order of magnitude (10k). I don’t have a good answer as to why, but I would look into how each version affects memory/GC, or how evaluating a closure that calls another closure impacts performance.
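To see where the crossover happens, one rough approach (the function names are mine, and the composed map bodies are assumed from the question) is to time both versions across several sizes in one run:

```javascript
const doubleIt = (x) => x * 2
const render = (x) => `<li>${x}</li>`
const composed = (x) => render(doubleIt(x))

const manyMaps = (a) => a.map(doubleIt).map(render)
const oneMap = (a) => a.map(composed)

for (const n of [1e3, 1e4, 1e5, 1e6]) {
  const a = Array(n).fill(0)

  // process.hrtime.bigint() gives nanosecond timestamps (Node >= 10.7).
  let t = process.hrtime.bigint()
  manyMaps(a)
  const manyNs = process.hrtime.bigint() - t

  t = process.hrtime.bigint()
  oneMap(a)
  const oneNs = process.hrtime.bigint() - t

  console.log(`n=${n} manyMaps=${manyNs}ns oneMap=${oneNs}ns`)
}
```

Numbers will vary run to run, and the first sizes measure partly-cold code, so treat any single run with suspicion.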


Really interesting results! I reran it and it gives me different results now:

1.012ms at mergeArraysAndFunctionalOneMap(arr) quokka.js:13:0

0.719ms at mergeArraysAndFunctionalManyMaps(arr) quokka.js:14:0

452.359ms at mergeArraysAndFunctionalOneMap(arrBigger... quokka.js:16:0

614.309ms at mergeArraysAndFunctionalManyMaps(arrBigg... quokka.js:17:0

The bigger array is 100,000 elements. Hm… I still don’t really understand why this is happening: how does Node optimise multiple maps to be faster on a small number of elements?
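One confounder worth ruling out before blaming the optimiser is JIT warm-up: whichever function is timed first runs cold, before V8 has optimised it. A sketch (map bodies assumed from the question) that warms both paths before measuring:

```javascript
const doubleIt = (x) => x * 2
const render = (x) => `<li>${x}</li>`
const composed = (x) => render(doubleIt(x))

const manyMaps = (a) => a.map(doubleIt).map(render)
const oneMap = (a) => a.map(composed)

const arr = Array(1000).fill(0)

// Warm up both code paths so V8 has a chance to optimise them
// before anything is measured.
for (let i = 0; i < 1000; i++) {
  manyMaps(arr)
  oneMap(arr)
}

console.time('manyMaps (warm)')
manyMaps(arr)
console.timeEnd('manyMaps (warm)')

console.time('oneMap (warm)')
oneMap(arr)
console.timeEnd('oneMap (warm)')
```

If the gap shrinks or flips after warm-up, the original numbers were mostly measuring compilation, not the steady-state cost of one pass versus two.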