## Algorithms are NOT the Same between Languages

When asked whether algorithms are the same for all programming languages, my simplistic answer is ‘No’. Yet, it’s actually much more nuanced than that.

Often “No” is an oversimplification. I used it this week as a TL;DR, but it’s worth noting the core principles behind it.

Algorithms within the same programming paradigm are naturally structured the same way. Hence, constraining the paradigm naturally constrains the language. For everything else, the answer is a “sometimes”.

## Background

Computation is a mathematical exercise. That is a meta-mathematical invariant. So I’ll leave that there :)

Programming languages all reduce human-readable content down to machine code in some way. The nature of 3GLs and above is syntactic regularity. Computers don’t do too well translating natural language, inconsistent as it is in its use, into machine code directly.

An algorithm expressed in a mathematical form will have some “equivalent” in a programming language. To get it there, we intuitively constrain the “mathematical” space into that of the paradigm with an appropriate symbolism and map that into keywords of the language.

## Doff Ye Cap to Gödel?

Gödel’s first incompleteness theorem proves that a sufficiently powerful formal system can’t be both complete and consistent at exactly the same time. With most programming languages, consistency rules. To gain consistency (a loop construct used in one part of a program behaves the same as a loop used in another) we have to give up on completeness, which is fine, as the language is defined by a formal grammar anyway. Not every combination of terminal characters is valid. Hence, it’s naturally incomplete. It’ll do :)

The default looping construct in, say, functional programming is recursion. This maps very closely to mathematically natural structures called recurrence relations.

For example, the factorial function is defined by the recurrence relation 0! = 1, n! = n · (n − 1)!, which is presented in a functional language such as Haskell as:

```haskell
factorial 0 = 1
factorial t = t * factorial (t - 1)
```

By contrast, whilst you can do something like this as a C# method:

```csharp
// Stuff above
public int factorial( int t )
{
    return t == 0 ? 1 : t * factorial( t - 1 );
}
```

Many, if not most, of the people I’ve met on my proverbial travels actually write this iterative version:

```csharp
public int factorial( int t )
{
    int result = 1;
    for( int i = t; i > 0; i-- )
    {
        result *= i;
    }
    return result;
}
```

In this case, both of these have the same linear time complexity, O(n), but the recursive function takes up slightly more memory (it stores a stack frame per call). In more advanced cases (e.g. binary and n-ary trees) the way you write it can fundamentally change the time complexity of the algorithm.
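To make that last point concrete, here’s a sketch using the Fibonacci recurrence (my example, not one from the original post): the naive recursive definition follows the recurrence directly but re-solves the same subproblems over and over, giving exponential time, while an accumulator-style rewrite of the *same* recurrence runs in linear time.

```haskell
-- Naive recursion: mirrors the recurrence F(n) = F(n-1) + F(n-2)
-- exactly, but recomputes each subproblem in both branches,
-- so the number of calls grows exponentially in n.
fibNaive :: Integer -> Integer
fibNaive 0 = 0
fibNaive 1 = 1
fibNaive n = fibNaive (n - 1) + fibNaive (n - 2)

-- The same recurrence rewritten with a pair of accumulators:
-- one pass from 0 up to n, so linear time.
fibIter :: Integer -> Integer
fibIter n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)
```

Both compute the same function; `fibNaive 50` is impractical while `fibIter 50` is instant. Same mathematics, different algorithmic cost, purely down to how it was written.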

Additionally, there isn’t necessarily a direct way to express mathematical constructs in programming languages. Computers even technically struggle with fractions: binary floating point can’t represent most decimal fractions exactly. Indeed, Excel doesn’t calculate or store fractions correctly all the time.
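A quick sketch of the fractions point (my illustration, in Haskell since the post already uses it): a binary `Double` cannot represent one-tenth exactly, so a familiar arithmetic identity fails, whereas an exact rational type stores the fraction as a numerator/denominator pair and gets it right.

```haskell
import Data.Ratio ((%))

-- Binary floating point cannot represent 0.1 or 0.2 exactly,
-- so their sum is not exactly 0.3.
floatSum :: Double
floatSum = 0.1 + 0.2

-- An exact rational type keeps the fraction symbolic:
-- 1/10 + 2/10 really is 3/10.
exactSum :: Rational
exactSum = 1 % 10 + 2 % 10
```

Checking `floatSum == 0.3` returns `False`, while `exactSum == 3 % 10` returns `True` — the mathematical object and its default computational representation are not the same thing.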

## Summary

This issue recurs between paradigms. Indeed, in declarative languages it’s even worse, as there usually isn’t a definite concept of a loop, nor of variable assignment. Languages like XSL, TSQL and PL/SQL, and even semi-functional languages like SML, bolt on non-standard operations to allow assignment: cursors, ‘select’ into a variable, while loops and so on. Hence, where such a programming construct has been introduced, it’s an attempt to make the language more complete (arguably over-complete — losing consistency and introducing side effects into otherwise side-effect-free languages is risky). Yet there is enough of an overlap to abstract out the concepts and learn from the maths downwards.
