A summary by Dan Luu on the question of whether objective advantages of statically typed languages (like having measurably fewer bugs, or solving problems in measurably less time) can be shown.
If I think about this, the authors of statically typed languages might not, at the beginning, even have claimed such advantages. Originally, the objective advantage was that compilers for languages like C or Pascal could run at all on computers like the PDP-11, which initially had only 4 K of memory and a 16-bit address space; and even later, C programs were much faster than the Lisp programs of that time. At that time, it was also considered an attribute of the programming language whether code was compiled to machine instructions or interpreted.
Today, with JIT compilation as in Java, and with the best Common Lisp implementations like SBCL coming within a stone’s throw of the performance of Java programs, this distinction is not as relevant any more.
Further, opinions might have been biased by comparing C to memory-safe languages; in other words, where actual productivity gains were perceived, their causes might have been confused.
The thing which seems to be more or less firm ground is that the fewer lines of code you need to write to cover a requirement, the fewer bugs the code will have. So more concise/expressive languages do have an advantage.
There are people who have looked at all the program samples in the above-linked benchmark game and have compared run-time performance and size of the source code. This leads to interesting and sometimes really unintuitive insights: there are in fact large differences in code size for the same task between programming languages, and a couple of quite different languages like Scala, JavaScript, Racket (PLT Scheme), and Lua come out quite well in terms of the ratio of size to performance.
But given all this, how can one assess productivity, or the time to get from the definition of a task to a working program, at all?
And the same kinds of questions arise for testing. Most people would agree nowadays that automated tests are worth their effort: they improve quality, shorten the time to get something working, and lead to fewer bugs. (A modern version of the Joel Test might include automated testing, but, spoiler: >!Joel’s list does not contain it.!<)
Testing in small units also interacts positively with a “pure”, side-effect-free, or ‘functional’ programming style… with the caveat, perhaps, that this style tends to push a program’s complex I/O functions to its periphery.
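To make that concrete, here is a minimal Rust sketch (my own toy example, not from the post): the pure `average` function can be tested exhaustively in isolation, while the I/O that feeds it stays in `main` at the program’s edge.

```rust
// Pure core: no I/O, no side effects, trivial to unit-test.
fn average(values: &[f64]) -> Option<f64> {
    if values.is_empty() {
        None
    } else {
        Some(values.iter().sum::<f64>() / values.len() as f64)
    }
}

// Impure periphery: all the I/O lives here.
fn main() -> std::io::Result<()> {
    let mut input = String::new();
    std::io::stdin().read_line(&mut input)?;
    let values: Vec<f64> = input
        .split_whitespace()
        .filter_map(|s| s.parse().ok())
        .collect();
    match average(&values) {
        Some(avg) => println!("average: {avg}"),
        None => println!("no numbers given"),
    }
    Ok(())
}

#[cfg(test)]
mod tests {
    use super::average;

    #[test]
    fn average_of_known_values() {
        assert_eq!(average(&[1.0, 2.0, 3.0]), Some(2.0));
    }

    #[test]
    fn empty_input_has_no_average() {
        assert_eq!(average(&[]), None);
    }
}
```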
It feels more solid to have a complex program covered by tests, yes, but how can this be confirmed in an objective way? And if it can, for which kinds of software is this valid? Are the same methodologies adequate for web programming as for industrial embedded devices or a text editor?
Note that this post is from 2014.
So what scientific evidence has emerged in the mean time?
We know with reasonable certainty that memory safety reduces memory bugs. This holds for dynamically and statically typed languages alike.
However, under the assumption that dynamically typed programs have at least a minimal amount of tests, we can’t say that static type checking is generally a better or more efficient approach.
I don’t know; I haven’t caught up on the research over the past decade. But it’s worth noting that this body of evidence is from before the surge in popularity of strongly typed languages such as Swift, Rust, and TypeScript. In particular, mainstream “statically typed” languages still had `null` values rather than `Option` or `Maybe`. The original author does mention that they want to try using Rust when it becomes more stable.
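To illustrate the difference (a minimal sketch of my own, not from the post): in Rust, an absent value is represented as `Option`, so the compiler forces the missing case to be handled before the value can be used, where older mainstream statically typed languages would have handed back a possibly-`null` reference.

```rust
use std::collections::HashMap;

fn main() {
    let mut ages: HashMap<&str, u32> = HashMap::new();
    ages.insert("alice", 34);

    // HashMap::get returns Option<&u32>, not a nullable pointer, so the
    // compiler makes us handle the missing case explicitly.
    match ages.get("bob") {
        Some(age) => println!("bob is {age}"),
        None => println!("no entry for bob"), // no NullPointerException possible
    }

    // Or with a default, still without any null in sight:
    let age = ages.get("bob").copied().unwrap_or(0);
    println!("bob (defaulted): {age}");
}
```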
This is why any published work needs a date annotation.
Do you mean Dan Luu, or one of the studies reviewed in the post?
Dan Luu. From summary of summaries:
Well, Lisp, Scheme, and many more are strongly typed as well. The difference here is that they are dynamically and strongly typed: evaluation acts as if types are not checked before run time.
This essentially means that the type of a variable can change at run time. And this is less relevant for functional or expression-oriented languages like Scheme, Scala, or Rust, where a variable is in most cases rather a label for an expression and does not change its value at all.
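As a small illustration of variables as labels for expressions (my own sketch, in Rust since it was just mentioned): bindings are immutable by default, and “changing the type” means introducing a new binding via shadowing rather than mutating the old one.

```rust
fn main() {
    // `total` is a label for the value of an expression; it never changes.
    let total = (1..=10).sum::<i32>();

    // Shadowing creates a *new* binding, here even with a new type (String);
    // the old i32 `total` simply goes out of scope instead of being mutated.
    let total = format!("total = {total}");
    println!("{total}");
}
```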
That again is more a feature of functional languages, where most things evaluate to expressions. Clojure is an example of this: it is dynamically and strongly typed, and although it runs on the JVM, it does not raise NullPointerExceptions (the exception, so to speak, is when calling into Java).
And in most cases, said languages use type inference and also garbage collection (except Rust, of course, which has no garbage collector). This in turn yields clear ergonomic advantages, but these have little to do with static or dynamic typing.
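For example (again a toy sketch of mine), Rust’s type inference gives much of the “feels dynamic” ergonomics while everything is still checked statically:

```rust
fn main() {
    // No element type written anywhere: the compiler infers Vec<i32> for
    // `squares` from how the values are used below.
    let squares: Vec<_> = (1..=5).map(|n| n * n).collect();
    let sum: i32 = squares.iter().sum();
    println!("{squares:?} sum to {sum}");
}
```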
Yeah, I understand that Option and Maybe aren’t new, but they’ve only recently become popular. IIRC several of the studies use Java, which is certainly safer than C++ and is technically statically typed, but in my opinion doesn’t do much to help ensure correctness compared to Rust, Swift, Kotlin, etc.