Imperative vs. Declarative. Pure vs. Impure. Static vs. Dynamic.
Terminology like this is sprinkled throughout programming blog posts, conference talks, papers, and text books.
But don’t be turned off by this jargon. Let’s jump right in and break some of these concepts down, so you can understand what all these developers around you are talking about.
This is about when type information is acquired: either at compile time or at runtime.
You can use this type information to detect type errors. A type error is when a value is not of the expected type.
Static type checking is the process of verifying the type safety of a program by analyzing its source code. In other words, type checking happens at compile time, allowing type errors to be detected sooner.
Dynamic type checking is the process of verifying the type safety of a program at runtime. With dynamic type checking, type errors surface at runtime.
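A quick Python sketch of dynamic checking (the function name is ours): the type error only surfaces when the offending call actually executes.

```python
# Python checks types dynamically: this type error is not caught until
# the bad call actually runs.
def add_one(value):
    return value + 1

print(add_one(41))        # fine: 42
try:
    add_one("forty-one")  # type error surfaces only now, at runtime
except TypeError as exc:
    print(f"caught at runtime: {exc}")
```

In a statically checked language, the compiler would reject the string-valued call before the program ever ran.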
It’s important to note strong vs. weak typing doesn’t have a universally-agreed-upon technical meaning. For example, even though Java is statically typed, every time you use reflection or a cast, you’re deferring the type check to run time.
Similarly, most strongly-typed languages will still automatically convert between integers and floats. Hence, you should avoid using these terms because calling a type system “strong” or “weak” by itself does not communicate very much.
In a strongly typed language, the type of a construct does not change — an int is always an int, and trying to use it as a string will result in an error.
Weak typing means that the type of a construct can change depending on context. For example, in a weakly-typed language, the string “123” may be treated as the number 123 if you add another number to it.
Weak typing also generally means the type system can be subverted (invalidating any guarantees), because you can cast a value of one type to another.
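Python, for instance, behaves as a strongly typed language here: mixing an int with a string raises an error rather than coercing, and any conversion has to be asked for explicitly. A minimal sketch:

```python
# Python is strongly typed: an int stays an int, and mixing it with a
# string raises rather than silently converting.
try:
    result = "123" + 1
except TypeError:
    result = "no implicit coercion"

# Explicit casts are still allowed -- the conversion just has to be requested.
explicit = int("123") + 1  # 124
print(result, explicit)
```

A weakly typed language such as JavaScript would instead coerce, evaluating `"123" + 1` to the string `"1231"`.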
When an object is not modifiable after it has been created, you can say it's "immutable," which is a fancy word for "unchangeable." This means you'll instead allocate a new value for every change.
When you can modify an object after its creation, it’s “mutable.” When you have a reference to a mutable object, for instance, the contents of the object can change.
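In Python, for example, lists are mutable and tuples are immutable (a sketch; variable names are ours):

```python
# Lists are mutable: mutating through one reference is visible through
# every other reference to the same object.
mutable = [1, 2, 3]
alias = mutable            # second reference to the same list
alias.append(4)
print(mutable)             # [1, 2, 3, 4] -- the change shows through both names

# Tuples are immutable: "changing" one allocates a brand-new tuple.
immutable = (1, 2, 3)
bigger = immutable + (4,)
print(immutable, bigger)   # (1, 2, 3) (1, 2, 3, 4)
```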
A pure function has two qualities:
1. Given the same inputs, it always returns the same output.
2. Its evaluation has no side effects (it doesn't mutate external state, perform I/O, and so on).
Any function that does not meet those two requirements for a pure function is "impure."
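A minimal sketch in Python (function names are ours): the first function is pure, the second fails both tests.

```python
# Pure: same input, same output, no side effects.
def area(radius):
    return 3.14159 * radius * radius

# Impure: its result depends on, and mutates, external state.
total = 0
def add_to_total(amount):
    global total
    total += amount  # side effect: mutates module-level state
    return total

print(area(2.0))                         # always the same for radius 2.0
print(add_to_total(5), add_to_total(5))  # 5, then 10 -- same input, different output
```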
Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself.
In other words, an expression is only evaluated when another expression that depends on it is evaluated.
Laziness allows programs to calculate data structures that are potentially infinite without crashing.
Eager evaluation — also known as strict evaluation — always fully evaluates function arguments before invoking the function. In other words, an expression is evaluated as soon as it is bound to a variable.
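Python's generators give a taste of laziness (a sketch; the names are ours): an infinite stream of squares is fine as long as only finitely many elements are ever demanded.

```python
from itertools import count, islice

# Lazy: this describes an infinite stream of squares, but nothing is
# computed until a value is actually requested.
squares = (n * n for n in count(1))

# Only the first five elements are ever evaluated.
print(list(islice(squares, 5)))  # [1, 4, 9, 16, 25]

# Eager, by contrast, computes the whole list up front before it's used.
eager = [n * n for n in range(1, 6)]
```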
Declarative programs express a set of operations without revealing how they’re implemented, or how data flows through them. They focus on “what” the program should accomplish (by using expressions to describe the logic) rather than “how” the program should achieve the result.
One example of declarative programming is SQL. SQL queries are composed of statements that describe what the outcome of a query should look like, while abstracting over the internal process for how the data is retrieved:
SELECT EMP_ID, FIRST_NAME, LAST_NAME
FROM EMPLOYEES -- table name is illustrative
WHERE CITY = 'SAN FRANCISCO'
ORDER BY EMP_ID;
Here’s an example of declarative code:
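A minimal declarative sketch in Python (the task and names are ours): the comprehension states what the result contains, not how to build it step by step.

```python
# Declarative style: describe *what* we want -- the doubled even numbers --
# and let the comprehension handle iteration and accumulation.
numbers = [1, 2, 3, 4, 5, 6]
doubled_evens = [n * 2 for n in numbers if n % 2 == 0]
print(doubled_evens)  # [4, 8, 12]
```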
Imperative programming focuses on describing how a program should achieve a result by using statements that specify control flow or state changes. It uses a sequence of statements to compute a result.
Here’s an example of imperative code:
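An imperative sketch in Python (the task and names are ours): doubling the even numbers with an explicit loop, branch, and mutation.

```python
# Imperative style: spell out *how* -- step-by-step control flow and
# state changes to an accumulator.
numbers = [1, 2, 3, 4, 5, 6]
doubled_evens = []
for n in numbers:
    if n % 2 == 0:
        doubled_evens.append(n * 2)
print(doubled_evens)  # [4, 8, 12]
```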
State is a sequence of values, calculated progressively, that contains the intermediate results of a computation.
Stateful programs have some mechanism to keep track of and update state. They have some memory of the past, and remember previous transactions that may affect the current transaction.
Stateless programs, on the other hand, don't keep track of state. There's no memory of the past. Every transaction is performed as if it were being done for the very first time. Stateless programs will give the same response to the same request, function, or method call — every single time.
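A sketch in Python (class and function names are ours): the counter is stateful, the plain function is stateless.

```python
# Stateful: the counter remembers previous calls, so identical calls
# can return different results.
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

# Stateless: the answer depends only on the arguments, never on history.
def add(a, b):
    return a + b

c = Counter()
print(c.increment(), c.increment())  # 1 2 -- same call, different results
print(add(2, 3), add(2, 3))          # 5 5 -- same call, same result
```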
Functional programming is a paradigm that places a major emphasis on the use of functions. The goal of functional programming is to use functions to abstract control flows and operations on data, and to avoid side effects.
So functional programming uses pure functions and avoids mutable data, which in turn provides referential transparency.
A function has referential transparency when you can freely replace an expression with its value and not change the behavior of the program. Said a bit differently: for a given input, it always returns the same results.
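A small Python illustration (names are ours): swapping the expression for its value changes nothing observable.

```python
# A referentially transparent expression can be replaced by its value
# without changing the program's behavior.
def square(x):
    return x * x

a = square(3) + square(3)  # the expression...
b = 9 + 9                  # ...replaced by its value: same behavior
print(a == b)              # True
```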
The Object Oriented programming paradigm places major emphasis on the use of objects. This results in programs that are made out of objects that interact with one another. These objects can contain data (in the form of fields or attributes) and behavior (in the form of methods).
It’s a style of partitioning (or encapsulating) the state of a program via objects to make analyzing the effect of changes tractable.
Moreover, object-oriented programs use inheritance and/or composition as their main mechanisms for code reuse. Inheritance means that a new class can be defined in terms of existing classes by specifying just how the new class is different. It represents an "is-a" relationship (e.g. a Bird class which extends an Animal class). Composition, on the other hand, is when classes contain instances of other classes that implement the desired functionality. It represents a "has-a" relationship (e.g. a Bird class has an instance of a Wing class as its member).
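A sketch of both mechanisms in Python, reusing the Bird/Animal/Wing example (method names are ours):

```python
# Inheritance: a Bird *is an* Animal.
class Animal:
    def describe(self):
        return "an animal"

class Wing:
    def flap(self):
        return "flap"

class Bird(Animal):
    def __init__(self):
        # Composition: a Bird *has a* Wing.
        self.wing = Wing()

    def describe(self):
        return "a bird that goes " + self.wing.flap()

print(Bird().describe())  # a bird that goes flap
```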
Polymorphism is also an important mechanism for code reuse in object oriented programming. It’s when a language can process objects differently depending on their data type or class.
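A Python sketch (the classes are illustrative): one call site, different behavior per class.

```python
# Polymorphism: the same method call behaves differently depending on
# the class of the object it is invoked on.
class Dog:
    def speak(self):
        return "woof"

class Cat:
    def speak(self):
        return "meow"

# One piece of code handles both types through the shared interface.
sounds = [animal.speak() for animal in (Dog(), Cat())]
print(sounds)  # ['woof', 'meow']
```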
Deterministic programs always return the same result any time they’re called with a specific set of input values and the same given state.
Nondeterministic programs may return different results each time they’re called, even with the same specific set of input values and initial state.
Nondeterminism is a property of any concurrent system — that is, any system where multiple tasks can happen at the same time by running on different threads. A concurrent algorithm that is mutating state might perform differently each time it runs, depending upon which thread the scheduler decides to execute first.
thread X=1 end
thread X=2 end
The execution order of the two threads is not fixed. We don’t know whether X will be bound to 1 or 2. The system will choose during the program’s execution, and it’s free to choose which thread to execute first.
Another example of non-determinism:
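A Python sketch of the same kind of race (mirroring the two-thread pseudocode above): two threads each bind a shared X, and the scheduler decides which write lands last.

```python
import threading

# Two threads race to bind X; which write lands last depends on the
# scheduler, so the final value is nondeterministic.
X = 0

def set_to(value):
    global X
    X = value

threads = [threading.Thread(target=set_to, args=(v,)) for v in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(X)  # 1 or 2 -- either outcome is possible
```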
As always, your feedback is really important to me. I read and consider every single comment, so please don’t shy away from responding!
Finally, you can also check out the Prezi presentation I built for this article.
And thank you to Kent Beck for his input on this.