Evolution of the lib-framework dichotomy

The debate about the relative merits of frameworks versus libraries will likely remain a fixture in software development theory for a long time to come. Nevertheless, there are trends afoot that suggest this dichotomy is not as rigid as the OO theorists of the 90s would have us think.

What does it mean to develop software?

Consider the following:

In principle, in the general-purpose computing world, once you check a solution to a certain problem into your version control system, you have solved that problem for everyone, everywhere, for all eternity.

You may counter that if that were true, there would be only one Getting Things Done app for Android rather than a hundred. That multiplicity, however, springs more from disagreement about what the problem actually is.

The twelve-factor app

I do not normally do "look-what-I-found" posts, but sometimes you encounter an article, or even a whole community, that makes you go "This is exactly what I feel!"

The Twelve-Factor App is such an article. It collects, in one compact article, several relevant points about writing software-as-a-service that I have been trying to make myself. In particular, it articulates the relationship between things like dependencies, releases and deploys, which I have long felt is marginalized in the broader software development discussion.

Shell scripting patterns: configuration and option parsing

Adhering to software patterns lowers the threshold for doing the right thing, even when the quick hack beckons. This is particularly true of option and configuration handling in shell scripts. Essentially, the top of every script should look like this:
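The snippet the post builds toward is missing here; the following is a minimal sketch of what such a preamble might look like, assuming getopts-style options and an rc file in the user's home directory (the option names and the config path are invented for illustration):

```shell
#!/bin/bash
set -e

# Defaults, overridden first by the config file, then by options.
verbose=false
output=/tmp/out

# Source user configuration if present (hypothetical path).
config="$HOME/.myscriptrc"
[ -f "$config" ] && . "$config"

# Parse command-line options with getopts; options win over config.
while getopts "vo:" opt; do
  case "$opt" in
    v) verbose=true ;;
    o) output="$OPTARG" ;;
    *) echo "usage: $0 [-v] [-o output]" >&2; exit 2 ;;
  esac
done
shift $((OPTIND - 1))
```

The layering matters: hard-coded defaults first, then configuration, then command-line options, so the most specific source always wins.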

Shell scripting patterns: paths, file globs and current dir

The basis of this pattern is simple: never change the current working directory in your scripts. In fact, never assume that you even have a working directory, unless your command explicitly operates on the local directory (e.g. find). Consequently, you should never use relative paths in your script unless the caller handed you one; since your script does not change the working directory, it is safe to use whatever path the caller gave you as-is.
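A small sketch of how this plays out in practice (the resource path is invented): the script locates its own directory once, refers to bundled files by absolute path, and leaves any caller-supplied relative path alone. The cd happens only inside a command substitution subshell, so the script's own working directory never changes.

```shell
#!/bin/bash
set -e

# Locate the script's own directory without touching the caller's
# working directory ($BASH_SOURCE is bash-specific).
script_dir=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)

# Refer to bundled resources by absolute path, never by cd-ing around.
template="$script_dir/templates/report.tmpl"   # hypothetical resource

# A path from the caller may be relative; that is fine exactly because
# we never change the working directory ourselves.
input=${1:-.}
ls "$input" > /dev/null
```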

Shell scripting patterns: errors and failure

In my opinion, error handling is the area where shell script most clearly shows its age. The idea that the shell will, by default, ignore commands signalling an error state (i.e. a non-zero exit code) seems very strange when viewed through the lens of modern programming practice. Thus, all scripts should start with this instruction, which tells the shell to care about exit codes:

set -e

Now, the following code will no longer result in tears when /destination does not exist:

Shell scripting patterns: returning from functions

One shortcoming of shell scripting is the inability to return anything of significance from a shell function. Consider a function that returns the youngest file in a directory. The basic moving part is ls -tr /da/dir | tail -1. Abstracting this into a function seems problematic, given that we cannot return a value. However, functions are very similar to external commands in bash, so you can do this:

latest_file() {
  ls -tr "$1" | tail -1
}
... and then simply invoke it thus:
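The invocation itself is missing here; presumably it used command substitution, the same mechanism you would use to capture the output of an external command. A self-contained sketch (the test directory and file names are invented):

```shell
latest_file() {
  ls -tr "$1" | tail -1
}

# Build a directory with two files whose timestamps differ (touch -t
# sets the modification time explicitly, keeping the demo deterministic).
dir=$(mktemp -d)
touch -t 202001010000 "$dir/older"
touch -t 202101010000 "$dir/newer"

# Capture the function's stdout with command substitution, just as
# for an external command.
newest=$(latest_file "$dir")
echo "$newest"    # → newer
```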

Shell scripting patterns - a personal perspective

It is often said that no serious development project should ever use shell script.

While it is certainly true that there are some serious shortcomings in the bash strain of shells, this is not to say that they cannot be overcome. Like so much else in programming, it is mostly a matter of finding good patterns that contain those shortcomings.