“Use Gulp!” “Use Grunt!” If I wanted to gulp and grunt I’d go to 7-Eleven and down an entire Big Gulp Slurpee.

The snide remark above came from trying, time and time again, to find a simple CLI for various web development helpers. It is surprising just how difficult that is. If memory serves, I was looking for a CLI for PostCSS at the time. Other helpers I’d looked at mentioned their CLIs only in small print below the Gulp and Grunt instructions; PostCSS didn’t even list one. I later found out an interface was available, but it was maintained separately. Why didn’t I want to use Gulp or Grunt? Because I have used them in the past, and quite frankly it’s much easier and quicker to just use the shell. I will use them in existing projects, but for anything of my own I forego the unnecessary complexity.

Over-engineering

There was a time when making a new website required nothing more than opening a text editor, but let’s face it: the requirements for a website back then weren’t nearly what they are today. Today we’re encouraged to use all sorts of preprocessors, postprocessors, package managers, and everything in between. I find Sass, PostCSS, and Babel useful just like the next guy. However, I use them after learning how to do everything manually, and I use them as little as possible. We’re definitely reaching the point where we’re over-engineering the development of websites, and these layers of abstraction over HTML, CSS, and JavaScript are creating developers who don’t know how to write any of those. BEM is a good example; it exists simply because people want to ignore the CSS cascade, not understanding the cascade order despite its being spelled out in plain English in the specification. Instead of learning the order, they over-engineer a meta-syntax within class names. “Simplification of CSS” is used as an excuse when the reality is that the result becomes more difficult to maintain: the CSS turns into a collection of esoteric class names and the HTML is bloated with excessive class attributes.

To me the best examples of over-engineering are the Gulp and Grunt task runners. Just stop and think about what they are for a moment: nothing more than build scripts, in other words, command line programs used to build various aspects of a project. Here is the example Gulp provides:

var gulp = require('gulp');
var coffee = require('gulp-coffee');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var imagemin = require('gulp-imagemin');
var sourcemaps = require('gulp-sourcemaps');
var del = require('del');
 
var paths = {
  scripts: ['client/js/**/*.coffee', '!client/external/**/*.coffee'],
  images: 'client/img/**/*'
};
 
// Not all tasks need to use streams 
// A gulpfile is just another node program and you can use any package available on npm 
gulp.task('clean', function() {
  // You can use multiple globbing patterns as you would with `gulp.src` 
  return del(['build']);
});
 
gulp.task('scripts', ['clean'], function() {
  // Minify and copy all JavaScript (except vendor scripts) 
  // with sourcemaps all the way down 
  return gulp.src(paths.scripts)
    .pipe(sourcemaps.init())
      .pipe(coffee())
      .pipe(uglify())
      .pipe(concat('all.min.js'))
    .pipe(sourcemaps.write())
    .pipe(gulp.dest('build/js'));
});
 
// Copy all static images 
gulp.task('images', ['clean'], function() {
  return gulp.src(paths.images)
    // Pass in options to the task 
    .pipe(imagemin({optimizationLevel: 5}))
    .pipe(gulp.dest('build/img'));
});
 
// Rerun the task when a file changes 
gulp.task('watch', function() {
  gulp.watch(paths.scripts, ['scripts']);
  gulp.watch(paths.images, ['images']);
});
 
// The default task (called when you run `gulp` from cli) 
gulp.task('default', ['watch', 'scripts', 'images']);
The example Gulp provides

This is a mess. Isn’t this sort of thing what the shell is for? Many of the plugins loaded at the top are Gulp-specific wrappers that expose JavaScript interfaces to shell programs, and they are maintained separately from those programs’ own CLIs. With a shell script one could simply access the programs directly. A ridiculous amount of JavaScript is executed to do what the shell already does. This is a textbook example of over-engineering in my opinion: plugins which load their own plugins must be plugged into a task runner, which must itself be run, just to run tasks. Why?

If your excuse is that you don’t want to use bash, you’re in good company. Bash is icky. It is, however, the tool for the job. There are other shells, but bash is the easiest to find solutions for online. Familiarity with bash is a skill every developer should have, and knowing how to use the shell has benefits beyond just creating a build script. With Gulp and Grunt you have their APIs to learn. By contrast, every command line program has a similar syntax, and good command line tools have documentation built in, telling you which flags to use and in what order to place things. You also then have the entire shell at your disposal instead of needing to hunt down Gulp or Grunt JavaScript plugins on npm.
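What Gulp’s .pipe() chains emulate is something the shell already has: the pipe. As a trivial illustration using only standard utilities (the sample text is made up, and no build tools are involved):

```shell
# Each stage is an ordinary program with its own man page;
# the pipe wires them together with no plugin layer in between.
printf 'one\ntwo\nthree\n' | grep -v two | tr '[:lower:]' '[:upper:]'
# prints:
# ONE
# THREE
```

Swapping a stage means swapping a program, not finding a different plugin.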

Bash Task Runner

I began developing this website using a build script I rolled myself. Later in the process I discovered Bash Task Runner. It is simple. Seriously, just look at the code: it is just a few files. Using it makes working up a build script much easier. You are still just running a bash file, but runner’s code aids in this. You can name the file anything you want, and the gulpfile above would be something like this in bash:

#!/usr/local/bin/bash 
dirname=$(pwd)
PATH="$dirname/node_modules/.bin:$PATH"
 
# Include runner 
source node_modules/bash-task-runner/src/runner.sh
 
# Set options for globbing 
shopt -s globstar
 
task_clean() {
    rm -rf build
}
 
task_scripts() {
    mkdir -p build/js
 
    # Concatenate the files and put a newline between them 
    awk 'FNR==1{print ""}{print}' client/js/**/*.coffee |
    coffee --stdio --no-header --print |
    uglifyjs --compress --mangle --source-map-inline > build/js/all.min.js
}
 
task_images() {
    mkdir -p build/img
 
    for file in client/img/**/*; do
        outfile="$dirname/build/img/"$(basename "$file")
        case $(file --mime-type -b "$file") in
            "image/png")
                optipng -quiet -o5 -out "$outfile" "$file";;
            "image/jpeg")
                jpegtran -optimize -progressive -outfile "$outfile" "$file";;
        esac
    done
}
 
task_default() {
    task_clean
    runner_parallel scripts images
}
My quick conversion of the above example to bash

Aside from the awk command I use to concatenate the script files, everything should be self-explanatory. CoffeeScript and UglifyJS both have CLIs. Imagemin is a JavaScript interface to command line image optimizers like OptiPNG and jpegtran, with its own plugins for each. There are many other optimizers available through imagemin, but showing just those two was enough to get the idea across in my example above.
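To see what that awk invocation actually does, here is a small demonstration with two throwaway files (the file names are arbitrary):

```shell
# FNR is the line number within the current input file, so it resets
# to 1 for each file; the first rule prints a blank line before each
# file's first line, separating the concatenated files.
printf 'a\nb\n' > /tmp/one.coffee
printf 'c\n' > /tmp/two.coffee
awk 'FNR==1{print ""}{print}' /tmp/one.coffee /tmp/two.coffee
```

It also emits a blank line before the very first file, which is harmless for this purpose.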

One big issue I have with Gulp’s and Grunt’s syntax, and with other jQuery-ish syntaxes, is that while it looks cool and clean it isn’t really easy to read. Too much of how it works is abstracted into a series of daisy-chained object methods and closures. You wouldn’t write an English essay in only simple subject-verb-object sentences, so why would you write code you have to maintain and read yourself like that?

I use macOS, so the shebang points to /usr/local/bin/bash because Apple still ships version 3.2 and somehow refuses to update bash to a newer version. Bash Task Runner requires at least version 4.0 along with the GNU core utilities. Fortunately, with Homebrew installation is easy on macOS.
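For reference, the Homebrew install is a one-liner (note that coreutils installs the GNU tools with a `g` prefix unless you add its `gnubin` directory to your PATH):

```shell
# Install a current bash and the GNU core utilities.
brew install bash coreutils
```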

Unfortunately, the one thing Bash Task Runner lacks is a facility for watching files. The developer is aware of this shortcoming, and there are plans to implement the functionality. In the meantime there is fswatch. Since it is accessible through the shell, writing your own watcher isn’t difficult, and the code is reusable.
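As a sketch of what such a watcher could look like, here is a hypothetical task_watch for the script above. It assumes fswatch is installed and reuses the task_scripts and task_images functions defined earlier; the watched paths come from the gulpfile example.

```shell
task_watch() {
    # `fswatch -o` emits one line per batch of change events,
    # so each line read triggers one rebuild.
    fswatch -o client/js client/img | while read -r _; do
        task_scripts
        task_images
    done
}
```

A smarter version could watch each directory separately and rebuild only the affected task, mirroring gulp.watch above.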