Introducing RunPage

RunPage is a Jupyter-like portal. If you are not familiar with Jupyter, it is a server which lets you create documents with embedded Python code. The documents can contain normal document content intermixed with Python code, and that code can render its output directly onto the document. In RunPage these documents are referred to as pages.

Differences from Jupyter

The first and biggest difference is that RunPage lets you embed Javascript code instead of Python. Technically Jupyter can run code in many languages, depending on the “kernel”, but all that code runs on the server side. In RunPage the code runs in a “sandboxed” environment inside your own browser. No roundtrip to the server is involved, which makes it both faster and more secure. RunPage also allows you to embed a file selector in your page, which you can reference in your JS code. The selected files (Excel, text, etc.) can then be processed using APIs provided by RunPage right in your browser, without uploading them to a server for processing.

RunPage also provides a proper block editor, so that the article writing process is as frictionless as possible.

RunPage is a fully hosted solution with the concepts of teams, sharing within teams, and sharing with the public. The provided APIs are designed to be simple and intuitive. For example, unlike Jupyter, the last statement in a code-block is not automatically rendered as output on the document. In RunPage each code-block (referred to as a script-block) behaves like an async function body, so only the items returned from that function get rendered on the page. In fact, returning an array renders it as a series of multiple outputs.
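For example, a script-block body might look like the following minimal sketch (plain Javascript; the only RunPage-specific behavior assumed is the rendering rule just described):

// Everything before the return is ordinary Javascript.
const squares = [1, 2, 3, 4].map(n => n * n);

// Only what is returned gets rendered on the page.
// Returning an array renders each element as a separate output.
return ['Squares of 1 to 4:', squares.join(', ')];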

Head over to run.applegrew.com for a free account.

Calculating Amortisation Schedule of your loans

Be it a home loan, a car loan or any other loan, they all follow the same style of calculation. These calculations are non-trivial, and they become more complicated if you are going to make prepayments.

To make it all easy, I have built an app to do the hard work for you. You will get insight into how much interest you are really paying on your “low interest” rate loans, and how a small prepayment made early can save you big money. It also gives you visibility into exactly how much interest vs principal you are repaying with each EMI.

If you are new to this then the last statement might not be very clear to you. Each month you pay a fixed amount as EMI (Equated Monthly Instalment); however, each of those EMIs pays off part of the principal (the loan amount you borrowed) and part of the overall interest. The interesting thing is that the percentage of principal and the percentage of interest you pay in each EMI is not fixed. Towards the start of the loan the interest part is larger and the principal part is smaller. As the loan progresses their ratio gradually approaches 1:1 near the middle of your loan tenure, then it flips and the principal portion becomes larger than the interest portion. Eventually only principal is left, at which point the loan is fully repaid.

That is why making lump sum prepayments towards the end of the loan tenure is not too beneficial: you are only repaying the principal, which you need to repay anyway. Where you can save is on the interest part. Always keep in mind that the amount of interest you pay is directly proportional to the amount you borrowed and the time you take to return it.
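If you are curious about the underlying arithmetic, here is a minimal Javascript sketch of the standard EMI formula and the month-by-month interest/principal split. It is a simplified model (no prepayments, fees or rounding); the calculator below does the real work.

// principal = amount borrowed, annualRate in percent, months = tenure in months
function amortisationSchedule(principal, annualRate, months) {
    var r = annualRate / 12 / 100; // monthly interest rate
    // Standard EMI formula: P * r * (1 + r)^n / ((1 + r)^n - 1)
    var emi = principal * r * Math.pow(1 + r, months) / (Math.pow(1 + r, months) - 1);
    var schedule = [], balance = principal;
    for (var m = 1; m <= months; m++) {
        var interest = balance * r;         // interest accrued this month
        var principalPart = emi - interest; // the rest of the EMI repays principal
        balance -= principalPart;
        schedule.push({ month: m, emi: emi, interest: interest,
                        principal: principalPart, balance: Math.max(balance, 0) });
    }
    return schedule;
}

// Example: 1,000,000 borrowed at 9% for 20 years (240 months).
var sched = amortisationSchedule(1000000, 9, 240);
console.log(sched[0]);   // first EMI: interest ≈ 7500, principal ≈ 1497
console.log(sched[239]); // last EMI: interest ≈ 67, principal ≈ 8930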

Play with different values in the calculator below and notice the pattern in the graph.

Wifi STB Remote+ (JAVA) for laptops and desktops

My Android App Wifi STB Remote+, which works as a remote for TataSky+ HD Settop boxes, now has a laptop/desktop counterpart.

Wifi STB Remote+ JAVA – Main screen

Wifi STB Remote+ JAVA – All channels screen

This application is built with Java and packaged as a Jar file. (Download link below.) So, Java (JRE 8 or JDK 8) needs to be installed on the computer where you want to run it.

Please note this is an experimental application. Unlike its Android counterpart, this application is not very chatty or articulate about the issues it faces while connecting to the STB (Set Top Box).

To start this app, usually just double clicking on it should work. If that doesn’t work, then you need to open a command prompt, use cd to go to the directory where you have kept this app, and then run the command – java -jar WifiStbRemotePlus.jar.

To use the app, make sure your computer is connected to the same Wifi network as the STB. Then click on “Connect Device”. When the connection is successful you will see a “Connected” message. If you keep seeing the “Connecting…” message for a long time, then something is not right and the app is unable to reach the STB. In that case fix the issue and click “Connect Device” again. Note that, unlike the Android app, this app does not let you choose the STB. It simply connects to the first STB it finds.

Download WifiStbRemotePlus jar file

Important Javascript Fundamentals for OOP Developers

In this guide I have tried to put together all important nuts and bolts of Javascript. I had a really hard time learning JS, since all these parts were scattered over the net. I gradually found them over the years. I hope I have covered all of them. Please let me know in the comments if you feel I missed something important or something is incorrect.

Despite the title, this guide should be helpful to all developers. However, you should be familiar with the basics of JS before this guide can be helpful. It is more of a collection of JS concepts.

Loosely typed

Yup! No scratching your head to decide if this should be a float or double, int or short or long, or even a String. Just declare the variable using – var my_variable; and be done with it. This is the easiest concept of the lot to grasp.
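The same variable can even hold values of different types over its lifetime:

var my_variable;           // declared, currently undefined
my_variable = 42;          // now holds a number
my_variable = 'forty two'; // now holds a string – no type declarations anywhere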

Always ‘use strict’;

You can use the magic phrase "use strict"; at the start of a function or a JS file to turn on strict mode.

Just like for any other language, JS’s syntax, semantics and APIs are first proposed in a language specification document, which, when approved, is ready to be implemented by all browsers. This helps ward off incompatible implementations by different browsers and makes our, JS developers’, life easier. What a plague of incompatibility looks like can be seen in the section ‘DOM is not JS’.

For JS, the language specification document is named ECMAScript. The version of JS we see in modern browsers is based on ECMAScript 5. This specification describes a stricter variant of JS. Frankly, the non-strict version of JS allows and encourages extremely sloppy coding practices, which at the end of the day result in one hell of a messy product. The “strict” mode is much cleaner, and all self-respecting JS developers should be aware of it and must use it.

A full list of restrictions is available on MDN, but the most important one I feel is that all variables must be declared before use in strict mode.

So,

function f() {
    'use strict';
    a = 10; // assignment to an undeclared variable
    return a;
}

will result in an error in strict mode (a ReferenceError), provided a is not already defined globally. In non-strict mode the assignment will merrily create a variable a in the global scope and continue. More about scopes later. However, I can give you a demonstration of this right away.

function f1() {
    a = 12;
    return a;
}
function f2() {
    return a;
}
alert(f1());
alert(f2());

Try running this code on jsfiddle.net. In both alerts you will see 12. This is because in both the functions a is in global scope.

Another important point to remember is that once strict mode is enabled, it can’t be disabled. Strict mode can be specified for a particular JS file, in which case it is enabled for all code in that file; or it can be enabled selectively inside individual functions.

Semi-colon is not optional

You must have read somewhere that in JS semi-colons at the end of a statement are not required. However, this does not mean that there is absolutely no need for semi-colons. The language interpreter actually tries to guess where a semi-colon should have been and you missed it, and continues as if it were there. I loathe this “feature”. Sometimes it results in hard-to-find bugs, and it is allowed in strict mode too. >:(

function avg(a, b) {
    return
        (a + b) / 2
}
console.log(avg(2, 3))

What will be printed on the console? It will be undefined! An unambiguous language like C or Java would have no problem here, but JS does, because of its overzealous attempt to “guess” where you need a semi-colon. In this case, JS “sees” this code as:-

function avg(a, b) {
    return;
        (a + b) / 2;
}
console.log(avg(2, 3));

However, if you wrote it like:-

function avg(a, b) {
    return (a
          + b) / 2
}
console.log(avg(2, 3))

Then the result would have been correct. In this case, if JS tried to insert a semi-colon after (a then that would result in invalid syntax, as the ( needs to be closed.

The JS syntax specification goes into details about how to play this guessing game so that the guesswork works consistently across all implementations of JS – https://262.ecma-international.org/6.0/#sec-automatic-semicolon-insertion.

Hoisting and Scoping in JS

Unlike C, C++ or Java, JS has only two simple scope types – global and function. So, in JS, for, if, while, etc. do not define a scope block. So, code like:-

function f() {
    if (someCondition) {
        var a;
    }
}

is actually treated as:-

function f() {
    var a;
    if (someCondition) {
        // ...
    }
}

This behavior of JS is known as hoisting. Like hoisting a flag, it hoists all variable declarations to the top of that scope.

Take another example.

function test() {
    'use strict';
    console.log(a);
    console.log(b);
    //console.log(x);
    var a = 10, b = 10;
    console.log(a);
    console.log(b);
}
test();

This is a unique example. The output in this case will be:-

undefined
undefined
10
10

However, if you uncomment the line console.log(x), you will end up with the error – ReferenceError: x is not defined. This is because a and b are hoisted to the top of the function, so they exist when the console.log statements run; however, they have not yet been assigned values.

function test() {
    'use strict';
    console.log(a);
    console.log(b);
    var a = b = 10;
    console.log(a);
    console.log(b);
}
test();

Notice the var line. In this case not only will the console.log(b) lines error out, but so will the var line, provided the outer scope doesn’t already have a variable b defined. This is because here b is part of an expression, so var does not declare b, only a.

All variables defined outside function blocks (be it in JS files or <script> blocks) are in the global scope. Since there is only one global scope, they are all accessible everywhere.

About functions

Functions are also objects

I will re-state this point throughout this guide because it is important to understand: functions are objects of type Function. Like any other object they too have methods! And like any object they can be defined anywhere, returned from other functions, passed as arguments, etc.

Take this example.

function gen() {
    return function ans(factor) {
        return 2 * factor;
    };
}

Does it look confusing? If yes, then let’s substitute the returned function by a variable.

function gen() {
    // ...
    return f;
}

Looks better? Since functions are just mere objects we can do any of the following.

function gen() {
    var f = function ans(factor) {
        return 2 * factor;
    };
    return f;
}

Or

function gen() {
    function ans(factor) {
        return 2 * factor;
    };
    return ans;
}

The assignment analogy

When you name a function (taking function f as example), like:-

function f(factor) {
    return 2 * factor;
};

then that is almost equivalent to:-

var f = function (factor) {
    return 2 * factor;
};

I say ‘almost’, since even though…

f(2);
var f = function (factor) {
    return 2 * factor;
};

will error out saying – TypeError: f is not a function, since f is actually undefined. But,…

f(2);
function f(factor) {
    return 2 * factor;
};

will not error out. Since, similar to var, function definitions too are hoisted.

Functions are not polymorphic

You cannot give two functions the same name and hope to invoke either of them based on the parameter types. The function defined later will overwrite the previous one. After all, if you follow the assignment analogy, the next function definition simply reassigns the same variable to its own function object.

function ans(f1, f2) { ... }

function ans(f1) { ... } // This replaces the previous definition.

But the point to note is, all arguments in functions are always optional.

function ans(a, b) {
    //...
}
ans(2); //ans will be invoked with a = 2, and b = undefined

Function return

In a function you may choose to return any data or nothing at all.

function f() {
    if (cond1) {
        // Returns an object
        return {
            a: 10
        };
    } else if (cond2) {
        // Returns undefined
        return;
    } else if (cond3) {
        // Returns a number.
        return 1;
    }
}

What if all the conditions fail? Similar to cond2, this will return undefined.

Closure

JS has the power of lambda. Simply put lambdas are anonymous functions. This has proved to be one of the core pillars of the language. Now this has been introduced even into Java 8.

All functions in JS have access to their outer scope, be it another function or the global scope. A function is also able to retain that outer scope even after the outer function has finished executing. This concept of hanging onto the outer scope is called a closure.

Java developers will be familiar with the concept of final, where anonymous inner classes have access to final variables in the outer scope and hang onto them. This is like a closure, but not 100%, since a closure requires that the whole outer scope be captured (although JS interpreters optimise their implementations and only hold onto the variables actually being referenced). Also, in a true closure you are allowed to update the values of variables in the outer scope.

With this knowledge can you guess the output of the following?

function adder(factor) {
    return function (a) {
        return a + factor;
    };
}
var adder2 = adder(2);
console.log( adder2(5) );

If you guessed 7 then that is correct. The adder2 variable refers to a function generated by adder which always adds 2 to any number passed to it.

If you find it difficult to understand then, this is what adder2 actually is:-

adder2 = function (a) {
    return a + 2;
};

Now make a guess for the following.

function gen() {
    var i = 1, f = [];
    for (; i <= 2; i++) {
        f.push(function (a) { return a + i; });
    }
    return f; // Returns an array with two functions in it.
}
var fs = gen();
console.log( fs[0](5) );
console.log( fs[1](5) );

If your answer is not 8 and 8 then it is wrong! fs[0] and fs[1] are the functions generated inside gen‘s for-loop. Remember that both these functions hold onto the same outer scope, not the values of i at the time they were created. When the for-loop ends the value of i is 3. So, both functions add 3 to 5, not 1 and 2.
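If you actually wanted each generated function to capture the value of i at the moment it was created, one common ES5-era fix is to introduce an extra function scope, for example:

function gen() {
    var i = 1, f = [];
    for (; i <= 2; i++) {
        f.push((function (captured) { // immediately invoked function
            return function (a) { return a + captured; };
        })(i));                       // captures the current value of i
    }
    return f;
}
var fs = gen();
console.log( fs[0](5) ); // 6
console.log( fs[1](5) ); // 7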

Truthy and Falsy

Much like C and C++, but unlike Java, JS has a wide range of values that can be treated as truthy or falsy. All objects, non-empty strings and non-zero numbers are treated as truthy; whereas the empty string, zero, NaN, null and undefined are treated as falsy.

undefined is a special value. All variables which have not been assigned any value have the value undefined. Clear? 🙂 Similarly, all functions which do not return a value actually return undefined. In fact, undefined is itself a predefined value you can refer to, so the following code is valid.

var a = undefined;

This is actually equivalent to:-

var a;

Value coercing

In JS, when you try to do something impossible with values, JS tries its best to make them compatible and come up with some meaningful result.

For example: !0 is actually the boolean true, since ! can work only with boolean values and 0, when coerced into a boolean, is false. '2' * 1 is actually the number 2, since * cannot work on strings. But '2' + 1 is the string '21', since, because of the presence of one string, the number is coerced into a string.

Here is a tip: since !0 is true, you can use this for a neat trick – var hasChildren = !!children.length;. This will set hasChildren to a proper boolean value.

Prototype based programming

Unlike in C, C++ or Java, functions in JS are actually objects and, as an OOP developer would say, instances of the class Function. However, there are no classes in JS, just constructors. Constructors create objects by cloning another object; so, loosely speaking, all functions in JS are clones of Function. Only functions are allowed to be constructors, i.e. the new operator can be applied only to them.

In words of Douglas Crockford: you make prototype objects, and then … make new instances. Objects are mutable in JavaScript, so we can augment the new instances, giving them new fields and methods. These can then act as prototypes for even newer objects. We don’t need classes to make lots of similar objects….Objects inherit from objects. What could be more object oriented than that?

JS supports two kinds of object creation – by cloning an existing object (using Object.create(otherObj)) or ex nihilo (“from nothing”, using Object.create(null)). By the way, {} is short-hand for Object.create(Object.prototype), and [] is short-hand for new Array().

Actually, Object.create(obj) creates a new object (think of it as an empty shell) where obj is its prototype (which gives content to that shell). So, it doesn’t actually clone obj; instead it sets obj as the new object’s prototype. As the literal meaning suggests, a prototype is an object from which the main object derives its properties and methods. However, you are allowed to directly add any property or method to the main object too.

Object.prototype itself is an ex nihilo object which other objects inherit from, including Function.prototype. The prototype of an object is itself an object and can have its own prototype, forming a chain; more on this later. The standard way to get the prototype of an object is Object.getPrototypeOf(obj). However, IE8 and below do not implement this. The non-standard way (also not supported by IE) is the __proto__ property. For IE and others you can use obj.constructor.prototype.
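As a small illustration of these calls:

var vehicleProto = { honk: function () { return 'Honking!'; } };
var car = Object.create(vehicleProto); // vehicleProto becomes car's prototype
car.color = 'red';                     // properties can still be added directly on car

console.log(car.honk());                                  // 'Honking!' – found via the prototype
console.log(Object.getPrototypeOf(car) === vehicleProto); // true (standard way)
console.log(car.__proto__ === vehicleProto);              // true (non-standard way)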

new operator

You can guess it. Similar to Java, new Foo() will create a new object of type Foo. When I say it is of type Foo, it means the object has its prototype set to Foo.prototype. As you will recall, you can do the same using Object.create() too. So, new Foo() is almost equivalent to Object.create(Foo.prototype). I say almost, since in the former case the Foo function is executed before the created object is returned; in the latter case the Foo function is not executed.
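A quick way to see the difference:

function Foo() {
    this.created = true; // runs only when Foo is actually executed
}

var a = new Foo();                    // Foo runs
var b = Object.create(Foo.prototype); // Foo does not run

console.log(a.created); // true
console.log(b.created); // undefined
console.log(a instanceof Foo, b instanceof Foo); // true true – both have Foo.prototype in their chain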

What is this?

This is one of the primary points of confusion for new JS developers. In JS a function is always executed in some context, implicit or explicit, and that context decides the value of this inside the function. The same function can be invoked with any explicit context. When the context is not specified, it is window in non-strict mode and undefined in strict mode. You can use the following code to test this.

function A() { return this; }
A(); // returns window
function B() {'use strict'; return this; }
B(); // returns undefined

Have a look below:-

var car = new Car();
car.honk('loud');

When you use the new operator, you create a new object of type Car. When you call car.honk('loud'), the JS interpreter first looks in the car object for the honk method; if it is not found there, it next looks in the Car.prototype object; if the method is not there either, it looks in Car.prototype’s prototype, and so on. Once the method is located, it is invoked in the context of the car object. That means that inside the method, this will be car. This behavior is part of the JS language.

Recall that functions are themselves objects of type Function, which means they too have methods and can in-turn be used as objects! Functions have a method call using which you can explicitly specify the context in which to execute the function.

Car.prototype.honk.call(someObj, 'loud');

This will call honk such that this inside that refers to someObj. Practically someObj could be any object, not necessarily objects of type Car.

There is also an apply() method in Function class. The only difference between this and call() is that, here the second argument is an array of the arguments we need to send to the called function.
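For example:

function describe(adjective, punctuation) {
    return this.name + ' is ' + adjective + punctuation;
}
var someObj = { name: 'My car' };

console.log(describe.call(someObj, 'loud', '!'));    // arguments passed one by one
console.log(describe.apply(someObj, ['loud', '!'])); // same call, arguments passed as an array
// Both print: "My car is loud!"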

In the next section we will put this information to use.

Prototypal inheritance is way different from OOPs

In class-based inheritance the compiler takes care of making inheritance work for you automatically. However, with prototypal inheritance (JS) developers are left to fend for themselves. Prototypal inheritance is really a pattern developed by developers; it is not something the language defines.

The main feature we expect in inheritance is the ability to inherit methods and fields from parent classes, and we should be able to override them if required.

Let’s now try to mimic this behavior in JS.

function Vehicle(color) {
    this.color = color;
}
Vehicle.prototype.honk = function() {
    alert('Honking!');
};

function Car(color) {
    Vehicle.call(this, color);
}
Car.prototype = Object.create(Vehicle.prototype);
Car.prototype.getWheelsCount = function() {
    return 4;
};

function Autorickshaw(color) { // OR TukTuk, take your pick
    Vehicle.call(this, color);
}
Autorickshaw.prototype = Object.create(Vehicle.prototype);
Autorickshaw.prototype.getWheelsCount = function() {
    return 3;
};

Above, the Vehicle.call(this, color) statement executes the Vehicle function in the context of the current object and passes on the color param. This way we have technically made a super() call. So, this.color acts like a field, and this.honk() and this.getWheelsCount() are methods.

The prototype chain formed in this case is:-

Car.prototype -> Vehicle.prototype
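To check that the inheritance actually works:

var car = new Car('red');
console.log(car.color);            // 'red' – set by Vehicle via the super-style call
car.honk();                        // alerts 'Honking!' – found on Vehicle.prototype
console.log(car.getWheelsCount()); // 4 – defined on Car.prototype

var auto = new Autorickshaw('yellow');
console.log(auto.getWheelsCount()); // 3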

Now there is a lot of boilerplate code above. Let’s try to cut that down.

function define(superClass, definition) {
    function Class() {
        if (superClass)
            superClass.apply(this, arguments);
        if (definition.initialize)
            definition.initialize.apply(this, arguments);
    }
    if (superClass)
        Class.prototype = Object.create(superClass.prototype);
    var proto = Class.prototype;
    for (var key in definition) {
        proto[key] = definition[key];
    }
    return Class;
}

var Vehicle = define(null, {
    initialize: function(color) {
        this.color = color;
    },

    honk: function() {
        alert('Honking!');
    }
});

var Car = define(Vehicle, {
    getWheelsCount: function() {
        return 4;
    }
});

var Autorickshaw = define(Vehicle, {
    getWheelsCount: function() {
        return 3;
    }
});

The define function is pretty straightforward. Although, before I continue, note the arguments keyword. This magic variable is available inside every function. It is an “array” of all the arguments supplied to the function when it was invoked. I say array in quotes, since this is not really a standard JS array; it has only a few of the features and methods of an array.
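For example:

function sum() {
    var total = 0;
    for (var i = 0; i < arguments.length; i++) { // arguments has length and indexed access
        total += arguments[i];
    }
    return total;
}
console.log(sum(1, 2, 3)); // 6
// But arguments is not a real array: arguments.forEach, arguments.map, etc. do not exist.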

define internally defines another function, Class, which is the new class we are defining. That means to create an instance we would write new Class(). However, the name we write after the new operator has no significance by itself; the name is just a reference to the actual function object. So, if A = B = Class then new A(), new B() and new Class() will all yield the same result.

The function then iterates over the provided definition object and simply copies its entries, with the same keys, to Class‘s prototype. Finally it returns that function – Class. Astute readers will notice that define feels similar to PrototypeJs’ Object.extend().

Now let’s add some features to define.

function define(superClass, definition) {
    function Class() {
        if (superClass) {
            this.$super = superClass.prototype; //New addition
            superClass.apply(this, arguments);
        }
        if (definition.initialize)
            definition.initialize.apply(this, arguments);
    }
    if (superClass)
        Class.prototype = Object.create(superClass.prototype);
    var proto = Class.prototype;
    for (var key in definition) {
        proto[key] = definition[key];
    }
    return Class;
}

We have just added this.$super, which we can use to access super methods like we do in other languages.

var Vehicle = define(null, {
    initialize: function(color) {
        this.color = color;
    },

    honk: function() {
        alert('Honking!');
    }
});

var Car = define(Vehicle, {
    honk: function() {
        this.$super.honk(); // This will display the Honking! alert.
        alert('Beep Beep');
    }
});
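Using it (still assuming the define helper above):

var car = new Car('red'); // Vehicle's initialize runs via the super-style call
car.honk();               // alerts 'Honking!' (via this.$super) and then 'Beep Beep'
console.log(car.color);   // 'red'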

You may ask, how do we mimic private methods? Well, we simply don’t. We prefix the method name with _ to denote that it is private. A convention is simpler than enforcing a rule. However, if you really want to enforce it, then there is another way to define a class.

function Vehicle(color) {
    var that = this;
    this.color = color;

    function _showColor() { // This is a private method
        alert(that.color);
    }

    this.honk = function() {
        _showColor();
        alert('Honking!');
    };
}

This gives an effect similar to what we have used till now, with the added benefit of private methods. Notice that we defined a variable that. This was needed so that _showColor() could refer to the object; it cannot simply use this, since this has its own (different) meaning inside _showColor().

In this approach we use the power of closures. However, you will notice that this is not as performant as the previous approach, since a new function instance is created per object of type Vehicle. Let’s see how Car can inherit from this.

function Car(color) {
    Vehicle.call(this, color);
    this.getWheelsCount = function () {
        return 4;
    };
}
Car.prototype = new Vehicle();

Here is the key difference. This time Car‘s prototype is not Vehicle.prototype but an object (instance) of Vehicle.

The prototype chain formed in this case is:-

Car.prototype -> new Vehicle() -> Vehicle.prototype

There is still another way to define a class.

function Vehicle(color) {
    this.color = color;
}
Vehicle.prototype = {
    honk: function() {
        alert('Honking!');
    }
};

Here we replaced the default prototype object with another object. Nothing changed, but this is more convenient to read and type.

ECMAScript 6 has a proposal to support class and extends keywords. Finally, in future, we might get actual classes support in JS.

The instanceof operator

Java developers will instantly recognise this. Same as in Java, this operator evaluates to true if the object on the left is of the class type given on the right. Its syntax is:-

object instanceof function

This will make JS follow the prototype chain of object to look for function.prototype. So,

console.log(car instanceof Car); //Is true
console.log(car instanceof Vehicle); //Is true
console.log(car instanceof Autorickshaw); //Is false

But,

var o = Object.create(null);
console.log(o instanceof Object); // Is false

This is false since o is an ex nihilo object and is not an instance of anything; you could say it is just a shell. It too can be used as a map, similar to {} (which is an object of type Object).

JS in browser is single threaded

The JS language does not dictate that the interpreter be single threaded. However, browser interpreters are single threaded, and even a server-side environment like NodeJs runs your JS on a single thread (relying on asynchronous I/O for concurrency). Modern browsers now support the Web Workers API, which can launch background threads.

This is an important fact to keep in mind. No matter when an ajax call completes, the callback you provided for it is not going to fire until whatever code is currently running has finished.

Also note that while JS is processing, the browser is locked up; it can’t respond to user input at that instant. So, if you have a long-running JS job, the browser may stop responding. If you have a modern browser then use Web Workers for that, or divide your task into smaller chunks and periodically call setTimeout() to give control back to the browser before continuing with the next chunk. I have a project, CInk, which does some heavy rendering, where I use this trick.
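A minimal sketch of that chunking trick (the names here are illustrative):

function processInChunks(items, chunkSize, processItem) {
    var i = 0;
    function nextChunk() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
            processItem(items[i]); // do a small slice of the heavy work
        }
        if (i < items.length) {
            setTimeout(nextChunk, 0); // yield to the browser, then continue
        }
    }
    nextChunk();
}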

DOM is not JS

The magic objects we usually see in browser JS, like window, document, etc., are DOM objects defined by the browser. DOM stands for Document Object Model. It is a tree data model which represents the HTML code. These objects are injected by the browser into the JS environment; they are not defined by the JS language specification. The DOM has its own specifications – Document Object Model (DOM) Level 1, 2 and 3 – which were formed after ECMAScript was standardised.

Unfortunately DOMs have a wide list of APIs which varies from browser to browser. For a short list you can see DOM Core on QuirksMode.

However, I am not here to talk about what the DOM is. I only want to hammer in the point that the DOM and JS are separate things. So, when coding in server-side JS, do not expect these DOM objects; you might find other magic global objects there instead.

Concluding

Before I let you go, here is a fun little project I worked on which needs your help. I tried to implement a JS engine from scratch! That was a fun endeavour until I ran out of free time and motivation. I only managed to get the parser working. It is able to parse ES6-compliant JS code and build an AST. The parser is coded by hand and is based on the ES6 Grammar Specs. It is called Just (JS on Rust).

I hope this helps, and happy coding. 🙂

Lessons learned from PhoneGap (Cordova) and jQueryMobile on Android

I recently created my first Android app – JustTodo. It is a simple app but took considerable time to finish due to many unexpected problems. Google wasn’t too helpful either. Here is a list of issues I faced and the solutions I discovered.

General tips

Zooming and Scaling

I presume that you have already encountered suggestions over the net that you should use a viewport meta tag in your HTML page, so that your page is scaled to, say, exactly the device width. For example you may use…

<meta name="viewport" content="width=device-width, initial-scale=1.0">

width=device-width will make sure that your page’s width is equal to the device’s width. Of course, for any device, the width differs based on if it is in landscape or portrait orientation.

initial-scale=1 hints to the mobile browser not to zoom out the page. Mobile browsers typically zoom out the whole page when it is initially loaded, so that the whole page fits on the screen; the user can then tap anywhere to zoom into that exact location. You do not want that to happen in your apps.

However, in spite of the above setting, the browser may still scale your page. This is because a web page designed for the iPhone 3G would look half its size on an iPhone 5, since the iPhone 5 has twice the pixel density. To prevent web pages from breaking on high DPI devices, the devices usually scale them by a fixed factor to make them look like they would on MDPI (Medium Dots Per Inch) devices. Web pages can read the value of window.devicePixelRatio to know the factor by which they have been scaled; it is 1 for MDPI devices.

jQueryMobile tips

Do not minify jqm structure css!

I don’t know the reason or the root cause for this, but when I use the jquery.mobile.structure CSS minified by the Yahoo UI compressor, the fixed header stops resizing on orientation change! The solution is simple: always use the official minified version.

tap event gets fired when swipe is fired

If you are listening on an element for both tap and swipe events, then it is better to replace tap with vclick. This is because tap is fired before swipe. vclick, on the other hand, waits for about 200–300 ms to check whether any other event fires, and only fires if none did. This way you can prevent the user from accidentally clicking an element while trying to swipe it.

Better swipe event detection

Jqm sets the swipe horizontal distance threshold to 30px and the time threshold to 1s, which means that to successfully swipe an element the user needs to drag a finger at least 30px horizontally within 1s. I usually set the time threshold to 2.5s. However, due to scaling, the physical distance covered by 30px on an MDPI device and on an XHDPI device can vary by a lot, which would force users on high-density devices to drag their fingers over a longer physical distance in the same duration. So, the trick is to change the distance threshold such that it covers the same physical distance on all devices.

I wrote the following Javascript function for that.

#!javascript
function getLenInCurrDevice(len) {
    var referenceDPI = 192.2960680247461, // Ref device is Sony Xperia Mini Pro - SK17i.
        pRatio = window.devicePixelRatio, // Ratio of current device DPI on a square inch to that of a MDPI.
        currDPI = referenceDPI * Math.sqrt(pRatio),
        originalWInInch = len / referenceDPI;
    return (originalWInInch / pRatio) * currDPI;
}

For a given distance in px, the above function returns a new distance in px for the current device, such that both distances cover the same physical length on the current device and on the reference device. In this case the reference device is the Sony Xperia Mini Pro SK17i, which has a DPI of about 192.3 and a devicePixelRatio of 1.

If you want to accurately calculate the DPI of your device you can use the DPI Calculator here.
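With that helper, the thresholds can be tuned via jQuery Mobile’s swipe event settings. I believe these live under $.event.special.swipe (verify against the jqm version you are using); a sketch:

#!javascript
$(document).on('mobileinit', function () {
    // Same physical drag distance as 30px on the reference device; 2.5s time threshold.
    $.event.special.swipe.horizontalDistanceThreshold = getLenInCurrDevice(30);
    $.event.special.swipe.durationThreshold = 2500;
});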

Cordova tips

Caution when using Proguard

The build files created for an Android project allow you to enable Proguard on your project. Proguard analyses the Java files and removes everything which is not used. However, it also strips out the Cordova plugin classes, since it does not see them referenced from any Java class. (They are referenced from the cordova.js file.) So, you need to add the following to your Proguard config.

-keep class org.apache.cordova.** { *; }

Minimum plugins needed

This is not documented anywhere, but it seems the minimum plugins needed are App and Device. Frankly, I never tried removing them, so I cannot say whether even those are truly required. Just try it and let me know. 😉

Although I must mention that if you remove the NetworkStatus plugin then occasionally you might see errors related to it in the console. Other than that there is no adverse effect. In my app I have kept it disabled, so that I can ship an app which requires no special permissions. 🙂

Remove junk files and folders

Be sure to delete the assets/www/res and assets/www/specs folders and the assets/www/specs.html file. The first one alone might be about 7MB! Actually the only files needed are cordova.js and the cordova.jar file.

Show native Android context menu

In JustTodo, when the user long presses an item in the list, the app programmatically shows the native context menu from JS. There are two parts to this problem. The first is adding the code which allows JS to open the context menu. The second is preventing the WebView from automatically opening the context menu; more on that later.

Implementing context menu

First step is creating the context menu xml. Below is an example.

res/menu/example_ctx_menu.xml. The xml’s name can be of your choosing.

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android" >
    <item android:id="@+id/edit"
          android:title="@string/edit"/>
    <item android:id="@+id/delete"
          android:title="@string/delete" />    
</menu>

res/values/strings.xml. This maps the key we used in menu xml to the actual string which is displayed to the user.

<?xml version='1.0' encoding='utf-8'?>
<resources>
    <string name="edit">Edit</string>
    <string name="delete">Delete</string>
</resources>

The official way to implement this is using Cordova Plugins. However, I find the technique described here simpler. Be your own best judge.

NativeContextMenu.java

#!java
public class NativeContextMenu {
    private WebView mAppView;
    private DroidGap mGap;

    public NativeContextMenu(DroidGap gap, WebView view) {
      mAppView = view;
      mGap = gap;
      mGap.registerForContextMenu(mAppView);
    }

    @JavascriptInterface
    public void showCtxMenu() {
        mGap.runOnUiThread(new Runnable() {
            @Override
            public void run() {
                mGap.openContextMenu(mAppView);
            }
        });
    }

    private void raiseJSEvent(String event) {
        mGap.sendJavascript("$(document).trigger('" + event + "');");
    }

    boolean onContextItemSelected(MenuItem item) {
        switch (item.getItemId()) {
        case R.id.edit:
            raiseJSEvent("menu.item.edit");
            return true;
        case R.id.delete:
            raiseJSEvent("menu.item.delete");
            return true;
        }
        return false;
    }

    void onCreateContextMenu(ContextMenu menu, View v,
                                    ContextMenuInfo menuInfo) {
        mGap.getMenuInflater().inflate(R.menu.example_ctx_menu, menu);
        raiseJSEvent("menu.opened");
    }

    void onContextMenuClosed(Menu menu) {
        raiseJSEvent("menu.closed");
    }
}

YourCordovaActivity.java

#!java
public class YourCordovaActivity extends DroidGap {
    private NativeContextMenu ctxMenu;

    @Override
    public void onCreate(Bundle savedInstanceState) {

        super.onCreate(savedInstanceState);

        super.loadUrl(Config.getStartUrl());

        ctxMenu = new NativeContextMenu(this, appView);
        appView.addJavascriptInterface(ctxMenu, "ContextMenu");
    }

    @Override
    public boolean onContextItemSelected(MenuItem item) {
        return ctxMenu.onContextItemSelected(item) ? true : super.onContextItemSelected(item);
    }

    @Override
    public void onContextMenuClosed(Menu menu) {
        super.onContextMenuClosed(menu);
        ctxMenu.onContextMenuClosed(menu);
    }

    @Override
    public void onCreateContextMenu(ContextMenu menu, View v,
                                    ContextMenuInfo menuInfo) {
        super.onCreateContextMenu(menu, v, menuInfo);
        ctxMenu.onCreateContextMenu(menu, v, menuInfo);
    }
}

Now ContextMenu.showCtxMenu() would be available to you in Javascript.

example.js

#!javascript
$('element').on('taphold', function  () { // taphold event is defined by jqm
    ContextMenu.showCtxMenu(); // Shows the context menu.
                               // Also the user will get a haptic feedback.
});

$(document).on('menu.item.edit', function () {
    console.log('Edit option was selected.');    
});

Preventing WebView from automatically opening the context menu

The big problem you will face here is that when you long press, the context menu will open twice – once from your call in the JS code, and once from the WebView itself. WebView has a method setLongClickable(), which does not seem to have any effect even if you set it after calling registerForContextMenu(); WebView calls performLongClick() directly without checking isLongClickable(). So the other way to handle this is to make NativeContextMenu also implement OnLongClickListener.

Changed code.

NativeContextMenu.java

#!java
public class NativeContextMenu implements OnLongClickListener {  // <---
    private WebView mAppView;
    private DroidGap mGap;

    public NativeContextMenu(DroidGap gap, WebView view) {
      mAppView = view;
      mGap = gap;
      mGap.registerForContextMenu(mAppView);
      mAppView.setOnLongClickListener(this); // <---
    }

    @JavascriptInterface
    public void showCtxMenu() {
        mGap.runOnUiThread(new Runnable() {
            @Override
            public void run() {
                mGap.openContextMenu(mAppView);
            }
        });
    }

    private void raiseJSEvent(String event) {
        mGap.sendJavascript("$(document).trigger('" + event + "');");
    }

    boolean onContextItemSelected(MenuItem item) {
        switch (item.getItemId()) {
        case R.id.edit:
            raiseJSEvent("menu.item.edit");
            return true;
        case R.id.delete:
            raiseJSEvent("menu.item.delete");
            return true;
        }
        return false;
    }

    void onCreateContextMenu(ContextMenu menu, View v,
                                    ContextMenuInfo menuInfo) {
        mGap.getMenuInflater().inflate(R.menu.example_ctx_menu, menu);
        raiseJSEvent("menu.opened");
    }

    void onContextMenuClosed(Menu menu) {
        raiseJSEvent("menu.closed");
    }

    @Override
    public boolean onLongClick(View v) {  //<---
        return true; // We return true, to let performLongClick() know that we handled the long press.
    }
}

The only side effect of the above code is that whenever the user long presses and you do not show a context menu, the user will still get the haptic feedback. The only way to circumvent that is by sub-classing CordovaWebView and overriding performLongClick().

The Emperor’s Old Clothes

This is a republication, from ACM’s archive, of Prof. C. A. R. Hoare‘s lecture at the 1980 ACM Turing Award ceremony. Prof. Hoare is popularly known for inventing QuickSort. The original piece can be found as a PDF here. The primary reason for republishing it is that the source PDF’s fonts are too horrible to read.


Inventor of QuickSort

C. A. R. Hoare

The 1980 ACM Turing Award was presented to Charles Antony Richard Hoare, Professor of Computation at the University of Oxford, England, by Walter Carlson, Chairman of the Awards committee, at the ACM Annual Conference in Nashville, Tennessee, October 27, 1980.

Professor Hoare was selected by the General Technical Achievement Award Committee for his fundamental contributions to the definition and design of programming languages. His work is characterized by an unusual combination of insight, originality, elegance, and impact. He is best known for his work on axiomatic definitions of programming languages through the use of techniques popularly referred to as axiomatic semantics. He developed ingenious algorithms such as Quicksort and was responsible for inventing and promulgating advanced data structuring techniques in scientific programming languages. He has also made important contributions to operating systems through the study of monitors. His most recent work is on communicating sequential processes.

Prior to his appointment to the University of Oxford in 1977, Professor Hoare was Professor of Computer Science at The Queen’s University in Belfast, Ireland from 1968 to 1977 and was a Visiting Professor at Stanford University in 1973. From 1960 to 1968 he held a number of positions with Elliot Brothers, Ltd., England.

Professor Hoare has published extensively and is on the editorial boards of a number of the world’s foremost computer science journals. In 1973 he received the ACM Programming Systems and Languages Paper Award. Professor Hoare became a Distinguished Fellow of the British Computer Society in 1978 and was awarded the degree of Doctor of Science Honoris Causa by the University of Southern California in 1979.

The Turing Award is the Association for Computing Machinery’s highest award for technical contributions to the computing community. It is presented each year in commemoration of Dr. A. M. Turing, an English mathematician who made many important contributions to the computing sciences.


The Emperor’s Old Clothes by Charles Antony Richard Hoare, Oxford University, England

The author recounts his experiences in the implementation, design, and standardization of computer programming languages, and issues a warning for the future. Key Words and Phrases: programming languages, history of programming languages, lessons for the future CR Categories: 1.2, 2.11, 4.2

My first and most pleasant duty in this lecture is to express my profound gratitude to the Association for Computing Machinery for the great honor which they have bestowed on me and for this opportunity to address you on a topic of my choice. What a difficult choice it is! My scientific achievements, so amply recognized by this award, have already been amply described in the scientific literature. Instead of repeating the abstruse technicalities of my trade, I would like to talk informally about myself, my personal experiences, my hopes and fears, my modest successes, and my rather less modest failures. I have learned more from my failures than can ever be revealed in the cold print of a scientific article and now I would like you to learn from them, too. Besides, failures are much more fun to hear about afterwards; they are not so funny at the time.

I start my story in August 1960, when I became a programmer with a small computer manufacturer, a division of Elliott Brothers (London) Ltd., where in the next eight years I was to receive my primary education in computer science. My first task was to implement for the new Elliott 803 computer, a library subroutine for a new fast method of internal sorting just invented by Shell. I greatly enjoyed the challenge of maximizing efficiency in the simple decimal-addressed machine code of those days. My boss and tutor, Pat Shackleton, was very pleased with my completed program. I then said timidly that I thought I had invented a sorting method that would usually run faster than SHELLSORT, without taking much extra store. He bet me six pence that I had not. Although my method was very difficult to explain, he finally agreed that I had won my bet.

I wrote several other tightly coded library subroutines but after six months I was given a much more important task — that of designing a new advanced high level programming language for the company’s next computer, the Elliott 503, which was to have the same instruction code as the existing 803 but run sixty times faster. In spite of my education in classical languages, this was a task for which I was even less qualified than those who undertake it today. By great good fortune there came into my hands a copy of the Report on the International Algorithmic Language ALGOL 60. Of course, this language was obviously too complicated for our customers. How could they ever understand all those begins and ends when even our salesmen couldn’t?

Around Easter 1961, a course on ALGOL 60 was offered in Brighton, England, with Peter Naur, Edsger W. Dijkstra, and Peter Landin as tutors. I attended this course with my colleague in the language project, Jill Pym, our divisional Technical Manager, Roger Cook, and our Sales Manager, Paul King. It was there that I first learned about recursive procedures and saw how to program the sorting method which I had earlier found such difficulty in explaining. It was there that I wrote the procedure, immodestly named QUICKSORT, on which my career as a computer scientist is founded. Due credit must be paid to the genius of the designers of ALGOL 60 who included recursion in their language and enabled me to describe my invention so elegantly to the world. I have regarded it as the highest goal of programming language design to enable good ideas to be elegantly expressed.

After the ALGOL course in Brighton, Roger Cook was driving me and my colleagues back to London when he suddenly asked, “Instead of designing a new language, why don’t we just implement ALGOL 60?” We all instantly agreed — in retrospect, a very lucky decision for me. But we knew we did not have the skill or experience at that time to implement the whole language, so I was commissioned to design a modest subset. In that design I adopted certain basic principles which I believe to be as valid today as they were then.

  1. The first principle was security: The principle that every syntactically incorrect program should be rejected by the compiler and that every syntactically correct program should give a result or an error message that was predictable and comprehensible in terms of the source language program itself. Thus no core dumps should ever be necessary. It was logically impossible for any source language program to cause the computer to run wild, either at compile time or at run time. A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to — they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.
  2. The second principle in the design of the implementation was brevity of the object code produced by the compiler and compactness of run time working data. There was a clear reason for this: The size of main storage on any computer is limited and its extension involves delay and expense. A program exceeding the limit, even by one word, is impossible to run, especially since many of our customers did not intend to purchase backing stores. This principle of compactness of object code is even more valid today, when processors are trivially cheap in comparison with the amounts of main store they can address, and backing stores are comparatively even more expensive and slower by many orders of magnitude. If as a result of care taken in implementation the available hardware remains more powerful than may seem necessary for a particular application, the applications programmer can nearly always take advantage of the extra capacity to increase the quality of his program, its simplicity, its ruggedness, and its reliability.
  3. The third principle of our design was that the entry and exit conventions for procedures and functions should be as compact and efficient as for tightly coded machine-code subroutines. I reasoned that procedures are one of the most powerful features of a high level language, in that they both simplify the programming task and shorten the object code. Thus there must be no impediment to their frequent use.
  4. The fourth principle was that the compiler should use only a single pass. The compiler was structured as a collection of mutually recursive procedures, each capable of analyzing and translating a major syntactic unit of the language — a statement, an expression, a declaration, and so on. It was designed and documented in ALGOL 60, and then coded into decimal machine code using an explicit stack for recursion. Without the ALGOL 60 concept of recursion, at that time highly controversial, we could not have written this compiler at all.

I can still recommend single-pass top-down recursive descent both as an implementation method and as a design principle for a programming language. First, we certainly want programs to be read by people and people prefer to read things once in a single pass. Second, for the user of a time-sharing or personal computer system, the interval between typing in a program (or amendment) and starting to run that program is wholly unproductive. It can be minimized by the high speed of a single pass compiler. Finally, to structure a compiler according to the syntax of its input language makes a great contribution to ensuring its correctness. Unless we have absolute confidence in this, we can never have confidence in the results of any of our programs.

To observe these four principles, I selected a rather small subset of ALGOL 60. As the design and implementation progressed, I gradually discovered methods of relaxing the restrictions without compromising any of the principles. So in the end we were able to implement nearly the full power of the whole language, including even recursion, although several features were removed and others were restricted.

In the middle of 1963, primarily as a result of the work of Jill Pym and Jeff Hillmore, the first version of our compiler was delivered. After a few months we began to wonder whether anyone was using the language or taking any notice of our occasional reissue, incorporating improved operating methods. Only when a customer had a complaint did he contact us and many of them had no complaints. Our customers have now moved on to more modern computers and more fashionable languages but many have told me of their fond memories of the Elliott ALGOL System and the fondness is not due just to nostalgia, but to the efficiency, reliability, and convenience of that early simple ALGOL System.

As a result of this work on ALGOL, in August 1962, I was invited to serve on the new Working Group 2.1 of IFIP, charged with responsibility for maintenance and development of ALGOL. The group’s first main task was to design a subset of the language which would remove some of its less successful features. Even in those days and even with such a simple language, we recognized that a subset could be an improvement on the original. I greatly welcomed the chance of meeting and hearing the wisdom of many of the original language designers. I was astonished and dismayed at the heat and even rancor of their discussions. Apparently the original design of ALGOL 60 had not proceeded in that spirit of dispassionate search for truth which the quality of the language had led me to suppose.

In order to provide relief from the tedious and argumentative task of designing a subset, the working group allocated one afternoon to discussing the features that should be incorporated in the next design of the language. Each member was invited to suggest the improvement he considered most important. On October 11, 1963, my suggestion was to pass on a request of our customers to relax the ALGOL 60 rule of compulsory declaration of variable names and adopt some reasonable default convention such as that of FORTRAN. I was astonished by the polite but firm rejection of this seemingly innocent suggestion: It was pointed out that the redundancy of ALGOL 60 was the best protection against programming and coding errors which could be extremely expensive to detect in a running program and even more expensive not to. The story of the Mariner space rocket to Venus, lost because of the lack of compulsory declarations in FORTRAN, was not to be published until later. I was eventually persuaded of the need to design programming notations so as to maximize the number of errors which cannot be made, or if made, can be reliably detected at compile time. Perhaps this would make the text of programs longer. Never mind! Wouldn’t you be delighted if your Fairy Godmother offered to wave her wand over your program to remove all its errors and only made the condition that you should write out and key in your whole program three times! The way to shorten programs is to use procedures, not to omit vital declarative information.

Among the other proposals for the development of a new ALGOL was that the switch declaration of ALGOL 60 should be replaced by a more general feature, namely an array of label-valued variables and that a program should be able to change the values of these variables by assignment. I was very much opposed to this idea, similar to the assigned GO TO of FORTRAN, because I had found a surprising number of tricky problems in the implementation of even the simple labels and switches of ALGOL 60. I could see even more problems in the new feature including that of jumping back into a block after it had been exited. I was also beginning to suspect that programs that used a lot of labels were more difficult to understand and get correct and that programs that assigned new values to label variables would be even more difficult still.

It occurred to me that the appropriate notation to replace the ALGOL 60 switch should be based on that of the conditional expression of ALGOL 60, which selects between two alternative actions according to the value of a Boolean expression. So I suggested the notation for a “case expression” which selects between any number of alternatives according to the value of an integer expression. That was my second language design proposal. I am still most proud of it, because it raises essentially no problems either for the implementor, the programmer, or the reader of a program. Now, after more than fifteen years, there is the prospect of international standardization of a language incorporating this notation — a remarkably short interval compared with other branches of engineering.

Back again to my work at Elliott’s. After the unexpected success of our ALGOL Compiler, our thoughts turned to a more ambitious project: To provide a range of operating system software for larger configurations of the 503 computer, with card readers, line printers, magnetic tapes, and even a core backing store which was twice as cheap and twice as large as main store, but fifteen times slower. This was to be known as the Elliott 503 Mark II software system.

It comprised:

  1. An assembler for a symbolic assembly language in which all the rest of the software was to be written.
  2. A scheme for automatic administration of code and data overlays, either from magnetic tape or from core backing store. This was to be used by the rest of the software.
  3. A scheme for automatic buffering of all input and output on any available peripheral device, again, to be used by all the other software.
  4. A filing system on magnetic tape with facilities for editing and job control.
  5. A completely new implementation of ALGOL 60, which removed all the nonstandard restrictions which we had imposed on our first implementation.
  6. A compiler for FORTRAN as it was then.

I wrote documents which described the relevant concepts and facilities and we sent them to existing and prospective customers. Work started with a team of fifteen programmers and the deadline for delivery was set some eighteen months ahead in March 1965. After initiating the design of the Mark II software, I was suddenly promoted to the dizzying rank of Assistant Chief Engineer, responsible for advanced development and design of the company’s products, both hardware and software.

Although I was still managerially responsible for the 503 Mark II software, I gave it less attention than the company’s new products and almost failed to notice when the deadline for its delivery passed without event. The programmers revised their implementation schedules and a new delivery date was set some three months ahead in June 1965. Needless to say, that day also passed without event. By this time, our customers were getting angry and my managers instructed me to take personal charge of the project. I asked the senior programmers once again to draw up revised schedules, which again showed that the software could be delivered within another three months. I desperately wanted to believe it but I just could not. I disregarded the schedules and began to dig more deeply into the project.

It turned out that we had failed to make any overall plans for the allocation of our most limited resource — main storage. Each programmer expected this to be done automatically, either by the symbolic assembler or by the automatic overlay scheme. Even worse, we had failed to simply count the space used by our own software which was already filling the main store of the computer, leaving no space for our customers to run their programs. Hardware address length limitations prohibited adding more main storage.

Clearly, the original specifications of the software could not be met and had to be drastically curtailed. Experienced programmers and even managers were called back from other projects. We decided to concentrate first on delivery of the new compiler for ALGOL 60, which careful calculation showed would take another four months. I impressed upon all the programmers involved that this was no longer just a prediction; it was a promise; if they found they were not meeting their promise, it was their personal responsibility to find ways and means of making good.

The programmers responded magnificently to the challenge. They worked nights and days to ensure completion of all those items of software which were needed by the ALGOL compiler. To our delight, they met the scheduled delivery date; it was the first major item of working software produced by the company over a period of two years.

Our delight was short-lived; the compiler could not be delivered. Its speed of compilation was only two characters per second which compared unfavorably with the existing version of the compiler operating at about a thousand characters per second. We soon identified the cause of the problem: It was thrashing between the main store and the extension core backing store which was fifteen times slower. It was easy to make some simple improvements, and within a week we had doubled the speed of compilation to four characters per second. In the next two weeks of investigation and reprogramming, the speed was doubled again to eight characters per second. We could see ways in which within a month this could be still further improved; but the amount of reprogramming required was increasing and its effectiveness was decreasing; there was an awful long way to go. The alternative of increasing the size of the main store so frequently adopted in later failures of this kind was prohibited by hardware addressing limitations.

There was no escape: The entire Elliott 503 Mark II software project had to be abandoned, and with it, over thirty man-years of programming effort, equivalent to nearly one man’s active working life, and I was responsible, both as designer and as manager, for wasting it.

A meeting of all our 503 customers was called and Roger Cook, who was then manager of the computing division, explained to them that not a single word of the long-promised software would ever be delivered to them. He adopted a very quiet tone of delivery, which ensured that none of the customers could interrupt, murmur in the background, or even shuffle in their seats. I admired but could not share his calm. Over lunch our customers were kind to try to comfort me. They had realized long ago that software to the original specification could never have been delivered, and even if it had been, they would not have known how to use its sophisticated features, and anyway many such large projects get cancelled before delivery. In retrospect, I believe our customers were fortunate that hardware limitations had protected them from the arbitrary excesses of our software designs. In the present day, users of microprocessors benefit from a similar protection — but not for much longer.

At that time I was reading the early documents describing the concepts and features of the newly announced OS 360, and of a new time-sharing project called Multics. These were far more comprehensive, elaborate, and sophisticated than anything I had imagined, even in the first version of the 503 Mark II software. Clearly IBM and MIT must be possessed of some secret of successful software design and implementation whose nature I could not even begin to guess at. It was only later that they realized they could not either.

So I still could not see how I had brought such a great misfortune upon my company. At the time I was convinced that my managers were planning to dismiss me. But no, they were intending a far more severe punishment. “O.K. Tony,” they said. “You got us into this mess and now you’re going to get us out.” “But I don’t know how,” I protested, but their reply was simple. “Well then, you’ll have to find out.” They even expressed confidence that I could do so. I did not share their confidence. I was tempted to resign. It was the luckiest of all my lucky escapes that I did not.

Of course, the company did everything they could to help me. They took away my responsibility for hardware design and reduced the size of my programming teams. Each of my managers explained carefully his own theory of what had gone wrong and all the theories were different. At last, there breezed into my office the most senior manager of all, a general manager of our parent company, Andrew St. Johnston. I was surprised that he had even heard of me. “You know what went wrong?” he shouted — he always shouted — “You let your programmers do things which you yourself do not understand.” I stared in astonishment. He was obviously out of touch with present day realities. How could one person ever understand the whole of a modern software product like the Elliott 503 Mark II software system?

I realized later that he was absolutely right; he had diagnosed the true cause of the problem and he had planted the seed of its later solution.

I still had a team of some forty programmers and we needed to retain the good will of customers for our new machine and even regain the confidence of the customers for our old one. But what should we actually plan to do when we knew only one thing — that all our previous plans had failed? I therefore called an all-day meeting of our senior programmers on October 22, 1965, to thrash out the question between us. I still have the notes of that meeting. We first listed the recent major grievances of our customers: Cancellation of products, failure to meet deadlines, excessive size of software, “… not justified by the usefulness of the facilities provided,” excessively slow programs, failure to take account of customer feedback; “Earlier attention paid to quite minor requests of our customers might have paid as great dividends of goodwill as the success of our most ambitious plans.”

We then listed our own grievances: Lack of machine time for program testing, unpredictability of machine time, lack of suitable peripheral equipment, unreliability of the hardware even when available, dispersion of programming staff, lack of equipment for keypunching of programs, lack of firm hardware delivery dates, lack of technical writing effort for documentation, lack of software knowledge outside of the programming group, interference from higher managers who imposed decisions, “… without a full realization of the more intricate implications of the matter,” and overoptimism in the face of pressure from customers and the Sales Department.

But we did not seek to excuse our failure by these grievances. For example, we admitted that it was the duty of programmers to educate their managers and other departments of the company by “… presenting the necessary information in a simple palatable form.” The hope “… that deficiencies in original program specifications could be made up by the skill of a technical writing department, was misguided; the design of a program and the design of its specification must be undertaken in parallel by the same person, and they must interact with each other. A lack of clarity in specification is one of the surest signs of a deficiency in the program it describes, and the two faults must be removed simultaneously before the project is embarked upon.” I wish I had followed this advice in 1963; I wish we all would follow it today.

My notes of the proceedings of that day in October 1965 include a complete section devoted to failings within the software group; this section rivals the most abject self-abasement of a revisionist official in the Chinese cultural revolution. Our main failure was over-ambition. “The goals which we have attempted have obviously proved to be far beyond our grasp.” There was also failure in prediction, in estimation of program size and speed, of effort required, in planning the coordination and interaction of programs, in providing an early warning that things were going wrong. There were faults in our control of program changes, documentation, liaison with other departments, with our management, and with our customers. We failed in giving clear and stable definitions of the responsibilities of individual programmers and project leaders — Oh, need I go on? What was amazing was that a large team of highly intelligent programmers could labor so hard and so long on such an unpromising project. You know, you shouldn’t trust us intelligent programmers. We can think up such good arguments for convincing ourselves and each other of the utterly absurd. Especially don’t believe us when we promise to repeat an earlier success, only bigger and better next time.

The last section of our inquiry into the failure dealt with the criteria of quality of software. “In the recent struggle to deliver any software at all, the first casualty has been consideration of the quality of the software delivered. The quality of software is measured by a number of totally incompatible criteria, which must be carefully balanced in the design and implementation of every program.” We then made a list of no less than seventeen criteria which has been published in a guest editorial in Volume 2 of the journal, Software Practice and Experience.

How did we recover from the catastrophe? First, we classified our 503 customers into groups, according to the nature and size of the hardware configurations which they had bought — for example, those with magnetic tapes were all in one group. We assigned to each group of customers a small team of programmers and told the team leader to visit the customers to find out what they wanted; to select the easiest request to fulfill, and to make plans (but not promises) to implement it. In no case would we consider a request for a feature that would take more than three months to implement and deliver. The project leader would then have to convince me that the customers’ request was reasonable, that the design of the new feature was appropriate, and that the plans and schedules for implementation were realistic. Above all, I did not allow anything to be done which I did not myself understand. It worked! The software requested began to be delivered on the promised dates. With an increase in our confidence and that of our customers, we were able to undertake fulfilling slightly more ambitious requests. Within a year we had recovered from the disaster. Within two years, we even had some moderately satisfied customers.

Thus we muddled through by common sense and compromise to something approaching success. But I was not satisfied. I did not see why the design and implementation of an operating system should be so much more difficult than that of a compiler. This is the reason why I have devoted my later research to problems of parallel programming and language constructs which would assist in clear structuring of operating systems — constructs such as monitors and communicating processes.

While I was working at Elliotts’, I became very interested in techniques for formal definition of programming languages. At that time, Peter Landin and Christopher Strachey proposed to define a programming language in a simple functional notation, that specified the effect of each command on a mathematically defined abstract machine. I was not happy with this proposal because I felt that such a definition must incorporate a number of fairly arbitrary representation decisions and would not be much simpler in principle than an implementation of the language for a real machine. As an alternative, I proposed that a programming language definition should be formalized as a set of axioms, describing the desired properties of programs written in the language. I felt that carefully formulated axioms would leave an implementation the necessary freedom to implement the language efficiently on different machines and enable the programmer to prove the correctness of his programs. But I did not see how to actually do it. I thought that it would need lengthy research to develop and apply the necessary techniques and that a university would be a better place to conduct such research than industry. So I applied for a chair in Computer Science at the Queen’s University of Belfast where I was to spend nine happy and productive years. In October 1968, as I unpacked my papers in my new home in Belfast, I came across an obscure preprint of an article by Bob Floyd entitled, “Assigning Meanings to Programs.” What a stroke of luck! At last I could see a way to achieve my hopes for my research. Thus I wrote my first paper on the axiomatic approach to computer programming, published in the Communications of the ACM in October 1969.
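One hint at the shape these axioms eventually took, in the 1969 paper he mentions: in modern notation, the axiom of assignment says that {P[E/x]} x := E {P} holds for any postcondition P, where P[E/x] is P with the expression E substituted for every occurrence of x. A small worked instance of my own, for illustration: to establish {y + 1 > 0} x := y + 1 {x > 0}, substitute y + 1 for x in the postcondition x > 0, obtaining y + 1 > 0, which is exactly the required precondition.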

Just recently, I have discovered that an early advocate of the assertional method of program proving was none other than Alan Turing himself. On June 24, 1950 at a conference in Cambridge, he gave a short talk entitled, “Checking a Large Routine” which explains the idea with great clarity. “How can one check a large routine in the sense of making sure that it’s right? In order that the man who checks may not have too difficult a task, the programmer should make a number of definite assertions which can be checked individually, and from which the correctness of the whole program easily follows.”

“Consider the analogy of checking an addition. If the sum is given [just as a column of figures with the answer below] one must check the whole at one sitting. But if the totals for the various columns are given, [with the carries added in separately], the checker’s work is much easier, being split up into the checking of the various assertions [that each column is correctly added] and the small addition [of the carries to the total]. This principle can be applied to the checking of a large routine but we will illustrate the method by means of a small routine viz. one to obtain n factorial without the use of a multiplier. Unfortunately there is no coding system sufficiently generally known to justify giving this routine in full, but a flow diagram will be sufficient for illustration.”

That brings me back to the main theme of my talk, the design of programming languages.
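Turing gave only a flow diagram, so the following is a rough reconstruction of the routine he describes, in TypeScript with invented names: n factorial computed without a multiplier, decorated with the kind of individually checkable assertions he advocates.

```typescript
// Rough reconstruction of Turing's example: n! by repeated addition only.
// Each console.assert is a definite assertion which can be checked
// individually, and from which the correctness of the whole easily follows.
function factorialWithoutMultiplier(n: number): number {
  let fact = 1; // assertion: fact === k! for the current k (initially k = 0)
  for (let k = 0; k < n; k++) {
    // Form fact * (k + 1) using addition alone.
    let sum = 0;
    for (let i = 0; i < k + 1; i++) {
      sum += fact; // assertion: after this step, sum === fact * (i + 1)
    }
    console.assert(sum === fact * (k + 1), "inner loop assertion failed");
    fact = sum; // assertion re-established: fact === (k + 1)!
  }
  return fact; // assertion: fact === n!
}

console.assert(factorialWithoutMultiplier(5) === 120, "5! should be 120");
console.log(factorialWithoutMultiplier(5)); // 120
```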

During the period, August 1962 to October 1966, I attended every meeting of the IFIP ALGOL working group. After completing our labors on the IFIP ALGOL subset, we started on the design of ALGOL X, the intended successor to ALGOL 60. More suggestions for new features were made and in May 1965, Niklaus Wirth was commissioned to collate them into a single language design. I was delighted by his draft design which avoided all the known defects of ALGOL 60 and included several new features, all of which could be simply and efficiently implemented, and safely and conveniently used.

The description of the language was not yet complete. I worked hard on making suggestions for its improvement and so did many other members of our group. By the time of the next meeting in St. Pierre de Chartreuse, France in October 1965, we had a draft of an excellent and realistic language design which was published in June 1966 as “A Contribution to the Development of ALGOL”, in the Communications of the ACM. It was implemented on the IBM 360 and given the title ALGOL W by its many happy users. It was not only a worthy successor of ALGOL 60, it was even a worthy predecessor of PASCAL.

At the same meeting, the ALGOL committee had placed before it, a short, incomplete and rather incomprehensible document, describing a different, more ambitious and, to me, a far less attractive language. I was astonished when the working group, consisting of all the best known international experts of programming languages, resolved to lay aside the commissioned draft on which we had all been working and swallow a line with such an unattractive bait.

This happened just one week after our inquest on the 503 Mark II software project. I gave desperate warnings against the obscurity, the complexity, and over ambition of the new design, but my warnings went unheeded. I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.

The first method is far more difficult. It demands the same skill, devotion, insight, and even inspiration as the discovery of the simple physical laws which underlie the complex phenomena of nature. It also requires a willingness to accept objectives which are limited by physical, logical, and technological constraints, and to accept a compromise when conflicting objectives cannot be met. No committee will ever do this until it is too late.

So it was with the ALGOL committee. Clearly the draft which it preferred was not yet perfect. So a new and final draft of the new ALGOL language design was promised in three months’ time; it was to be submitted to the scrutiny of a subgroup of four members including myself. Three months came and went, without a word of the new draft. After six months, the subgroup met in the Netherlands. We had before us a longer and thicker document, full of errors corrected at the last minute, describing yet another but to me, equally unattractive language. Niklaus Wirth and I spent some time trying to get removed some of the deficiencies in the design and in the description, but in vain. The completed final draft of the language was promised for the next meeting of the full ALGOL committee in three months time.

Three months came and went — not a word of the new draft appeared. After six months, in October 1966, the ALGOL working group met in Warsaw. It had before it an even longer and thicker document, full of errors corrected at the last minute, describing equally obscurely yet another different, and to me, equally unattractive language. The experts in the group could not see the defects of the design and they firmly resolved to adopt the draft, believing it would be completed in three months. In vain, I told them it would not. In vain, I urged them to remove some of the technical mistakes of the language, the predominance of references, the default type conversions. Far from wishing to simplify the language, the working group actually asked the authors to include even more complex features like overloading of operators and concurrency.
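ALGOL 68 itself cannot be reproduced here, but a loose, hypothetical analogy in TypeScript/JavaScript may convey why silent default conversions are worth worrying about: the language quietly converts an operand, hands back a value the author never intended, and reports no error at all.

```typescript
// Loose analogy only (this is TypeScript/JavaScript, not ALGOL 68):
// a silent default conversion turns intended arithmetic into concatenation.
const fromInput = "2";  // a value that happens to arrive as a string
const increment = 3;

const surprising = fromInput + increment;        // "23" (string concatenation, accepted silently)
const intended = Number(fromInput) + increment;  // 5 (the conversion made explicit)

console.log(surprising, intended);
```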

When any new language design project is nearing completion, there is always a mad rush to get new features added before standardization. The rush is mad indeed, because it leads into a trap from which there is no escape. A feature which is omitted can always be added later, when its design and its implications are well understood. A feature which is included before it is fully understood can never be removed later.

At last, in December 1968, in a mood of black depression, I attended the meeting in Munich at which our long-gestated monster was to come to birth and receive the name ALGOL 68. By this time, a number of other members of the group had become disillusioned, but too late: The committee was now packed with supporters of the language, which was sent up for promulgation by the higher committees of IFIP. The best we could do was to send with it a minority report, stating our considered view that, “… as a tool for the reliable creation of sophisticated programs, the language was a failure.” This report was later suppressed by IFIP, an act which reminds me of the lines of Hilaire Belloc,

But scientists, who ought to know,
Assure us that it must be so.
Oh, let us never, never doubt
What nobody is sure about.

I did not attend any further meetings of that working group. I am pleased to report that the group soon came to realize that there was something wrong with their language and with its description; they labored hard for six more years to produce a revised description of the language. It is a great improvement but I’m afraid that, in my view, it does not remove the basic technical flaws in the design, nor does it begin to address the problem of its overwhelming complexity.

Programmers are always surrounded by complexity; we cannot avoid it. Our applications are complex because we are ambitious to use our computers in ever more sophisticated ways. Programming is complex because of the large number of conflicting objectives for each of our programming projects. If our basic tool, the language in which we design and code our programs, is also complicated, the language itself becomes part of the problem rather than part of its solution.

Now let me tell you about yet another overambitious language project. Between 1965 and 1970 I was a member and even chairman of the Technical Committee No. 10 of the European Computer Manufacturers Association. We were charged first with a watching brief and then with the standardization of a language to end all languages, designed to meet the needs of all computer applications, both commercial and scientific, by the greatest computer manufacturer of all time. I had studied with interest and amazement, even a touch of amusement, the four initial documents describing a language called NPL, which appeared between March 1 and November 30, 1964. Each was more ambitious and absurd than the last in its wishful speculations. Then the language began to be implemented and a new series of documents began to appear at six-monthly intervals, each describing the final frozen version of the language, under its final frozen name PL/I.

But to me, each revision of the document simply showed how far the initial F-level implementation had progressed. Those parts of the language that were not yet implemented were still described in free-flowing flowery prose giving promise of unalloyed delight. In the parts that had been implemented, the flowers had withered; they were choked by an undergrowth of explanatory footnotes, placing arbitrary and unpleasant restrictions on the use of each feature and loading upon a programmer the responsibility for controlling the complex and unexpected side-effects and interaction effects with all the other features of the language.

At last, March 11, 1968, the language description was nobly presented to the waiting world as a worthy candidate for standardization. But it was not. It had already undergone some seven thousand corrections and modifications at the hand of its original designers. Another twelve editions were needed before it was finally published as a standard in 1976. I fear that this was not because everybody concerned was satisfied with its design, but because they were thoroughly bored and disillusioned.

For as long as I was involved in this project, I urged that the language be simplified, if necessary by subsetting, so that the professional programmer would be able to understand it and be able to take responsibility for the correctness and cost-effectiveness of his programs. I urged that the dangerous features such as defaults and ON-conditions be removed. I knew that it would be impossible to write a wholly reliable compiler for a language of this complexity and impossible to write a wholly reliable program when the correctness of each part of the program depends on checking that every other part of the program has avoided all the traps and pitfalls of the language.

At first I hoped that such a technically unsound project would collapse but I soon realized it was doomed to success. Almost anything in software can be implemented, sold, and even used given enough determination. There is nothing a mere scientist can say that will stand against the flood of a hundred million dollars. But there is one quality that cannot be purchased in this way — and that is reliability. The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.

All this happened a long time ago. Can it be regarded as relevant in a conference dedicated to a preview of the Computer Age that lies ahead? It is my gravest fear that it can. The mistakes which have been made in the last twenty years are being repeated today on an even grander scale. I refer to a language design project which has generated documents entitled strawman, woodenman, tinman, ironman, steelman, green and finally now ADA. This project has been initiated and sponsored by one of the world’s most powerful organizations, the United States Department of Defense. Thus it is ensured of an influence and attention quite independent of its technical merits and its faults and deficiencies threaten us with far greater dangers. For none of the evidence we have so far can inspire confidence that this language has avoided any of the problems that have afflicted other complex language projects of the past.

I have been giving the best of my advice to this project since 1975. At first I was extremely hopeful. The original objectives of the language included reliability, readability of programs, formality of language definition, and even simplicity. Gradually these objectives have been sacrificed in favor of power, supposedly achieved by a plethora of features and notational conventions, many of them unnecessary and some of them, like exception handling, even dangerous. We relive the history of the design of the motor car. Gadgets and glitter prevail over fundamental concerns of safety and economy.

It is not too late! I believe that by careful pruning of the ADA language, it is still possible to select a very powerful subset that would be reliable and efficient in implementation and safe and economic in use. The sponsors of the language have declared unequivocally, however, that there shall be no subsets. This is the strangest paradox of the whole strange project. If you want a language with no subsets, you must make it small.

You include only those features which you know to be needed for every single application of the language and which you know to be appropriate for every single hardware configuration on which the language is implemented. Then extensions can be specially designed where necessary for particular hardware devices and for particular applications. That is the great strength of PASCAL, that there are so few unnecessary features and almost no need for subsets. That is why the language is strong enough to support specialized extensions — Concurrent PASCAL for real time work, PASCAL PLUS for discrete event simulation, UCSD PASCAL for microprocessor work stations. If only we could learn the right lessons from the successes of the past, we would not need to learn from our failures.

And so, the best of my advice to the originators and designers of ADA has been ignored. In this last resort, I appeal to you, representatives of the programming profession in the United States, and citizens concerned with the welfare and safety of your own country and of mankind: Do not allow this language in its present state to be used in applications where reliability is critical, i.e., nuclear power stations, cruise missiles, early warning systems, anti-ballistic missile defense systems. The next rocket to go astray as a result of a programming language error may not be an exploratory space rocket on a harmless trip to Venus: It may be a nuclear warhead exploding over one of our own cities. An unreliable programming language generating unreliable programs constitutes a far greater risk to our environment and to our society than unsafe cars, toxic pesticides, or accidents at nuclear power stations. Be vigilant to reduce that risk, not to increase it.

Let me not end on this somber note. To have our best advice ignored is the common fate of all who take on the role of consultant, ever since Cassandra pointed out the dangers of bringing a wooden horse within the walls of Troy. That reminds me of a story I used to hear in my childhood. As far as I recall, its title was:

The Emperor’s Old Clothes

Many years ago, there was an Emperor who was so excessively fond of clothes that he spent all his money on dress. He did not trouble himself with soldiers, attend banquets, or give judgement in court. Of any other king or emperor one might say, “He is sitting in council,” but it was always said of him, “The emperor is sitting in his wardrobe.” And so he was. On one unfortunate occasion, he had been tricked into going forth naked to his chagrin and the glee of his subjects. He resolved never to leave his throne, and to avoid nakedness, he ordered that each of his many new suits of clothes should be simply draped on top of the old.

Time passed away merrily in the large town that was his capital. Ministers and courtiers, weavers and tailors, visitors and subjects, seamstresses and embroiderers, went in and out of the throne room about their various tasks, and they all exclaimed, “How magnificent is the attire of our Emperor.”

One day the Emperor’s oldest and most faithful Minister heard tale of a most distinguished tailor who taught at an ancient institute of higher stitch-craft, and who had developed a new art of abstract embroidery using stitches so refined that no one could tell whether they were actually there at all. “These must indeed be splendid stitches,” thought the minister. “If we can but engage this tailor to advise us, we will bring the adornment of our Emperor to such heights of ostentation that all the world will acknowledge him as the greatest Emperor there has ever been.”

So the honest old Minister engaged the master tailor at vast expense. The tailor was brought to the throne room where he made obeisance to the heap of fine clothes which now completely covered the throne. All the courtiers waited eagerly for his advice. Imagine their astonishment when his advice was not to add sophistication and more intricate embroidery to that which already existed, but rather to remove layers of the finery, and strive for simplicity and elegance in place of extravagant elaboration. “This tailor is not the expert that he claims,” they muttered. “His wits have been addled by long contemplation in his ivory tower and he no longer understands the sartorial needs of a modern Emperor.” The tailor argued loud and long for the good sense of his advice but could not make himself heard. Finally, he accepted his fee and returned to his ivory tower.

Never to this very day has the full truth of this story been told: That one fine morning, when the Emperor felt hot and bored, he extricated himself carefully from under his mountain of clothes and is now living happily as a swineherd in another story. The tailor is canonized as the patron saint of all consultants, because in spite of the enormous fees that he extracted, he was never able to convince his clients of his dawning realization that their clothes have no Emperor.