I still think Postel's principle is absolute mischief. I don't understand how the "fifth error code" can be harmless, unless you have been handling all errors the same way. And even then, suppose the fifth error code is "your transmissions have been compromised". The sender's subsequent behavior, if defensive, must be appropriate for a receiver who doesn't understand the fifth code. So the fifth code is useless.

The paper seems to say that programming a protocol recognizer in a Turing-complete language is a bad idea. If the mathematical definition of the protocol is sound and the implementation is provable, the approach is fine. The trouble is that all too often liberality is added to the program, not to the specification. Babel follows.

The web is an awful reminder. I use 4 different browsers. Practically every day I encounter a page that displays badly on one or more of them. The reason is that the maker of the page was fooled by Postel's principle. The page worked on his browser, so he couldn't tell he hadn't been conservative. Of course he should have had the web page validated, but what the hell, he's a busy guy and if it looks like it works that's enough for him.

I absolutely agree with your retort to the idea that liberalism defeats formalism. The right way to be liberal is to augment the grammar to incorporate the liberality, as sketched at the end of this note. That is likely to be easy to do for the (necessarily predictable) "technical errors where the meaning is clear". It should help close loopholes for exploits, but it still doesn't suppress the "what-the-hell" attitude that liberal receivers enable.

"consequently, they are more sensitive to errors making such meaning ambiguous." I don't see any reason to believe this claim. I think there may be some imprecision here--errors may rob a message of its meaning or even change the meaning. I suspect that errors don't often make a message ambiguous. Errors, though, may well be "correctable" in more than one way, which may be the sense in which you meant to use the term "ambiguous". Perhaps a better formulation than "meaning is clear" is that the protocol be an error-correcting code for some well-defined set of errors. Though I think this is a proper formulation, it is also odd, because once the set is defined, it effectively becomes part of the protocol.

TRIVIA

"the progress of programming has been to produce ever more complex machine behaviors" glosses over the very important role of abstraction. Real progress happens mainly by identifying abstractions so one can think about behaviors at a higher level. The tragedy from a security standpoint is that the bad guys can attack at every level, which undoes the abstractions that enabled the design.

P.S. AN UNPUBLISHED LETTER TO THE EDITOR OF TECHNOLOGY REVIEW

This paragraph from Johnson's article about HTML5 (TR, Nov/Dec 2010) gets things exactly backward:

"[XHTML] was incredibly harsh in the way it handled mistakes by web programmers. Until then, the Web had been forgiving; it simply glossed over badly written code. The new system, however, mandated that any pages with malformed code return an error message. That seemed fine under lab conditions, but in practice even the most experienced Web designers had trouble writing perfectly formed XHTML code. Web pages were breaking without warning."

A serious mistake in early versions of the HTML standard was a recommendation that browsers try to work around errors. That recommendation was eventually withdrawn, but its echoes reverberate to this day.
Web pages nowadays "break without warning" right and left because the degree of toleration is inconsistent among browsers, and necessarily so, since the handling of bugs is unspecified. Web designers' bad code may work on the one or two browsers they have at hand, but it tanks in others. Most Web pages haven't even passed the syntax-validation tests, freely available at W3C, before being loosed on the world. "Harsh" treatment, with automatic feedback to offending websites, would go a long way toward cleaning up the mess.

Admittedly, many of the troubles one sees on web pages arise outside of HTML, e.g., in JavaScript. Indeed, when Web pages can do general computation, it's not generally possible for a browser to know whether what's being done is "right". But this is all the more reason for the code to be precisely interpreted. Tolerant interpretation of sloppy code is a recipe for chaos--and an invitation to malware.

Having praised the traditional sloppiness of browsers, Johnson goes on to say, "HTML5 will clean this content up" as a result of bringing extralinguistic capabilities into the fold. But that will only happen if HTML5 itself is strictly interpreted. Otherwise extralinguistic mayhem will still exist, and the blame will lie squarely on the language and its browsers.
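To make the "augment the grammar" point concrete, here is a minimal sketch in Python of liberality placed in the specification rather than in the program. The field syntax is hypothetical, invented for illustration: the tolerated deviations (optional whitespace around the colon, case-insensitive field names) are written into the grammar itself, so every conforming receiver accepts exactly the same language and anything outside it is rejected rather than guessed at.

```python
import re

# Hypothetical grammar, with the permitted leniencies spelled out:
#   field = name *WSP ":" *WSP value
# The regular expression is a direct transcription of that grammar.
FIELD = re.compile(r"^(?P<name>[A-Za-z][A-Za-z0-9-]*)[ \t]*:[ \t]*(?P<value>[^\r\n]*)$")

def parse_field(line: str) -> tuple[str, str]:
    """Accept exactly the language of FIELD; reject everything else."""
    m = FIELD.match(line)
    if m is None:
        raise ValueError(f"malformed field: {line!r}")  # no silent repair
    return m.group("name").lower(), m.group("value")

# Both the strict form and the documented lenient form parse identically;
# anything outside the written grammar is an error, not a guess.
print(parse_field("Content-Length: 42"))   # ('content-length', '42')
print(parse_field("content-length :42"))   # ('content-length', '42')
```

The point is not the particular pattern but where the leniency lives: because it is part of the written grammar, a strict validator and a seemingly liberal receiver built from the same specification cannot disagree about what is legal.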