Physics of Software, the Blind and the Elephant

In episode 28 of The Simpsons, Homer designs a car with all the features he likes.


"The Homer" has no conceptual integrity and is a total failure, but it is still a car.

There's a limit to how many isolated features can be added to a car before it stops looking and behaving like a car.

Software doesn't work like that.

The laws of physics also constrain what a car can be, there are limits to the shape and size of a car.

Shape and size aren't constraints in software.

Inside the solution space of viable cars there is a physical property that helps the design process: the entire car can be seen at once.

Seeing it all at once lets people find out whether it reflects one set of ideas or contains many good but independent, uncoordinated ones.

Software can't be seen all at once, neither by looking at its code nor by using it; that removes a limit that physical things have.

We can keep adding features into software without noticing any effect from the only perspective we have, a local perspective.

Software is like the fable of the blind and the elephant, at any given moment we are seeing a small slice of the whole, trying to figure out the global shape and how everything interacts and fits together.

The real world limits how much complexity we can add to a design before it "collapses under its own weight".

As software grows in complexity, analogs of friction and inertia increase.

Friction in software happens when a change produces an opposing force in the shape of bugs or broken code.

99 bugs in the issue tracker, 99 bugs. Take one down and squash it, 128 bugs in the issue tracker.

Inertia in software happens when a change requires a large amount of preliminary work, refactors, tests, waiting and bureaucracy.

A good measurement of inertia is how long it takes from the moment you have an idea to the moment you make the required changes and see it reflected in the running application.

Software collapses under its own weight when the amount of energy we put in is larger than the value we get out.

Constraints similar to those of the physical world, along with their units, measurement devices and operational limits, may help avoid the worst effects of complexity in software.

Being able to "see" software "all at once" may allow the comparison of different designs and finding out when a design starts losing conceptual integrity.

Or maybe this is just a bad analogy :)

Why did OpenDoc fail, and then fail 3 more times?

A recurring question that surfaces around the Future of Coding community is: what happened to OpenDoc? Why did it fail?

This post is a summary of reasons found around the web; then I explore other implementations similar to OpenDoc to see if there is a general pattern.

Bias warning: I pick the quotes and the emphasis, read the sources in full to form your own conclusion and let me know!


To start, here's a brief description of what OpenDoc was:

The OpenDoc concept was that developers could just write the one piece they were best at, then let end-users mix and match all of the little pieces of functionality together as they wished.

Let's find out the reasons:

OpenDoc post by Greg Maletic

A consortium, lots of money and the main driver being competing against Microsoft:

Hence was born OpenDoc, both the technology and the consortium, consisting primarily of Apple, IBM, and WordPerfect, all companies that didn’t like Microsoft very much. All poured loads of money into the initiative.

The hardware wasn't there:

The Copland team was wary of OpenDoc. I looked at those people as bad guys at the time, but in reality they were right to be afraid. It’s hard to remember now, but back in 1996 memory (as in RAM) was a big issue.

The average Mac had about 2 megabytes of memory. OpenDoc wouldn’t run on a machine with less than 4 megs, and realistically, 8 megs was probably what you wanted.

Second system effect?:

The OpenDoc human interface team had taken it upon themselves to correct the perceived flaws of the Mac as a modal, application-centric user experience, and instead adopted a document-centric model for OpenDoc apps.


It was a noble and interesting idea, but in retrospect it was a reach, not important to the real goals of OpenDoc, and it scared a lot of people including the developers we were trying to woo.

No "Business Model":

It didn’t create a new economy around tiny bits of application code, and the document-centric model was never allowed to bloom as we had hoped, to the point where it would differentiate the Mac user experience.

A solution looking for a problem:

There are lots of reasons for OpenDoc’s failure, but ultimately it comes down to the fundamental question of why Apple was developing this technology when no-one in the company really wanted it. The OS group had mixed feelings, but ultimately didn’t care. Most folks at Claris, Apple’s application group, didn’t want it at all, seeing it as an enabler for competition to Claris’s office suite product, ClarisWorks.

"Why didn't it catch on? 7 theories I've come across" by Geoffrey Litt

  1. UX quality. If every component is developed by an independent company, who is responsible for unifying it into a nice holistic experience rather than a mishmash?

  2. Modality. Different parts of your document operate with different behaviors.

  3. Performance. Huge memory overhead from the complexity

  4. Exporting. Hard to export a "whole document" into a different format if it's made of a bunch of different parts.

  5. Lack of broad utility. How common are "compound documents" really? Beyond the classic example of "word doc with images and videos embedded"

  6. Data format compatibility. If two components both edit spreadsheets but use different data formats, what do you do?

  7. Historical accident. Turf wars between Microsoft and everyone else, Steve Jobs ruthlessly prioritizing at Apple, execution failures, etc.

chris rabl's take inspired by Macinography 10: OpenDoc and Apple's Development Doldrums

  1. There wasn't a central distribution channel for "parts"

  2. Despite OpenDoc discouraging vendor lock-in for file formats, many vendors still insisted on locking down the file formats they used or making them incompatible with other vendors' products

  3. Lack of buy-in from the developer community: cumbersome C-based API and obtuse SDKs dissuaded developers from buying into the hype, plus it would have required them to re-think how their application worked in the first place (what would Photoshop look like as an OpenDoc app?)

  4. Very little dogfooding: even some of the big partners like IBM and Novell did a really bad job implementing OpenDoc in their product suites

Scott Holdaway Interview

Hard to adapt existing software:

OpenDoc made all these assumptions about how things were coded, which were kind of reasonable for an object-oriented program, which ClarisWorks was not. We told management that it just wasn't going to work trying to put OpenDoc support into ClarisWorks, especially not without doing a lot of rewriting first. And they said, well, too bad, do it anyway.

Bad Execution:

It was a good idea, it was certainly poorly done, but it still could have worked in a different codebase. They had like up to 100 people - engineers, testing, and support and all that - working on OpenDoc. That's too many people. Anytime you have that many people starting something like that, it's usually poorly done. They had too many compromises in their design, or just lack of foresight. Integrating it into ClarisWorks was certainly a huge problem. It was not made to be integrated into existing programs like, say, OLE was.

Oral History of Larry Tesler Part 3 of 3 (Transcript)

Big project, solution looking for a problem:

But they also had ActiveX. And that’s the thing, even a little more so than OLE, that OpenDoc was aiming it for. And it was a bad idea from the beginning. And I kind of sensed something was wrong but it wasn’t until we started playing it out and realizing how big a project it was and how little we had thought about the benefits and what you would actually do with it. And people tried to address it in various ways.

Doing it for the wrong reasons:

But we were doing it because Microsoft was doing it and we needed to have something better.

If we believe that it was all Steve Jobs' fault for stopping its development early, that the hardware wasn't there, or that the execution was bad, then if someone tried something similar again it should succeed, right?

I went looking for other attempts at the same idea of "[Document] Component based software", here are the main contenders and some reasons why they are not the way we do things right now.

They are not 100% document based, but since they are more focused and pragmatic, at least they should have succeeded as a better way to reuse components across applications.


ActiveX lived longer and was used in multiple places, so why was it deprecated?

ActiveX on Wikipedia


Even after simplification, users still required controls to implement about six core interfaces. In response to this complexity, Microsoft produced wizards, ATL base classes, macros and C++ language extensions to make it simpler to write controls.

Unified model, platform fragmentation:

Such controls, in practice, ran only on Windows, and separate controls were required for each supported platform

Security and lack of portability:

Critics of ActiveX were quick to point out security issues and lack of portability


The ActiveX security model relied almost entirely on identifying trusted component developers using a code signing technology


Identified code would then run inside the web browser with full permissions, meaning that any bug in the code was a potential security issue; this contrasts with the sandboxing already used in Java at the time.

A break from the past, part 2: Saying goodbye to ActiveX, VBScript, attachEvent…

HTML 5 replaces most use cases:

The need for ActiveX controls has been significantly reduced by HTML5-era capabilities, which also produces interoperable code across browsers.

The Open Source Alternatives: KParts & Bonobo

If the problem was the business model but the technology was really powerful, then it should not have been a problem for up-and-coming open source desktop environments with a user base consisting mostly of power users and developers who would benefit from code reuse.

The major desktop environments, Gnome and KDE, had/have something similar to OpenDoc.

An interesting fact is that Gnome used to be an acronym: GNU Network Object Model Environment; the Object Model idea was in the name itself.

I didn't notice until now that the name KParts seems to come from OpenDoc's Parts concept.

Usenix: KDE Application Integration Technologies:

A KPart is a dynamically loadable module which provides an embeddable document or control view including associated menu and toolbar actions. A broker returns KPart objects for certain data or service types to the requesting application. KParts are for example used for embedding an image viewer into the web browser or for embedding a spread sheet object into the word processor.

KParts on Wikipedia:

Example uses of KParts:

  • Konqueror uses the Okular part to display documents

  • Konqueror uses the Dragon Player part to play multimedia

  • Kontact embeds kdepim applications

  • Kate and other editors use the katepart editor component

  • Several applications use the Konsole KPart to embed a terminal

How will CORBA be used in GNOME, or, what is Bonobo?

Bonobo is a set of interfaces for providing application embedding and in-place activation which are being defined. The Bonobo interfaces and interactions are modeled after the OLE2 and OpenDoc interfaces.


Reusable controls: Another set of Bonobo interfaces deal with reusable controls. This is similar to Sun's JavaBeans and Microsoft Active-X.

Bonobo on Wikipedia:

Available components are:

  • Gnumeric spreadsheet

  • ggv PostScript viewer

  • Xpdf PDF viewer

  • gill SVG viewer

History of Gnome: Episode 1.3: Land of the bonobos:

Nautilus used Bonobo to componentise functionality outside the file management view, like the web rendering, or the audio playback and music album view.


This was the heyday of the component era of GNOME, and while its promises of shiny new functionality were attractive to both platform and application developers, the end result was by and large one of untapped potential. Componentisation requires a large initial effort in designing the architecture of an application, and it’s really hard to introduce after the fact without laying waste to working code.


As an initial roadblock it poorly fits with the “scratch your own itch” approach of free and open source software. Additionally it requires not just a higher level of discipline in the component design and engineering, it also depends on comprehensive and extensive documentation, something that has always been the Achille’s Heel of many an open source project.

History of Gnome: Episode 1.5: End of the road

Sun’s usability engineer Calum Benson presented the results of the first round of user testing on GNOME 1.4, and while the results were encouraging in some areas, they laid bare the limits of the current design approach of a mish-mash of components. If GNOME wanted to be usable by professionals, not curating the offering of the desktop was not a sustainable option any more.


The consistency, or lack thereof, of the environment was one of the issues that the testing immediately identified: identical functionality was labelled differently depending on the component; settings like the fonts to be used by the desktop were split across various places; duplication of functionality, mostly for the sake of duplication, was rampant. Case in point: the GNOME panel shipped with not one, not two, but five different clock applets—including, of course, the binary clock


Additionally, all those clocks, like all the panel applets, could be added and removed by sheer accident, clicking around on the desktop. The panel itself could simply go away, and there was no way to revert to a working state without nuking the settings for the user.

KParts seems to still be around; you can see a list of actively developed KParts on KDE's GitHub account. Bonobo was replaced by D-Bus, which is a message-oriented abstraction not related to OpenDoc concepts as far as I know.

What about Web Components?

From MDN: Web Components:

Web Components is a suite of different technologies allowing you to create reusable custom elements — with their functionality encapsulated away from the rest of your code — and utilize them in your web apps.

Web Components have been around for a while and developers seem to be split into two camps regarding how useful they are.

We will see with time if it becomes the main way to do web applications and a new attempt at OpenDoc, this time on the web.


Still not sure which were the main reasons, but many of them aren't shared by later attempts:

  • More RAM was available during the ActiveX days and is available today for KParts

  • KParts and Bonobo don't require a business model and benefit from reuse

    • Unlike OpenDoc/ActiveX there's no competition between Apple/Microsoft and developers

  • Execution on ActiveX/KParts didn't seem to be a problem

  • Lived long enough and learned from OpenDoc's failure

  • Weren't developed by a consortium

  • Were well integrated into the underlying environment

  • Had use cases from day one

  • No vendor lock-in

  • Some had good distribution channels (Package Managers)

  • At least KParts and WebComponents seem to have a good developer experience

The conclusion is that there's no conclusion, the case is still open, what do you think?

Instadeq Reading List: April 2021

Here is a list of content we found interesting this month.

If the list gets too long we will move to posting once a week.

💾 Computers and Creativity

I will be arguing that to foster optimal human innovation, digital creative tools need to be interoperable, moldable, efficient, and community-driven.


Acknowledging that computers themselves are not inherently creative should not come as a surprise. Instead, this truth identifies an opportunity for computers to more fully assume the role of co-creator — not idea-generator, but actualizer.


Intelligence Augmentation (IA), is all about empowering humans with tools that make them more capable and more intelligent, while Artificial Intelligence (AI) has been about removing humans fully from the loop


The lack of interoperability between creative tools means that all work created within a tool is confined to the limitations of that tool itself, posing a hindrance to collaboration and limiting creative possibility


Moving beyond building on top of existing software, we can begin to imagine what a piece of software could look like if it itself was moldable, or built to be modified by the user

🧠 Thought as a Technology

At first these elements seem strange. But as they become familiar, you internalize the elements of this world. Eventually, you become fluent, discovering powerful and surprising idioms, emergent patterns hidden within the interface. You begin to think with the interface, learning patterns of thought that would formerly have seemed strange, but which become second nature. The interface begins to disappear, becoming part of your consciousness


What makes an interface transformational is when it introduces new elements of cognition that enable new modes of thought. More concretely, such an interface makes it easy to have insights or make discoveries that were formerly difficult or impossible


Mathematicians often don't think about mathematical objects using the conventional representations found in books. Rather, they rely heavily on what we might call hidden representations


This contrasts with the approach in most computer reasoning systems. For instance, much work on doing mathematics by computer has focused on automating symbolic computation (e.g., Mathematica), or on finding rigorous mathematical proofs (e.g., Coq). In both cases, the focus is on correct mathematical reasoning. Yet in creative work the supply of rigorously correct proofs is merely the last (and often least interesting) stage of the process. The majority of the creative process is instead concerned with rapid exploration relying more on heuristics and rules of thumb than on rigorous proof. We may call this the logic of heuristic discovery. Developing such a logic is essential to building exploratory interfaces.

🌱 Personal Digital Habitats

A Personal Digital Habitat is a federated multi-device information environment within which a person routinely dwells. It is associated with a personal identity and encompasses all the digital artifacts (information, data, applications, etc.) that the person owns or routinely accesses. A PDH overlays all of a person’s devices and they will generally think about their digital artifacts in terms of common abstractions supported by the PDH rather than device- or silo-specific abstractions. But presentation and interaction techniques may vary to accommodate the physical characteristics of individual devices.


Personal Digital Habitat is an aspirational metaphor. Such metaphors have had an important role in the evolution of our computing systems. In the 1980s and 90s it was the metaphor of a virtual desktop with direct manipulation of icons corresponding to metaphorical digital artifacts that made personal computers usable by a majority of humanity.

🏛️ DynamicLand Narrative

The mission of the Dynamicland Foundation is to enable universal literacy in a humane computational medium.


Instead of isolating and disembodying, it must bring communities together in the same physical space, to teach and discuss ideas face-to-face, to build and explore ideas with their hands, to solve problems collectively and democratically. For universal literacy, and for a humane medium, these are requirements.


Dynamicland researchers are inventing a new form of computing which takes place in physical space, using ordinary physical materials such as paper, pens, cardboard, and clay. There are no screens and no apps. Instead, people craft computational activities together with their hands


Unlike isolated apps, these projects all exist in the same space, can all be interconnected with one another, and can all be used by many people together


Dynamicland is particularly interested in its potential for public spaces in which people seek to understand and communicate — libraries, museums, classrooms, arts spaces, town halls, courtrooms.


The Dynamicland researchers are not developing a product. The computers that the researchers build are models: not for distribution, but for studying and evaluating


This community space is an early model for a new kind of civic institution — a public library for 21st-century literacy. Dynamicland envisions a future where every town has a “dynamic library” with computational literature on every subject, where people gather to collectively author and explore this literature, using the medium to represent and debate their ideas using computational models. People will hold presentations and town-hall discussions on issues of importance to the community, using the medium to see facts, explore consequences of proposals, and make data-driven decisions collectively. The public benefit of transforming collective learning and civic engagement is potentially immeasurable.

🧑‍💼 Office, messaging and verbs

The way forward for productivity is probably not to take software applications and document models that were conceived and built in a non-networked age and put them into the cloud


It takes time, but sooner or later we stop replicating the old methods with the new tools and find new methods to fit the new tools


What kills that task is not better or cheaper (or worse and free) spreadsheet or presentation software, but a completely different way to address the same underlying need - a different mechanism


But it should be replaced by a SaaS dashboard with realtime data, alerts for unexpected changes and a chat channel or Slack integration. PowerPoint gets killed by things that aren't presentations at all


You don't actually send email or make a spreadsheet - you analyze, delegate, report, confer, decide, track and so on. Or, perhaps, 'what's going on, what are we doing and what should we be doing?

🧑‍🎨 Software designers, not engineers An interview from alternative universe

In my universe, we treat the creation of software as a design activity, putting it as a third item on the same level as science and art.


when we start a software project, we think of it in more general terms. A typical form of what you call specification in our world is a bit more like a design brief. A much more space is dedicated to the context in which the work is happening, the problem that you are trying to solve and constraints that you are facing, but design briefs say very little about any specific potential software solution to the problem.


When you're solving a problem, even as you get to a more technical level, you always keep in mind why you are solving it.


The problem really only becomes apparent as you try to solve it. So, the key thing is to quickly iterate. You come up with a new framing for the problem, sketch a solution based on your framing and see what it looks like. If it looks good to you, you show it to the customer, or you sketch a range of different prototypes.


Software sketches are very ambiguous. When you are sketching, you are omitting a lot of details and so the sketching tool will give you something that can behave in unexpected ways in certain situations.

This is actually quite valuable, because this ambiguity allows your ideas to evolve. You can often learn quite a lot about the problem from the things that are underspecified in your sketches.


Very often, the right problem framing makes it possible to see a problem in a new way

Our Brain Typically Overlooks This Brilliant Problem-Solving Strategy

People often limit their creativity by continually adding new features to a design rather than removing existing ones

🛠️ The state of internal tools in 2021

Developers spend more than 30% of their time building internal applications.

That number jumps to 45% for companies with 5000+ employees.


More than 57% of companies reported at least one full-time employee dedicated to internal tools.


77% of companies with 500+ employees have dedicated teams for building and maintaining internal apps


2 out of 3 developers default to building from scratch, as opposed to using a spreadsheet or a SaaS tool.

🕹️ Serious Play

Games remain the most underrated and underexplored medium of art ever conceived by humans


Video games are right now shaping the patterns that will define the next generation of Design. Many of the hot topics in tech today like artificial intelligence, augmented reality, and remote collaboration have been brewing in video games for decades

♾️ Systems design explains the world

What is systems design? It's the thing that will eventually kill your project if you do it wrong, but probably not right away. It's macroeconomics instead of microeconomics.


Most of all, systems design is invisible to people who don't know how to look for it


With systems design, the key insight might be a one-sentence explanation given at the right time to the right person, that affects the next 5 years of work


What makes the Innovator's Dilemma so beautiful, from a systems design point of view, is the "dilemma" part. The dilemma comes from the fact that all large companies are heavily optimized to discard ideas that aren't as profitable as their existing core business

Statecharts: A Visual Formalism For Complex Systems

This is a summary of the paper: Statecharts: A Visual Formalism For Complex Systems

statecharts = state-diagrams + orthogonality + depth + broadcast-communication

Statechart Example


A broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication.

Statecharts are thus compact and expressive (small diagrams can express complex behavior) as well as compositional and modular.

Statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible.

Statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach.

Statecharts constitute a visual formalism for describing states and transitions in a modular fashion, enabling clustering, orthogonality (i.e., concurrency) and refinement, and encouraging ‘zoom' capabilities for moving easily back and forth between levels of abstraction.

The kernel of the approach is the extension of conventional state diagrams by AND/OR decomposition of states together with inter-level transitions, and a broadcast mechanism for communication between concurrent components.
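As a rough sketch (component names and transition tables are invented for the example, not from the paper), the broadcast mechanism means an event emitted as the action of one component's transition is offered to all orthogonal components within the same step:

```python
# Sketch of broadcast communication between orthogonal components.
# Each transition table maps (state, event) -> (next_state, emitted_event_or_None);
# emitted events are queued and offered to every component until quiescence.

components = {
    "A": {("a1", "e"): ("a2", "f")},     # reacting to "e" emits "f"...
    "B": {("b1", "f"): ("b2", None)},    # ...which B senses in the same step
}

def step(config, event):
    """Process an external event, propagating broadcast events to all components."""
    queue = [event]
    while queue:
        ev = queue.pop(0)
        for name, table in components.items():
            hit = table.get((config[name], ev))
            if hit:
                config[name], emitted = hit
                if emitted:
                    queue.append(emitted)
    return config

step({"A": "a1", "B": "b1"}, "e")   # -> {"A": "a2", "B": "b2"}
```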

The graphics is actually based on a more general concept, the higraph, which combines notions from Euler circles, Venn diagrams and hypergraphs, and which seems to have a wide variety of applications.

An arrow will be labelled with an event (or an abbreviation of one) and optionally also with a parenthesized condition, Mealy-like outputs, or actions.

Statechart Example


The literature on software and systems engineering is almost unanimous in recognizing the existence of a major problem in the specification and design of large and complex reactive systems.

The problem is rooted in the difficulty of describing reactive behavior in ways that are clear and realistic, and at the same time formal and rigorous, sufficiently so to be amenable to detailed computerized simulation. The behavior of a reactive system is really the set of allowed sequences of input and output events, conditions, and actions, perhaps with some additional information such as timing constraints.

Statechart Example

Reactive Systems

A reactive system, in contrast with a transformational system, is characterized by being, to a large extent, event-driven, continuously having to react to external and internal stimuli.


Much of the literature also seems to be in agreement that states and events are a priori a rather natural medium for describing the dynamic behavior of a complex system.

A basic fragment of such a description is a state transition, which takes the general form “when event E occurs in state A, if condition C is true at the time, the system transfers to state B”

State diagrams are simply directed graphs, with nodes denoting states, and arrows (labelled with the triggering events and guarding conditions) denoting transitions.
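As an illustration, the basic fragment above ("when event E occurs in state A, if condition C is true, transfer to state B") can be sketched as a guarded transition table; the state names and guards here are made up for the example:

```python
# Transition table mapping (state, event) to a list of (guard, next_state)
# pairs. A transition fires only if its guarding condition holds.

transitions = {
    ("idle", "start"): [(lambda ctx: ctx["ready"], "running")],
    ("running", "stop"): [(lambda ctx: True, "idle")],
}

def step(state, event, ctx):
    """Return the next state, or stay put if no guarded transition fires."""
    for guard, next_state in transitions.get((state, event), []):
        if guard(ctx):
            return next_state
    return state

step("idle", "start", {"ready": True})   # -> "running"
step("idle", "start", {"ready": False})  # -> "idle" (guard is false)
```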


A good state/event approach should also cater naturally for more general and flexible statements, such as

  • (1) “in all airborne states, when yellow handle is pulled seat will be ejected”,

  • (2) “gearbox change of state is independent of braking system”,

  • (3) “when selection button is pressed enter selected mode”,

  • (4) “display-mode consists of time-display, date-display and stopwatch-display”.

Proposed Solutions

Clause (1) calls for the ability to cluster states into a superstate

(2) Introduces independence, or orthogonality

(3) Hints at the need for more general transitions than the single event-labelled arrow

(4) Captures the refinement of states.


State-levels: Clustering and refinement

Clustering states inside other states via exclusive-or (XOR) semantics

The outer state is an abstraction of the inner states

Enables Zoom in/out

A default state can be specified, analogous to start states of finite state automata
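A rough sketch of XOR clustering and default states, with state names invented for the example:

```python
# A superstate groups inner states; being in the superstate means being in
# exactly one of them (XOR semantics). Entering the superstate enters its
# default inner state, analogous to an automaton's start state.

superstates = {"airborne": {"climbing", "cruising", "descending"}}
default = {"airborne": "climbing"}

def enter(superstate):
    """Entering a superstate enters its default inner state."""
    return default[superstate]

def in_state(current, state):
    """True if `current` is `state` itself or one of its inner states,
    so a transition drawn on the superstate covers all inner states."""
    return current == state or current in superstates.get(state, set())

state = enter("airborne")     # -> "climbing"
in_state(state, "airborne")   # -> True
```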

Orthogonality: Independence and concurrency

Being in a state, the system must be in all of its AND components

The notation is the physical splitting of a box into components using dashed lines

The outer (AND) state is the orthogonal product of the inner states

Synchronization: a single event causes two simultaneous happenings

Independence: a transition is the same whether the system is in any given inner state

Orthogonality avoids exponential blow-up in the number of states, usual in classical finite-state automata or state diagrams

Formally, orthogonal product is a generalization of the usual product of automata, the difference being that the latter is usually required to be a disjoint product, whereas here some dependence between components can be introduced, by common events or "in G"-like conditions.
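The orthogonal product can be sketched as a configuration holding one state per component; the component names and events below are illustrative, echoing the gearbox/braking example:

```python
# AND (orthogonal) states: the configuration holds one state per component,
# so n components with k states each need n*k states rather than the k**n
# states of the flattened product.

components = {
    "gearbox": {("neutral", "shift"): "first", ("first", "shift"): "second"},
    "brakes":  {("released", "brake"): "applied", ("applied", "release"): "released"},
}

def step(config, event):
    """Offer the event to every component; each reacts independently."""
    return {
        name: table.get((config[name], event), config[name])
        for name, table in components.items()
    }

config = step({"gearbox": "neutral", "brakes": "released"}, "shift")
# -> {"gearbox": "first", "brakes": "released"} (braking system unaffected)
```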

Condition and selection entrances

Conditional entrance to an inner state according to guards (like an if/else statement)

Selection occurs when the state to be entered is determined in a simple one-one fashion by the value of a generic event (like a switch statement)
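A minimal sketch of the two entrance kinds; the guards and tables are invented for the example:

```python
# Condition entrance: the inner state is chosen by evaluating guards,
# like an if/elif chain.
def condition_entrance(ctx):
    if ctx["altitude"] > 0:
        return "airborne"
    return "on_ground"

# Selection entrance: the value carried by a generic event maps one-to-one
# to the state to enter, like a switch statement.
selection_table = {
    "time": "time-display",
    "date": "date-display",
    "stopwatch": "stopwatch-display",
}

def selection_entrance(value):
    return selection_table[value]

condition_entrance({"altitude": 1000})   # -> "airborne"
selection_entrance("date")               # -> "date-display"
```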

Delays and timeouts

The expression timeout(event, number) represents the event that occurs precisely when the specified number of time units have elapsed from the occurrence of the specified event.
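On a discrete clock, timeout(event, number) could be modeled as a derived event; this is only a sketch, not the paper's full semantics:

```python
# timeout(event, n) occurs exactly n time units after each occurrence of the
# triggering event. The history is a list of (tick, event) pairs.

def timeout_events(history, event, n):
    """Ticks at which timeout(event, n) occurs."""
    return [tick + n for tick, ev in history if ev == event]

timeout_events([(0, "beep"), (5, "beep")], "beep", 3)   # -> [3, 8]
```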


Laying out parts of the statechart not within but outside of their natural neighborhood.

This conventional notation for hierarchical description has the advantage of keeping the neighborhood small yet the parts of interest large.

Taking this to the extreme yields AND/OR trees, undermining the basic area-dominated graphical philosophy. However, adopting the idea sparingly can be desirable.

Actions and Activities

What 'pure' statecharts represent is the control part of the system, which is responsible for making the time-independent decisions that influence the system's entire behavior.

What is missing is the ability of statecharts to generate events and to change the value of conditions.

These can be expressed with the action notation that can be attached to the label of a transition.

Actions are split-second happenings, instantaneous occurrences that take ideally zero time.

Activities are durable: they take time, whereas actions are instantaneous. To enable statecharts to control activities too, two special actions are needed, one to start and one to stop an activity: start(X) and stop(X)
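
A sketch of how start(X) and stop(X) might be interpreted; the tiny machine below is illustrative, not the paper's formalism:

```javascript
// Transitions carry actions, and durable activities are controlled
// through the special actions start(X) and stop(X).
const running = new Set();

const perform = {
  start: (activity) => running.add(activity),
  stop: (activity) => running.delete(activity),
};

// A transition labeled  trigger / start(beeping)  becomes a record with
// an actions list; taking the transition executes its actions.
function takeTransition(machine, event) {
  const t = machine.transitions.find(
    (tr) => tr.from === machine.current && tr.event === event
  );
  if (!t) return;
  machine.current = t.to;
  for (const [kind, activity] of t.actions) perform[kind](activity);
}

const alarm = {
  current: 'idle',
  transitions: [
    { from: 'idle', event: 'trigger', to: 'alarming', actions: [['start', 'beeping']] },
    { from: 'alarming', event: 'reset', to: 'idle', actions: [['stop', 'beeping']] },
  ],
};

takeTransition(alarm, 'trigger'); // enters 'alarming', starts beeping
takeTransition(alarm, 'reset'); // back to 'idle', stops beeping
console.log(alarm.current, running.size); // idle 0
```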

Possible extensions to the formalism

Parameterized states

Sometimes different states have identical internal structure. Such situations are often best viewed as a single state with a parameter.

Overlapping states

Statecharts don't need to be entirely tree-like.

Overlapping states act as OR; they can be used economically to describe a variety of synchronization primitives and to reflect many natural situations in complex systems. Note that they cause semantic problems, especially when the overlapping involves orthogonal components.

Incorporating temporal logic

It is possible to specify ahead of time many kinds of global constraints in temporal logic, such as eventualities, absence of deadlock, and timing constraints.

Then attempt to verify that the statechart-based description satisfies the temporal logic clauses.

Recursive and probabilistic statecharts

Recursion can be supported by letting a state refer to a separate statechart by name.

Probabilistic statecharts can be done by allowing nondeterminism.

Practical experience and implementation

The statechart formalism was conceived and developed while the author was consulting for the Israel Aircraft Industries on the development of a complex state-of-the-art avionics system for an advanced airplane.

Projects like this demand bewildering levels of competence from many people of diverse backgrounds.

These people have to continuously discuss, specify, design, construct and modify parts of the system.

The languages and tools they use in doing their work, and for communicating ideas and decisions to others, have an effect on the quality, maintainability and expedition of the system that cannot be overestimated.

Things end users care about but programmers don't

(And we agree with them, but our tools make it really hard to provide)


  • Change color of things

  • Nice default color palette

  • Use my preferred color palette

  • Use this color from here

  • Import/export color themes

  • Some words or identifiers should always have a specific color

  • The same thing should have the same color everywhere

  • Good contrast

  • Automatic contrast of text if I pick a dark/light background

  • Automatically generate color for large number of items

    • All the colors should belong to the same palette

    • Don't generate colors that are hard to tell apart

    • Don't place similar colors close to each other

    • Don't use the background color on things


  • Apply basic formatting to text

  • Align text

  • Use the fonts I use in Office

  • WYSIWYG editor that behaves like Word

  • Number alignment

  • Number/date formatting to the right locale

  • Decimal formatting to the right/fixed number of decimal places

  • No weird .0000004 formatting anywhere

  • No .00 for integers

  • Emoji support


  • Dark theme

  • My Theme

  • Company Branding

  • Put the logo in places

  • My logo on the Login Page


  • Integrate with system accounts

  • Use accounts/permissions from Active Directory

  • Import from Excel/CSV

  • Import from email/email attachment

  • Export to Excel

  • Export to PDF/Image

  • Record a short video

    • As a GIF

  • Send as email

    • Send as email periodically

    • Send as PDF attachment on an email

  • Import/attach images

    • Crop before upload

    • Compress

    • Change Format

  • Use image as background but stretch the right way

  • Notifications on the app

    • On apps on my phone

    • By SMS

    • On our systems

    • On my mail


  • Good error handling

  • Good error descriptions

    • Translated error messages

  • Tell me what to do to solve an error

  • Tell me what this does before I click on it

  • Support touch gestures and mouse

  • Keyboard shortcuts

    • Customizable

  • Undo everywhere

  • Multiple undo

  • Recover deleted things

  • Ask before deleting

  • Copy and paste

  • Templates

  • Detailed and up to date guides in text with screenshots at each step and highlights

    • And in Video

    • Screenshots that stay up to date as the product evolves

    • That are in sync with the version I'm using

    • That adapt to my custom setup

  • Up to date and detailed documentation

  • Translated to my language

  • Sorting everywhere

    • Natural sort

    • Sort by multiple criteria

  • Filter everywhere

    • Fuzzy filtering

    • Case sensitive/insensitive filtering

    • Filter by multiple/complex criteria

  • Track what is used where and warn me when deleting

  • Optional cascade deletion

  • Native and simple date picker on every platform

  • Sorted lists/selects (by the label!)

    • Natural Sorting

  • Dropdowns with filtering but that behave like the native controls

  • Preview things

  • Consistent button ordering/labels

  • Consistent capitalization

  • Progress bars for slow/async operations

  • Responsive UI during slow operations

  • Disabling buttons during slow operations

  • Handling double clicks on things that should be clicked once

  • Clear indication of what can be clicked


  • Deploy on exotic configurations/platforms

  • Deploy on old/unsupported versions

  • Deploy on what we have

  • Deploy/Run without internet connection

  • Handle Excel, CSV, JSON, XML

    • Handle malformed versions of all of the above

  • Handle (guess) dates with no timezone

  • Handle ambiguous/changing date formats

  • Integrate with obscure/outdated software/format

  • It should work on my old android phone browser/IE 11

  • Unicode

    • Handle inputs with unknown and variable encodings


  • Easy to install

  • Easy to update

  • Easy to backup

  • Easy to recover

  • Works with the database and version we use

  • Can be mounted on a path that is not the root of the domain

Deprecating the first function, a compatibility strategy

NOTE: This post refers to an old version of instadeq that is no longer available.

It happened: a user reported that a function was behaving weirdly. She applied date to record to current date, and the month value in the resulting record was wrong: it was 10 instead of 11.

As a programmer I noticed it instantly: in the date to record implementation I had forgotten that in JavaScript Date.getMonth returns zero-indexed months, meaning January is 0 and December is 11.
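
The bug and its fix can be sketched like this (dateToRecordBuggy and dateToRecordFixed are hypothetical names standing in for instadeq's actual implementation):

```javascript
function dateToRecordBuggy(date) {
  return {
    year: date.getFullYear(),
    month: date.getMonth(), // BUG: getMonth() is zero-indexed (Jan = 0)
    day: date.getDate(),
  };
}

// Fixed version: add 1 so months run from 1 (January) to 12 (December).
function dateToRecordFixed(date) {
  return {
    year: date.getFullYear(),
    month: date.getMonth() + 1,
    day: date.getDate(),
  };
}

// Note the Date constructor's month argument is zero-indexed too:
const november = new Date(2017, 10, 15);
console.log(dateToRecordBuggy(november).month); // 10
console.log(dateToRecordFixed(november).month); // 11
```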

I wanted to fix the error, but many users out there may be relying on this function. How could I fix it without breaking existing visualizations?

My first quick idea was one I had used in the past for other kinds of migrations: the next time a visualization configuration is read, check if there's a call to the "old" version and, if so, change it to a call to the new, fixed one.

There's a problem with that solution: if the user noticed the problem and used the month field somewhere else in the logic, the change would break that logic by producing months from 2 to 13.

Building a program transformation to detect and fix that is really hard, and it might introduce weird changes that would surprise users.

That's why I decided to introduce a new version of the function alongside the current one. Both have the same representation in the UI; from now on only the new one can be selected, but the old one remains available for existing logic that uses it.

The only difference is that the old version now displays a warning sign; hovering over it explains that this version had a problem and that you should select the new version and update any month + 1 logic you may have.

Here's an example with the old and new functions one after the other; notice the month field in a and b on the left side.


On mouse hover:


This was done in a generic way so that other functions can be deprecated the same way in the future.

We will try hard to avoid having to use it.

Lost in Migration: Missing Gestures and a Proposal

The pains that remain are the missing freedoms

-- Liminar Manifesto

There's a tension between a clean, simple user interface that works on small screens and touch devices, and a discoverable, beginner-friendly one that provides hints, context and extra information to help new users discover an application's capabilities.

At instadeq, we try to provide hints, context, and extra information by [ab]using tooltips.

Almost every part that can be interacted with sets the correct cursor and provides a tooltip. To make sure the information is noticed by users, we display the tooltips at the bottom right corner of our application.

This avoids the "hover and wait" interaction needed to discover whether something has a tooltip, and the "I accidentally moved my mouse and the tooltip went away" problem.

All of these careful considerations go out the window when using an application on a touch device.

There's no hover support, no cursor change to hint that the user can interact with something, and because of this, no easy and standard way to display tooltips.

Many alternatives have been proposed and used, but even then the user has to discover them and learn an application-specific way of doing it.

Thinking and talking about this on the [1], I came up with an idea.

A new gesture to discover user interface components that want to provide more context about themselves.

For lack of a name let's call it the enquire gesture.

The idea behind the gesture is twofold:

  • To inquire about all the UI components that can provide more information about themselves

  • To inquire about a specific component's details directly

The gesture works with a shape we all know, the question mark: ?

The interaction goes as follows:

  • The user starts using an application, is on a new screen, and doesn't know what is possible on that screen

  • The user draws the top part of the question mark anywhere on the screen (the part without the dot)

  • The 'inquire-all' event is triggered

    • Here the application can highlight the components that have contextual information

  • The user, either after waiting for the components to highlight or by continuing the gesture, taps the question mark's dot on the component he/she wants to know more about

  • The ('inquire', target) event is triggered where target is a reference to the component that was tapped

  • The application can then display more information about that component the way it prefers (tooltip, dialog, popover, embedded UI section)
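
The two events in the interaction above could be wired up like this; the event bus, component data, and handlers are a hypothetical sketch, not a real gesture API:

```javascript
// Sketch of the enquire-gesture event flow. The 'inquire-all' and
// 'inquire' event names come from the proposal; everything else here
// is illustrative only.
class EnquireBus {
  constructor() { this.handlers = {}; }
  on(event, fn) {
    (this.handlers[event] = this.handlers[event] || []).push(fn);
  }
  emit(event, payload) {
    for (const fn of this.handlers[event] || []) fn(payload);
  }
}

const bus = new EnquireBus();
const highlighted = [];

// Components that can provide contextual information about themselves.
const components = [
  { id: 'save-button', info: 'Saves the current visualization' },
  { id: 'share-menu', info: 'Share as a link, PDF or image' },
];

// 'inquire-all' (the top stroke of the ?): highlight every inquirable.
bus.on('inquire-all', () => {
  for (const c of components) highlighted.push(c.id);
});

// ('inquire', target) (the dot of the ?): show the tapped component's
// details however the application prefers (tooltip, dialog, popover...).
bus.on('inquire', (target) => {
  console.log(`${target.id}: ${target.info}`);
});

// Simulate the gesture: the stroke, then the tap on a component.
bus.emit('inquire-all');
bus.emit('inquire', components[0]);
```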

This gesture not only provides a way to replace tooltips but also a way to "come back" to the tour that some applications implement.

The problem with the tour feature in some applications is that it happens right when you want to jump straight into the application to play with it, and/or don't have the time or willingness to go through a lot of information that you don't know when, or if, you are going to need.

But at the same time, you are afraid that if you skip the tour you won't be able to go back to it whenever you want.

But then, when you do go back, you hope there's a skip button so you can get to the part you are interested in.

As you may realize, I don't like many things about the tour feature :)

With the enquire gesture, you can ask the application to highlight the "tour locations" and jump straight to the ones you are interested in, in the order you want, whenever you want.

So, here's my proposal for a new gesture to replace tooltips and the tour feature in a standard way. Feedback and implementations are welcome.