
Japan's Fifth Generation Computer Systems: Success or Failure?

This post is a summary of content from papers covering the topic. It is mostly quotes from papers published in 1983, 1993, and 1997, with some light editing; references to the present and future depend on the paper they come from, but should be easy to place. See the Sources section at the end.


In 1981, the emergence of the government-industry project in Japan known as Fifth Generation Computer Systems (FGCS) was unexpected and dramatic.

The Ministry of International Trade and Industry (MITI) and some of its scientists at Electrotechnical Laboratory (ETL) planned a project of remarkable scope, projecting both technical daring and major impact upon the economy and society.

This project captured the imagination of the Japanese people (e.g. a book in Japanese by Junichiro Uemae recounting its birth was titled The Japanese Dream).

It also captured the attention of the governments and computer industries of the USA and Europe, who were already wary of Japanese takeovers of important industries.

A book by Feigenbaum and McCorduck, The Fifth Generation, was a widely-read manifestation of this concern.

The Japanese plan was grand but it was unrealistic, and was immediately seen to be so by the MITI planners and ETL scientists who took charge of the project.

A revised planning document was issued in May 1982 that set more realistic objectives for the Fifth Generation Project.


Previous Four Generations

  • First generation: ENIAC, invented in 1946, and others that used vacuum tubes.

  • Second generation: IBM 1401, introduced in 1959, and others that used transistors.

  • Third generation: IBM S/360, introduced in 1964, and others that used integrated circuits.

  • Fourth generation: IBM E Series, introduced in 1979, and others that used very large-scale integrated circuits (VLSI), which massively increased computational capacity but are still based on the Von Neumann architecture and require specific and precise commands to perform a task.

FGCS was conceived as a computer that can infer from an incomplete instruction, by making use of the knowledge it has accumulated in its database.

FGCS was based on an architecture distinct from that of the previous four generations of computers which had been invented by Von Neumann and commercially developed by IBM among others.

The Vision

  1. Increased intelligence and ease of use, so that computers will be better able to assist people: input and output using speech, graphics, images, and documents in everyday language, the ability to store knowledge and put it to practical use, and the ability to learn and reason.

  2. To lessen the burden of software development, so that a high-level requirements specification is sufficient for automatic processing and program verification becomes possible, increasing the reliability of software. The programming environment also has to be improved, while preserving the ability to use existing software assets.

  3. To improve overall functions and performance to meet social needs: the construction of light, compact, high-speed, large-capacity computers that can meet increased demands for diversification and adaptability, are highly reliable, and offer sophisticated functions.


The objective of this project is to realise new computer systems to meet the anticipated requirements of the 1990s.

Everybody will be using computers in daily life without thinking anything of it. To reach this objective, an environment will have to be created in which humans and computers communicate freely using multiple information media, such as speech, text, and graphics.

The functions of FGCSs may be roughly classified as follows:

  1. Problem-solving and inference

  2. Knowledge-base management

  3. Intelligent interface

The intelligent interface function will have to be capable of handling man/machine communication in natural language, speech, graphics, and images, so that information can be exchanged in ways natural to humans.

There will also be research into and development of dedicated hardware processors and high-performance interface equipment for efficiently processing speech, graphics, and image data.

Several basic application systems will be developed with the intention of demonstrating the usefulness of the FGCS and supporting system evaluation. These are machine translation systems, consultation systems, intelligent programming systems, and an intelligent VLSI-CAD system.

The key technologies for the Fifth Generation Computer System seem to be:

  • VLSI architecture

  • Parallel processing such as data flow control

  • Logic programming

  • Knowledge base based on relational database

  • Applied artificial intelligence and pattern processing

Project Requirements

  1. Realisation of basic mechanisms for inference, association, and learning in hardware, making them the core functions of the Fifth Generation computers.

  2. Preparation of basic artificial intelligence software in order to fully utilise the above functions.

  3. Advantageous use of pattern recognition and artificial intelligence research achievements, in order to realise man/machine interfaces that are natural to man.

  4. Realisation of support systems for resolving the 'software crisis' and enhancing software production.

It will be necessary to develop high performance inference machines capable of serving as core processors that use rules and assertions to process knowledge information.

Existing artificial intelligence technology has been developed primarily on top of LISP. However, it seems more appropriate to employ a Prolog-like logic programming language as the interface between software and hardware, for the following reasons: the introduction of VLSI technology makes it possible to implement high-level functions in hardware; parallel processing will require the adoption of new languages suited to it; and such languages will have to have a strong affinity with relational data models.

Research and development will be conducted for a parallel processing hardware architecture intended for parallel processing of new knowledge bases, and which is based on a relational database machine that includes a high-performance hierarchical memory system, and a mechanism for parallel relational operations and knowledge operations.

The knowledge base system is expected to be implemented on a relational database machine which has some knowledge base facilities in the Fifth Generation Computer System, because the relational data model has a strong affinity with logic programming.

Relational calculus is closely related to first-order predicate logic, and relational algebra has the same expressive power as relational calculus for describing queries. These are the reasons for considering a relational algebra machine as the prime candidate for a knowledge base machine.
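To make this affinity concrete (the example and all names below are mine, not from the papers), a logic-programming rule reads directly as a join-then-project over a relation:

```python
# Illustrative sketch, not ICOT code: the rule
#   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# corresponds to the relational-algebra query
#   project_{X,Z}(parent JOIN parent) joined on the shared variable Y.
parent = {("ann", "bob"), ("bob", "cid"), ("bob", "dee")}

grandparent = {(x, z)
               for (x, y1) in parent   # first body literal: parent(X, Y)
               for (y2, z) in parent   # second body literal: parent(Y, Z)
               if y1 == y2}            # shared variable Y = the join condition

print(sorted(grandparent))  # [('ann', 'cid'), ('ann', 'dee')]
```

Each body literal names a relation, and shared logic variables become equality conditions in the join; this is the correspondence that made a relational algebra machine attractive as a knowledge base engine.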


There is no precedent for this innovative and large-scale research and development anywhere in the world. We will therefore be obliged to move toward the target systems through a lengthy process of trial and error, producing many original ideas along the way.

Timeline / Plan

(1982-1984) Initial Stage

During the initial stage, research was conducted on the basic technologies for FGCS. The technologies developed included:

  1. ESP (extended self-contained Prolog), a sequential logic-programming language based on Prolog.

  2. PSI (personal sequential inference machine), the world's first sequential inference computer to incorporate a hardware inference engine.

  3. SIMPOS (sequential inference machine programming and operating system), the world's first logic-programming-language-based operating system written with ESP for the PSI.

  4. GHC (guarded horn clauses), a new parallel-logic language for the implementation of parallel inference.
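The distinguishing feature of GHC is committed choice: each clause carries a guard, and once some clause's guard succeeds the computation commits to that clause, with no backtracking into the alternatives. A rough Python caricature of that control regime (the function and the clause encoding are mine, purely illustrative):

```python
# Hedged sketch of committed-choice clause selection, GHC-style.
# Each clause is (guard, body); the first clause whose guard holds
# is committed to, and the remaining clauses are discarded.
def ghc_max(x, y):
    clauses = [
        (lambda: x >= y, lambda: x),  # max(X,Y,Z) :- X >= Y | Z = X.
        (lambda: y >= x, lambda: y),  # max(X,Y,Z) :- Y >= X | Z = Y.
    ]
    for guard, body in clauses:
        if guard():
            return body()  # commit: no backtracking into other clauses
    raise ValueError("no clause applies")

print(ghc_max(3, 7))  # 7
```

Giving up backtracking is what makes this family of languages amenable to parallel implementation: once committed, a goal can run independently of its siblings.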

(1985-1988) Intermediate Stage

During the intermediate stage, research was done on the algorithms needed for implementation of the subsystems that would form the basis of FGCS and on the basic architecture of the new computer.

Furthermore, on the basis of this research, small and medium-sized subsystems were developed. The technologies developed included:

  1. KL1, a logic language for parallel inference.

  2. PIMOS (parallel inference machine operating system), a parallel-machine operating system based on the use of KL1.

  3. KAPPA (knowledge application oriented advanced database and knowledge base management system), a knowledge-base management system capable of handling large amounts of complex knowledge.

  4. MultiPSI, an experimental parallel inference machine consisting of 64 element processors linked together in the form of a two-dimensional lattice.

(1989-1992) Final Stage

During the final stage, the objective was to put together a prototype fifth generation computer based on the technologies developed during the two preceding stages. The project team developed a number of additional features, including:

  1. PIM (parallel inference machine), a parallel inference computer consisting of 1000 linked element processors.

  2. Improvement of PIMOS.

  3. KAPPA-p, a parallel data-management system.

For the knowledge programming system, the team also developed:

  4. Interactive interface technology.

  5. Problem-solving programming technology.

  6. Knowledge-base creation technology.

To test the prototype system, the team also carried out research into the integration and application of parallel programming technology, and several application software programs were developed to run on the PIM.

(1993-1994) Wrap Up

The project continued on a more limited scale during 1993 and 1994.

In addition to follow-up research, such as a new KL1 programming environment (called KL1C) for sequential and parallel UNIX-based machines, many efforts were made to disseminate FGCS technologies, for instance by distributing free ICOT software and publishing technical data on the Internet.

Why not a Generation Evolution?

For computers to be employed at numerous application levels in the 1990s, they must evolve from machines centered around numerical computations to machines that can assess the meaning of information and understand the problems to be solved.

Non-numeric data such as text, speech, graphics, and images will be used in tremendous volume compared to numeric data.

Computers are expected to deal mainly with non-numeric data in future applications. However, present computers are far less capable at non-numeric data processing than at numeric data processing.

The key factors that make it necessary to rethink the conventional computer design philosophy include the following:

  1. Device speeds are approaching the limit imposed by the speed of light.

  2. The emergence of VLSI reduces hardware costs substantially, and an environment permitting the use of as much hardware as is required will shortly be feasible.

  3. To take advantage of the effect of VLSI mass production, it will be necessary to pursue parallel processing.

  4. Current computers have extremely poor performance in basic functions for processing speech, text, graphics, images, and other non-numeric data, and for artificial-intelligence-style processing such as inference, association, and learning.

The research and development targets of the FGCS are such core functions of knowledge information processing as problem-solving and inference systems and knowledge-base systems that cannot be handled within the framework of conventional computer systems.


With the Fourth Conference on Fifth Generation Computer Systems, held June 1-5, 1992 in Tokyo, Japan, an era came to an end.

This section quotes different people analyzing the results; it won't be fully consistent.

Since then, ten years have passed, in which ICOT grew to about 100 researchers and spent about 54 billion yen, some 450 million US dollars. In these ten years a large variety of machines have been built, ranging from the special-purpose PSI machine, a personal sequential inference machine, to several configurations of processors and memory, with 16 to 512 processing elements, which together form the PIM family, the Parallel Inference Machine.


Some people overreacted and even spoke of a technological war. Today some people again overreact: seeing that their fears have not materialized, they regard the project as a failure.



  • ✅ Hardware: use of parallelism

  • ✅ Software: use of logic programming

  • ✅ Applications

    • ❌ No natural language, no pattern recognition

  • ❌ Break-through in architecture

  • ❌ Break-through in software


  • ✅ Impact on Japanese researchers

  • ❌ Impact on Japanese hardware makers


  • ✅ International scientific reputation

    • ❌ But no solution to social problems in Japan


  • ICOT has shown the ability of Japan to innovate in computer architectures.

  • The ICOT architectures' peak parallel performance is within the range of the original performance goals.

  • The PIMs represent an opportunity to study tradeoffs in parallel symbolic computing which does not exist elsewhere.

  • KL1 is an interesting model for parallel symbolic computation, but one which is unlikely to capture the imagination of US researchers.

  • PIMOS has interesting ideas on control of distribution and communication which US researchers should evaluate seriously.

  • ICOT has been one of the few research centers pursuing parallel symbolic computations.

  • ICOT has been the only center with a sustained effort in this area.

  • ICOT has shown significant (i.e. nearly linear) acceleration of non-regular computations (i.e. those not suitable for data parallelism or vectorized pipelining).

  • ICOT created a positive aura for AI, Knowledge Based Systems, and innovative computer architectures. Some of the best young researchers have entered these fields because of the existence of ICOT.


  • ICOT has done little to advance the state of knowledge based systems, or Artificial Intelligence per se.

  • ICOT's goals in the area of natural language were either dropped or spun out to EDR.

  • Other areas of advanced man machine interfacing were dropped.

  • Research on very large knowledge bases was substantially dropped.

  • ICOT's efforts have had little to do with commercial application of AI technology. Choice of language was critical.

  • ICOT's architectures have been commercial failures: they required both a switch in programming model and the purchase of cost-ineffective hardware.

  • ICOT hardware has lagged behind US hardware innovation (e.g. the MIT Lisp Machine and its descendants and the MIT Connection Machine and its descendants).

  • Application systems of the scale described in the original goals have not been developed (yet).

  • Very little work on knowledge acquisition.

The early documents discuss the management of very large knowledge bases, of large scale natural language understanding and image understanding with a strong emphasis on knowledge acquisition and learning. Each of these directions seems to have been either dropped, relegated to secondary status, absorbed into the work on parallelism or transferred to other research initiatives.

The ICOT work has tended to be a world closed in upon itself. In both the sequential and parallel phases of their research, there has been a new language developed which is only available on the ICOT hardware. Furthermore, the ICOT hardware has been experimental and not cost effective. This has prevented the ICOT technology from having any impact on or enrichment from the practical work.


It is remarkable how little attention is given to the notion of parallel processing, while this notion turned out to be of such great importance for the whole project.

First, in my opinion, the original goal of the FGCS project shifted its emphasis from what has been described above, primarily a knowledge information processing system (KIPS) with very strong capabilities in man-machine interaction such as natural language processing, to the following:

A computer system which is:

  • Easy to use intellectually

  • Fast in solving complex problems

In combining the two ideals:

  • Efficient for the mind

  • Efficient for the machine

The intellectual process of translating a problem into the solution of that problem should be simple. By exploiting sophisticated (parallel processing) techniques the computer should be fast.

Research Impact

Japan has indeed proved that it has the vision to take a lead for the rest of the world.

They acted wisely and offered the results to the international public for free use, thus acting as a leader to the benefit of mankind and not only for its own self-interest.

One of the major results and successes of the FGCS project is its effect on the infrastructure of Japanese research and development in information technology.

The technical achievements of ICOT are impressive. Given the novelty of the approaches, the lack of background, and the difficulties to be solved, the amount of work done that has delivered something of interest is simply amazing; this is true in hardware as well as in software.

The fulfillment of the vision, should I say working on the "grand plan" and bringing benefits to the society, is definitely not at the level that some people anticipated when the project was launched. This is not, to me, a surprise at all, i.e. I have never believed that very significant parts of this grand plan could be successfully tackled.

Overall, the project has had a major scientific impact, in furthering knowledge throughout the world of how to build advanced computing systems.

I agree that the international impact of the project was not as large as one hoped for in the beginning. I think all of us who believed in the direction taken by the project, i.e. developing integrated parallel computer systems based on logic programming, hoped that by the end of the 10-year period the superiority of the logic programming approach would be demonstrated beyond doubt, and that commercial applications of this technology would be well on their way. Unfortunately, this has not been the case. Although ICOT has reached its technological goals, the applications it has developed were sufficient to demonstrate the practicality of the approach, but not its conclusive superiority.

Lessons Learned

  1. Be aware that government-supported industrial consortia may not be able to 'read the market', particularly over the long term.

  2. Do not confuse basic research and advanced development.

  3. Expect negative results but hope for positive. Mid-course corrections are a good thing.

  4. Have vision. The vision is critical: people need a big dream to make it worthwhile to get up in the morning.

Logic Programming

It certainly provided a tremendous boost to research in logic programming.

I was expecting however to see 'actual use' of some of the technology at the end of the project. There are three ways in which this could have happened.

The first way would have been to have real-world applications, in user terms; only a little of that can be seen at this stage, even though the efforts to develop demonstrations are not to be underestimated.

The second would have been to benefit computer systems themselves. This does not appear to be directly happening, at least not now, and this is disappointing, if only because the Japanese manufacturers have been involved in the FGCS project, at least as providers of human resources and as subcontractors.

The third way would have been to impact computer science outside of the direct field in which this research takes place: for example to impact AI, to impact software engineering, etc.; not a lot can yet be seen, but there are promising signs.

I am genuinely impressed by the scientific achievements of this remarkable project. For the first time in our field, there is a uniform approach to both hardware and software design through a single language, viz. KL1.

It is nearly unbelievable how much software was produced in about two and a half years written directly or indirectly in KL1.

There are at least three aspects to what has been achieved in KL1:

First the language itself is an interesting parallel programming language. KL1 bridges the abstraction gap between parallel hardware and knowledge based application programs. Also it is a language designed to support symbolic (as opposed to strictly numeric) parallel processing. It is an extended logic programming language which includes features needed for realistic programming (such as arrays).

However, it should also be pointed out that like many other logic programming languages, KL1 will seem awkward to some and impoverished to others.

Second is the development of a body of optimization technology for such languages. Efficient implementation of a language such as KL1 required a whole new body of compiler optimization technology.

The third achievement is noticing where hardware can play a significant role in supporting the language implementation.

By Companies

The main companies involved in the project were Fujitsu, Hitachi, Mitsubishi Electric, NEC, Oki, Toshiba, Matsushita Electric Industrial, and Sharp.

Almost all companies we interviewed said that ICOT's work had little direct relevance to them.

The reasons most frequently cited were the high cost of the ICOT hardware, the choice of Prolog as a language, and the concentration on parallelism.

However, nearly as often our hosts cited the indirect effect of ICOT: the establishment of a national project with a focus on 'fifth generation technology' had attracted a great deal of attention for Artificial Intelligence and knowledge based technology.

Several sites commented on the fact that this had attracted better people into the field and lent an aura of respectability to what had been previously regarded as esoteric.


During the first 3 year phase of the project, the Personal Sequential Inference machine (PSI 1) was built and a reasonably rich programming environment was developed for it.

Like the MIT machine, PSI was a microprogrammed processor designed to support a symbolic processing language. The symbolic processing language played the role of a broad-spectrum 'kernel language' for the machine, spanning the range from low-level operating system details up to application software. The hardware and its microcode were designed to execute the kernel language with high efficiency. The machine was a reasonably high-performance workstation with good graphics, networking, and a sophisticated programming environment. What made PSI different was the choice of language family: unlike more conventional machines, which are oriented toward numeric processing, or the MIT machine, which was oriented toward LISP, the language chosen for PSI was Prolog.

The choice of a logic programming framework for the kernel language was a radical one since there had been essentially no experience anywhere with using logic programming as a framework for the implementation of core system functions.

Several hundred PSI machines were built and installed at ICOT and collaborating facilities; and the machine was also sold commercially. However, even compared to specialized Lisp hardware in the US, the PSI machines were impractically expensive. The PSI (and other ICOT) machines had many features whose purpose was to support experimentation and whose cost/benefit tradeoff had not been evaluated as part of the design; the machines were inherently non-commercial.

The second 3 year phase saw the development of the PSI 2 machine which provided a significant speedup over PSI 1. Towards the end of Phase 2 a parallel machine (the Multi-PSI) was constructed to allow experimentation with the FGHC paradigm. This consisted of an 8 × 8 mesh of PSI 2 processors, running the ICOT Flat Guarded Horn Clause language KL1.

The abstract model of all PIMs consists of a loosely coupled network connecting clusters of tightly coupled processors. Each cluster is, in effect, a shared memory multiprocessor; the processors in the cluster share a memory bus and implement a cache coherency protocol. Three of the PIMs are massively parallel machines.

Multi-PSI is a medium-scale machine built by connecting 64 PSIs in a mesh architecture.

Even granting that special architectural features of the PIM processor chips may lead to a significant speedup (say a factor of 3 to be very generous), these chips are disappointing compared to the commercial state of the art.

Specialized Hardware

Another most important issue, of a completely different nature, is whether ICOT was wise to concentrate so much effort on building specialised hardware for logic programming, as opposed to building, or using off the shelf, more general-purpose hardware not targeted at any particular language or programming paradigm. The problem with designing specialised experimental hardware is that any performance advantage gained is likely to be rapidly overtaken by the ever-continuing rapid advance of commercially available machines, both sequential and parallel. ICOT's PSI machines are now equalled, if not bettered, for Prolog and CCL performance by advanced RISC processors.

Many are skeptical about the need for special purpose processors and language dedicated machines. The LISP machines failed because LISP was as fast, or nearly as fast, implemented via a good compiler on a general purpose machine. The PSI machines surely do not have a market because the latest Prolog compilers, compiling down to RISC instructions and using abstract interpretation to help optimize the code, deliver comparable performance.

It is interesting to compare the PIMs to Thinking Machines Inc.'s CM-5; this is a massively parallel machine which is a descendant of the MIT Connection Machine project. The CM-5 is the third commercial machine in this line of development.

Although the Connection Machine project and ICOT started at about the same time, the CM-5 is commercially available and has found a market within which it is cost effective.

Demo Applications

I think that this was the result of the applications being developed in an artificial set-up. I believe applications should be developed by people who need them, and in the context where they are needed.

In general, I believe that too little emphasis was placed on building the best versions of applications on the machines (as opposed to demonstration versions).

In a nutshell, the following has been achieved: for a number of complicated applications in quite diverse areas, ranging from molecular biology to law, it has been shown that it is indeed possible to exploit the techniques of (adapted) logic programming (LP) to formulate the problems and to use the FGCS machines to solve them in a scalable way; that is, parallelism could indeed profitably be used.

The demonstrations involved:

  1. A Diagnostic and control expert system based on a plant model

  2. Experimental adaptive model-based diagnostic system

  3. Case-based circuit design support system

  4. Co-HLEX: Experimental parallel hierarchical recursive layout system

  5. Parallel cell placement experimental system

  6. High level synthesis system

  7. Co-LODEX: A cooperative logic design expert system

  8. Parallel LSI router

  9. Parallel logic simulator

  10. Protein sequence analysis program

  11. Model generation theorem prover: MGTP

  12. Parallel database management system: Kappa-P

  13. Knowledge representation language: QUIXOTE

  14. A parallel legal reasoning system: HELIC-II

  15. Experimental motif extraction system

  16. MENDELS ZONE: A concurrent program development system

  17. Parallel constraint logic programming system: GDCC

  18. Experimental system for argument text generation: Dulcinea

  19. A parallel cooperative natural language processing system: Laputa

  20. An experimental discourse structure analyzer

They have some real success on a technical level, but haven't produced applications that will make a difference (on the world market).

Programming Languages

Two extended Prolog-like languages (ESP and KL0) were developed for PSI-1. ESP (Extended Self Contained Prolog) included a variety of features such as coroutining constructs, non-local cuts, etc. necessary to support system programming tasks as well as more advanced Logic Programming. SIMPOS, the operating system for the PSI machines, was written in ESP.

Phase 3 has been centered around the refinement of the KL1 model and the development of massively parallel hardware systems to execute it.

KL1 had been refined into a three level language:

  • KL1-b is the machine-level language underlying the other layers

  • KL1-c is the core language used to write most software; it extends the basic FGHC paradigm with a variety of useful features such as a macro language

  • KL1-p includes the 'pragmas' for controlling the implementation of the parallelism

Much of the current software is written in higher level languages embedded in KL1, particularly languages which establish an object orientation. Two such languages have been designed: A'UM and AYA. Objects are modeled as processes communicating with one another through message streams.
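The "object as process" idea can be caricatured in a few lines of Python (this sketch and its names are mine; real A'UM/AYA objects are concurrent KL1 processes, not sequential loops): an object is a process that folds its internal state over an incoming message stream, emitting replies on an answer stream.

```python
# Hedged sketch of "object = process over a message stream".
def counter(messages):
    """A counter 'object': its state evolves as the message stream is consumed."""
    state, replies = 0, []
    for msg in messages:
        if msg == "inc":
            state += 1
        elif msg == "read":
            replies.append(state)  # reply sent back on the answer stream
    return replies

print(counter(["inc", "inc", "read", "inc", "read"]))  # [2, 3]
```

In the concurrent setting the message list is an unbounded stream shared between producer and consumer, so many such "objects" run and exchange messages in parallel.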

Two constraint logic programming languages were also developed at ICOT. The first, CAL (Constraint Avec Logique), is a sequential constraint logic programming language that includes algebraic, Boolean, set, and linear constraint solvers.

A second language, GDCC (Guarded Definite Clauses with Constraints) is a parallel constraint logic programming language with algebraic, Boolean, linear and integer parallel constraint solvers.

Prolog vs LISP

Achieving such revolutionary goals would seem to require revolutionary techniques. Conventional programming languages, particularly those common in the late 1970s and early 1980s offered little leverage.

The requirements clearly suggested the use of a rich, symbolic programming language capable of supporting a broad spectrum of programming styles.

Two candidates existed: LISP which was the mainstream language of the US Artificial Intelligence community and Prolog which had a dedicated following in Europe.

LISP had been used extensively as a systems programming language and had a tradition of carrying with it a featureful programming environment; it also had already become a large and somewhat messy system. Prolog, in contrast, was small and clean, but lacked any experience as an implementation language for operating systems or programming environments.


Multi-PSI supported the development of the ICOT parallel operating system (PIMOS) and some initial small scale parallel application development. PIMOS is a parallel operating system written in KL1; it provides parallel garbage collection algorithms, algorithms to control task distribution and communication, a parallel file system, etc.


Interest in AI (artificial intelligence) boomed around that time, and companies started to realize the potential value of FGCS research as a complement to their own AI research.


In the area of databases, ICOT has developed a parallel database system called Kappa-P. This is a 'nested relational' database system based on an earlier ICOT system called Kappa. Kappa-P is a parallel version of Kappa, re-implemented in KL1.

It also adopts a distributed database framework to take advantage of the ability of the PIM machines to attach disk drives to many of the processing elements. Quixote is a Knowledge Representation language built on top of Kappa-P.

It is a constraint logic programming language with object-orientation features such as object identity, complex objects described by the decomposition into attributes and values, encapsulation, type hierarchy and methods. ICOT also describes Quixote as a Deductive Object Oriented Database (DOOD).
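The "nested relational" model mentioned above allows attribute values that are themselves sets or records, rather than forcing everything into flat first-normal-form rows. A minimal sketch of the idea (the data and names here are invented for illustration):

```python
# Invented example data: the same information in flat (1NF) and nested form.
flat = [("motif1", "seqA"), ("motif1", "seqB"), ("motif2", "seqA")]

# Nested form: one tuple per motif, with a set-valued attribute.
nested = {}
for motif, seq in flat:
    nested.setdefault(motif, set()).add(seq)

# Unnesting recovers the flat relation, so no information is lost.
unnested = sorted((m, s) for m, seqs in nested.items() for s in seqs)
assert unnested == sorted(flat)
print(sorted(nested["motif1"]))  # ['seqA', 'seqB']
```

Storing the nested form directly is what lets a system like Kappa-P keep complex objects in one place instead of reassembling them from many flat rows at query time.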

Theorem Proving

Another area explored is that of Automatic Theorem Proving. ICOT has developed a parallel theorem prover called MGTP (Model Generation Theorem Prover).

Fun Trivia

The one commercial use we saw of the PSI machines was at Japan Air Lines, where the PSI-II machines were employed; ironically, they were remicrocoded as Lisp Machines.



This section has no quotes from sources; these are my observations:

I find that for everything you can point to as a reason for its "failure", there's an alternative universe where you could point to the same things as reasons for its success.

For example:

  • Starting from scratch

  • Thinking from first principles

  • Radical changes to the status quo

  • Single focus

  • Vertical integration

  • Risky bet on nascent technologies

  • Specialized Hardware

The only clearly negative reason is that the demo applications were not developed around real use cases or with real users involved, which may have made it harder to show the value of FGCS to end users and industrial partners.

It would be easy to mention Worse is Better, but if it keeps coming up as a reason, maybe we should pay more attention to it?

My conclusion right now is that technically they achieved many of the things they planned; they succeeded at going where they thought the puck was going to be in the 90s, but it ended up somewhere else.

What's your conclusion?

Past Futures of Programming: General Magic's Telescript


Telescript was a programming language developed by General Magic in the nineties that allowed the first generation of mobile devices to interact with services in a network.

This sounds similar to the way smartphones work today, but the paradigm that Telescript supported, called "Remote Programming" as opposed to Remote Procedure Calling, is really different from the way we build services and mobile applications today.

For this reason, and because as far as I know there's not much knowledge about the language and the paradigm online, I decided to write a summary after reading all the content I could find. All resources are linked at the end of the article.

If you haven't heard of General Magic I highly recommend watching the documentary; here's the trailer:

In case you prefer content in video form, the following may give you an idea.

For an overview video and the earliest mention of the Cloud I can think of see:

An introduction by Andy Hertzfeld at 21:20 in this video:

Another (longer) talk by Andy at Stanford two weeks after the one above, mostly focused on Magic Cap but mentioning Telescript around 1:06:38:

From now on most of the text is quoted from the resources linked at the end. Since my personal notes are few, I will mark only my comments; they will look like this:

Hi, this is a comment from the author and not a quote from Telescript resources.

Since each resource attempts to be self-contained, there's a lot of content that is repeated with some variation.

I slightly edited the quoted text to avoid repetition. Emphasis is mine.

The Pitch

The convergence of computers and communication, and advances in graphical user interfaces are placing powerful new electronics products in the hands of consumers.

In principle, such products can put people in closer touch with one another -- for example, by means of electronic postcards; simplify their relationships by helping them make and keep appointments; provide them with useful information such as television schedules, traffic conditions, restaurant menus, and stock market results; and help them carry out financial transactions, from booking theater tickets, to ordering flowers, to buying and selling stock.

Unless public networks become platforms on which third-party developers can build communicating applications, the networks will respond much too slowly to new and varied requirements and so will languish. Unfortunately, today's networks are not platforms.

Telescript enables the creation of a new breed of network that supports the development of communicating applications by making the network a platform for developers. It provides the "rules of the road" for the information superhighway, which leads to the electronic marketplace.

The Electronic Marketplace

Telescript integrates an electronic world of computers and the networks that link them. This world is filled with Telescript places occupied by Telescript agents.

In the electronic world, each place or agent represents an individual or organization in the physical world, its authority. A place's or agent's authority is revealed by its telename, which can't be falsified.

A place, but not an agent, has a teleaddress, which designates the place's location in this electronic world and reveals the authority of the individual or organization operating the computer in which the place is housed.

The typical place is permanently occupied by an agent of the place's authority and temporarily occupied –visited– by agents of other authorities.

The Plan

In July 1995, NTT, AT&T, and Sony announced a joint venture to deploy a Telescript-based service in Japan.

In October 1995, France Telecom (the operator of the Minitel electronic marketplace, which supports more than 26,000 merchants and was accessed in 1994 by 18 million users) announced its licensing of Telescript for use in France.

The Language

Telescript is:

  • Object-oriented: As in Smalltalk

  • Complete: Any algorithm can be expressed in the language.

  • Dynamic: A program can define or discover, and then use new classes during execution. While exploring the electronic marketplace, a Telescript agent may encounter an object whose class it has never seen. The agent can take the object home, where it continues to function.

  • Persistent: The Telescript engine secures all its data transparently, even the program counter that marks its point of execution. Thus, a Telescript program persists even in the face of computer failures.

  • Interpreted

  • Portable and safe: A computer executes an agent's instructions through a Telescript engine, not directly. An agent can execute in any computer in which an engine is installed, yet it cannot directly access the computer's processor, memory, file system, or peripheral devices.

  • Communication-centric: Designed for carrying out complex networking tasks: navigation, transportation, authentication, access control, and so on.

Telescript supplements systems programming languages such as C and C++; it does not replace them. Only the parts of an application that require the ability to move from one place in a network to another –the agents– are written in Telescript.

Telescript is object-oriented. In fact, like the first object-oriented language, Smalltalk, Telescript is deeply object-oriented. That is, every piece of information, no matter how small, is an object. Thus, even a Boolean is an object.

Like many object-oriented programming languages, Telescript focuses on classes. A class is a "slice" of an object's interface, combined with a related "slice" of its implementation. An object is an instance of a class.

Standard OOP similar to Smalltalk/Java, check Telescript Object Model for details

Process objects provide Telescript's multi-tasking functionality. Processes are pre-emptively multi-tasked and scheduled according to priority.

Telescript implements the following principal concepts:

  • Places

  • Agents

  • Travel

  • Meetings

  • Connections

  • Authorities

  • Permits

Telescript extends the concept of remote programming, the ability to upload and execute programs to a remote processor, with migrating processes. A Telescript mobile agent is a migrating process that is able to move autonomously during its execution to a different processor and continue executing there.

Mobile agents conceptually move the client to the server, where stationary processes, or places, service their requests. When it is done in a place, an agent might choose to move itself to a different processor, carry results back to where it originated, or simply terminate.

Clearly, security is a major concern in this scenario. The operator of a Telescript processor wants some assurance that nothing bad will come of its decision to admit an incoming agent. The host platform wants to know who is responsible for the agent. The agent, on the other hand, would like to trust that private information it is carrying will not be disclosed arbitrarily. It needs to trust the operator of the platform.

Telescript provides some useful Mix-in classes that associate security-relevant attributes with objects of those classes, where the associated functionality is enforced by the engine. These include:

  • Unmoved. An agent cannot take such an object along with it when it does a go. Places, for example, are Unmoved.

  • Uncopied. An attempt to make a copy of such an object returns a reference to the original object rather than creating a copy.

  • Copyrighted. This class is provided as a language extension rather than part of the language. Nonetheless, it is built into engines. An attempt to instantiate such an object will fail during initialization if it is not properly authorized by a suitable Copyright Enforcer object.

  • Protected. Such an object cannot be modified once created, and any reference to such an object is like having a protected reference, except that ownership can be transferred. Packages are Protected.
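These engine-enforced semantics can be approximated in ordinary code. Below is a minimal Python sketch of the Uncopied and Protected mix-ins from the list above. It is only an illustration under stated assumptions: the class and attribute names beyond those quoted (Ticket, freeze, destination) are hypothetical, and in Telescript the engine, not user code, enforced these rules.

```python
import copy

class Uncopied:
    # Mix-in: an attempt to copy returns a reference to the original
    # object, mirroring Telescript's Uncopied semantics.
    def __copy__(self):
        return self

    def __deepcopy__(self, memo):
        return self

class Protected:
    # Mix-in: the object cannot be modified once frozen, mirroring
    # Telescript's Protected semantics.
    _frozen = False

    def freeze(self):
        object.__setattr__(self, "_frozen", True)

    def __setattr__(self, name, value):
        if self._frozen:
            raise AttributeError("object is Protected")
        object.__setattr__(self, name, value)

class Ticket(Uncopied, Protected):
    # A hypothetical object that is both Uncopied and Protected.
    def __init__(self, destination):
        self.destination = destination
        self.freeze()

t = Ticket("TheaterPlace")
assert copy.deepcopy(t) is t       # Uncopied: no new object is created
try:
    t.destination = "FloristPlace"
except AttributeError:
    pass                           # Protected: mutation is rejected
```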

Unauthorized processes, or processes that are not running under the region's authority, cannot create instances of the following classes:

  • File: A File object can create a handle to any file that the engine can access on the underlying operating system.

  • External Handle: An External Handle object can open a TCP/IP port on the underlying operating system.

  • Control Manager: A Control Manager object can be used to perform a number of management and control operations on an engine. For example, a Control Manager can be used to change attributes of processes, such as their authority, or to halt the engine.

The Current Approach: Remote Procedure Calling

Today networking is based upon remote procedure calling (RPC). A network carries messages -- data -- that either request services or respond to such requests. The sending and receiving computers agree in advance upon the messages they will exchange. Such agreements constitute a protocol.

A client computer with work for a server computer to accomplish orchestrates the work with a series of remote procedure calls. Each call comprises a request, sent from client to server, and a follow-up response, sent from server to client.

The New Approach: Remote Programming

A different approach to networking is remote programming (RP). The network carries objects -- data and procedures -- that the receiving computer executes.

The two computers agree in advance upon the instructions from which procedures are formed. Such agreements constitute a language.

A client computer with work for a server computer to accomplish orchestrates the work by sending to the server an agent whose procedure makes the required requests (e.g., "delete") using the data (e.g., "two weeks"). Deleting the old files -- no matter how many -- requires just the message that transports the agent between computers. All of the orchestration, including the analysis deciding which files are old enough to delete, is done "on-site" at the server.

The salient characteristic of remote programming is that client and server computers can interact without the network's help once it has transported an agent between them. Thus, interaction does not require ongoing communication.

The opportunity for remote programming is bidirectional. Server computers, like client computers, can have agents, and a server agent can transport itself to and from a client computer. Imagine, for example, that a client computer makes its graphical user interface accessible to server agents. The client computer might do this, for example, by accepting a form from a server agent, displaying the form to the user, letting the user fill it out, and returning the completed form to the agent. The completed form might instruct a file server's agent to retrieve files meeting specified criteria.

Remote programming is especially well suited to computers that are connected to a network not permanently, but rather only occasionally.

With agents, manufacturers of client software can extend the functionality offered by manufacturers of server software.

Introducing a new RPC-based application requires a business decision on the part of the service provider.

A network using remote programming requires a buying decision on the part of one user. Remote programming therefore makes a network, like a personal computer, a platform.
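The contrast between the two approaches can be sketched in a few lines of Python. This is a toy simulation of the "delete old files" example from the text, not real networking: the "transport" is a direct function call, and all names (server_files, cleanup_agent, and so on) are hypothetical.

```python
# A toy "server": file name -> last-modified time (seconds).
server_files = {"a.log": 100.0, "b.log": 99999.0, "c.log": 50.0}

# RPC style: the client drives every step, one request/response pair
# per call, so deleting n old files costs many round trips.
def rpc_list():
    return list(server_files)

def rpc_mtime(name):
    return server_files[name]

def rpc_delete(name):
    del server_files[name]

# Remote programming style: the client ships one procedure (the "agent");
# all orchestration, including deciding which files are old enough to
# delete, runs "on-site" at the server.
def cleanup_agent(files, now, max_age):
    for name in [n for n, t in files.items() if now - t > max_age]:
        del files[name]

def server_execute(procedure, now, max_age):
    # Stand-in for the network transporting and executing the agent.
    procedure(server_files, now, max_age)

server_execute(cleanup_agent, now=100000.0, max_age=2000.0)
assert list(server_files) == ["b.log"]  # only the recent file survives
```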

The Engine

All Telescript engines provide:

  • Runtime type checking with dynamic feature binding

  • Automatic memory management

  • Exception processing

  • Authenticated, unforgeable identity for each process, in the form of an authority

  • Protected references

  • Protection by encapsulation of private properties and features. This forms the basis for object-enforced access controls

  • Quotas and process privileges using permits, including control over creation of new processes

  • Security-oriented mix-in classes (Copyrighted, Unmoved, Protected, Uncopied)

  • Mediated protocols for process rendezvous (for example, entering a place, and meeting agents)

With regard to installing bogus classes, the Telescript engine won't admit an agent carrying a class that has the same name as one that's already in the engine, unless it's the same class. In other words, within an engine, class names must be unique.
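That admission rule can be sketched as a simple class registry. This is a hedged Python approximation of the behavior described above (the registry API is hypothetical; the real engine checks classes carried by incoming agents):

```python
class ClassRegistry:
    # Sketch of the engine rule: an incoming class is admitted only if
    # its name is unused, or it is the very same class.
    def __init__(self):
        self._classes = {}

    def admit(self, cls):
        existing = self._classes.get(cls.__name__)
        if existing is not None and existing is not cls:
            raise ValueError(f"class name {cls.__name__!r} already taken")
        self._classes[cls.__name__] = cls

engine = ClassRegistry()

class CatalogEntry:
    pass

engine.admit(CatalogEntry)
engine.admit(CatalogEntry)            # the same class again: admitted
Bogus = type("CatalogEntry", (), {})  # a different class, same name
try:
    engine.admit(Bogus)
except ValueError:
    pass                              # rejected, as the engine would do
```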



Agents

A place is occupied by Telescript agents. Whereas places give the electronic marketplace its static structure, agents are responsible for its dynamic activity.

A Telescript agent is an independent process. The Telescript environment executes the programs of the various agents that occupy the marketplace in parallel with one another.

Two agents must meet before they can interact. One agent initiates the meeting using meet, an instruction in the Telescript instruction set. The second agent, if present, accepts or declines the meeting.

As a consequence of meet, the agents receive references to one another. The references let them interact as peers.

While in the same place, two agents interact by meeting. While in different places, they interact by communicating.

An agent can travel to several places in succession. It might link trips in this way to obtain several services or to select one from among them. Booking theater tickets, for example, might be only the first task on our user agent's to-do list. The second might be to travel to the florist place and there arrange for a dozen roses to be delivered the day of the theater event.


Places

Telescript places lend structure and consistency to the electronic marketplace.

Each place represents, in the electronic world, an individual or organization -- the place's authority -- in the physical world. Several places may have the same authority. A place's authority is revealed by its telename.


Travel

Agents travel using Telescript's go instruction.

The agent need merely present a ticket that identifies its destination. An agent executes go to get from one place to another. After an agent executes go, the next instruction in the agent's program is executed at the agent's destination, not at its source. Thus, Telescript reduces networking to a single program instruction.

If the trip cannot be made (for example, because the means of travel cannot be provided or the trip takes too long), the go instruction fails and the agent handles the exception as it sees fit. However, if the trip succeeds, the agent finds that its next instruction is executed at its destination.

An agent can move from place to place throughout the performance of its procedure because the procedure is written in a language designed to permit this movement.
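One way to build intuition for go is to model an agent as a coroutine that suspends at each migration and resumes at its destination, so the statement after go really does run somewhere else. The Python sketch below uses a generator as a stand-in; a real Telescript engine serializes the agent's full execution state and ships it across the network. All names here (shopping_agent, places, and so on) are hypothetical.

```python
def shopping_agent():
    # Each `yield` plays the role of Telescript's `go`: execution pauses,
    # the "engine" moves the agent, and the *next* statement executes at
    # the destination place.
    results = {}
    place = yield "Warehouse"         # go to the warehouse
    results["price"] = place["price"] # this line runs "at" the warehouse
    place = yield "Home"              # go back home
    place["inbox"] = results          # deliver the findings at home
    yield None                        # done travelling

places = {"Warehouse": {"price": 1099}, "Home": {}}

agent = shopping_agent()
destination = next(agent)             # the agent asks to go somewhere
while destination is not None:
    # The scheduler "transports" the agent by handing it the place
    # it asked for, then lets it run until the next go.
    destination = agent.send(places[destination])

assert places["Home"]["inbox"] == {"price": 1099}
```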

Meetings

A meeting lets agents in the same computer call one another's procedures.

Another instruction available to the Telescript programmer is meet, which enables one agent to meet another. The agent presents a petition, which identifies the agent to be met. An agent executes meet whenever it wants assistance. By meeting, the agents receive references to one another that enable them to interact as peers.

The instruction requires a petition, data that specify the agent to be met and the other terms of the meeting, such as the time by which it must begin. If the meeting cannot be arranged (for example, because the agent to be met declines the meeting or arrives too late), the meet instruction fails and the agent handles the exception as it sees fit. However, if the meeting occurs, the two agents are placed in programmatic contact with one another.
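The petition mechanism can be sketched as a rendezvous mediated by the place. This is a Python approximation of the behavior just described (class names and the accepts hook are hypothetical; the real engine mediates the rendezvous and supports richer petition terms such as deadlines):

```python
class MeetingDeclined(Exception):
    pass

class Place:
    def __init__(self):
        self.occupants = {}  # telename -> agent currently in this place

    def meet(self, requester, petition):
        # The petition names the agent to be met; if it is absent or
        # declines, meet fails and the requester handles the exception.
        other = self.occupants.get(petition)
        if other is None or not other.accepts(requester):
            raise MeetingDeclined(petition)
        return other  # a peer reference, as after Telescript's meet

class Agent:
    def __init__(self, name, welcome=True):
        self.name, self.welcome = name, welcome

    def accepts(self, requester):
        return self.welcome

theater = Place()
seller = Agent("TicketSeller")
theater.occupants[seller.name] = seller

buyer = Agent("Buyer")
peer = theater.meet(buyer, "TicketSeller")
assert peer is seller                # the agents now interact as peers

try:
    theater.meet(buyer, "Florist")   # nobody by that name: meeting fails
except MeetingDeclined:
    pass
```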

Connections

Telescript lets two agents in different places make a connection between them. A connection lets agents in different computers communicate.

Connections are often made for the benefit of human users of interactive applications. The agent that travels in search of theater tickets, for example, might send to an agent at home a diagram of the theater showing the seats available. The agent at home might present the floor plan to the user and send to the agent on the road the locations of the seats the user selects.

Permits

Every Telescript place or agent has a permit that limits its capabilities in the electronic marketplace.

Because agents move, their permits, like their credentials, are of special concern. An agent's permit is established when the agent is created programmatically, and it is renegotiated whenever the agent travels between regions. The destination region may deny the agent capabilities that it received at birth as long as the agent is in that region.

Two kinds of capability are granted an agent by its permit. One kind is the right to use a certain Telescript instruction.

Another kind of capability is the right to use a particular Telescript resource, but only in a certain amount. An agent is granted, among other things, a maximum lifetime, measured in seconds (e.g., a 5-minute agent); a maximum size, measured in bytes (e.g., a 1K agent); and a maximum overall expenditure of resources, the agent's allowance, measured in teleclicks (e.g., a 50¢ agent).

Permits provide a mechanism for limiting resource consumption and controlling the capabilities of executing code. A permit is an object (of the built-in class Permit) whose attributes include, among others:

  • age: maximum age in seconds

  • extent: maximum size in octets

  • priority: maximum priority

  • canCreate: true if new processes can be created

  • canGo: true if the affected code can request the go operation

  • canGrant: true if the permit of other processes can be "increased"

  • canDeny: true if the permit of other processes can be "decreased"

Telescript uses four kinds of permits:

  • native permits are assigned by the process creator

  • local permits can be imposed by a place on an entering agent or on a process created in that place. Local permits only apply in that place

  • regional permits are like local permits but imposed by the engine place. Regional permits only apply within a particular engine or set of engines comprising a region

  • temporary permits, which are imposed on a block of code using the Telescript restrict statement
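A reasonable mental model, sketched in Python below, is that an imposed permit can only narrow the native one: numeric limits take the minimum and rights are AND-ed. The attribute names follow the list above; the exact combination rule is an assumption based on the text's statement that a destination region "may deny the agent capabilities that it received at birth".

```python
from dataclasses import dataclass

@dataclass
class Permit:
    # A subset of the attributes listed above.
    age: int          # maximum lifetime in seconds
    extent: int       # maximum size in octets
    canGo: bool       # may request the go operation
    canCreate: bool   # may create new processes

    def restricted_by(self, imposed):
        # An imposed (local/regional/temporary) permit can only reduce
        # capabilities: numeric limits take the minimum, rights are AND-ed.
        return Permit(
            age=min(self.age, imposed.age),
            extent=min(self.extent, imposed.extent),
            canGo=self.canGo and imposed.canGo,
            canCreate=self.canCreate and imposed.canCreate,
        )

native = Permit(age=300, extent=1024, canGo=True, canCreate=True)
regional = Permit(age=60, extent=4096, canGo=True, canCreate=False)

# The effective permit while inside the region.
effective = native.restricted_by(regional)
assert effective == Permit(age=60, extent=1024, canGo=True, canCreate=False)
```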

Authorities

Agents and places can discern but neither withhold nor falsify their authorities. Anonymity is precluded.

Telescript verifies the authority of an agent whenever it travels from one region of the network to another. A region is a collection of places provided by computers that are all operated by the same authority.

To determine an agent's or place's authority, an agent or place executes Telescript's name instruction.

The result of the instruction is a telename, data that denote the entity's identity as well as its authority. Identities distinguish agents or places of the same authority.

A place can discern the authority of any agent that attempts to enter it and can arrange to admit only agents of certain authorities.

An agent can discern the authority of any place it visits and can arrange to visit only places of certain authorities.

An agent can discern the authority of any agent with which it meets or to which it connects and can arrange to meet with or connect to only agents of certain authorities.

Telescript provides different ways for identifying the authority and class of the caller (i.e., the requester, or client, from the point of view of the executing code) that are useful in making identity-based access checks. These are obtained directly from the engine in global variables, and include:

  • The current process. An unprotected reference to the process that owns the computation thread that requested the operation.

  • The current owner. An unprotected reference to the process that will own (retain owner references to) any objects created. The owner is usually the current process, but can be temporarily changed, for code executed within an own block, to one's own owner. That is, to the owner of the object that actually supplies the code being executed.

  • The current sponsor. An unprotected reference to the process whose authority will get attached to new process objects, and who will be charged for them. Processes own themselves, so for new process creation, it doesn't make any difference who the current owner is. The engine uses the authority (and permit) of the current sponsor, usually the current process, to determine responsibility for new agents and places.

  • The client. This is the object whose code requested the current operation. The client's owner might be yet another process.

    See An Introduction to Safety and Security in Telescript: Encapsulation and Access Control to see why 4 identities are needed

Protocols

Telescript protocols operate at two levels. The higher level encompasses the encoding (and decoding) of agents, the lower level their transport.

The Telescript encoding rules explain how an agent -- its program, data, and execution state -- is encoded for transport, and how parts of the agent are sometimes omitted as a performance optimization.

The protocol suite can operate over a wide variety of transport networks, including those based on the TCP/IP protocols of the Internet, the X.25 interface of the telephone companies, or even electronic mail.

Example Use Cases

Electronic Mail

An important enterprise in the electronic marketplace is the electronic postal system, which can be composed of any number of interconnected post offices.

Telescript is a perfect vehicle for the implementation of electronic mail systems.

Following the remote programming paradigm, messages, since they are mobile, are implemented as agents. Mailboxes, since they are stationary, are implemented as places. Each mailbox is occupied by an agent of the mailbox's authority. A message's delivery is a transaction between two agents: the sender's message and the agent attending the recipient's mailbox. The transaction transfers the message's content between the two.

A message is customized by the sender, a mailbox by the receiver.

Booking a Round-trip

Chris can thank one Telescript agent for booking his round-trip flight to Boston, another for monitoring his return flight and notifying him of its delay.

Read the details on How Agents Provide the Experience

Buying a Camera

John is in the market for a camera. He's read the equipment reviews in the photography magazines and Consumer Reports and he's visited the local camera store. He's buying a Canon EOS A2. The only question that remains is, from whom? John poses that question to his personal communicator. In 15 minutes, John has the names, addresses, and phone numbers of the three shops in his area with the lowest prices.

Read the details on Doing Time-Consuming Legwork

Planning an Evening

Mary and Paul have been seeing each other for years. Both lead busy lives. They don't have enough time together. But Mary has seen to it that they're more likely than not to spend Friday evenings together. She's arranged -- using her personal communicator -- that a romantic comedy is selected and ready for viewing on her television each Friday at 7 p.m., that pizza for two is delivered to her door at the same time, and that she and Paul are reminded, earlier in the day, of their evening together and of the movie to be screened for them.

Paul and Mary recognize the need to live well-rounded lives, but their demanding jobs make it difficult. Their personal communicators help them achieve their personal, as well as their professional, objectives. And it's fun.

Read the details on Using Services in Combination

Example Code

A shopping agent, acting for a client, travels to a warehouse place, checks the price of a product of interest to its client, waits if necessary for the price to fall to a client-specified level, and returns, either when the price is at that level or after a client-specified period of time.

CatalogEntry: class = (
     see initialize
     see adjustPrice
     product: String;
     price: Integer; // cents
     lock: Resource;
)

initialize: op (product: String; price: Integer) = {
  lock = Resource()
}

adjustPrice: op (percentage: Integer) throws ReferenceProtected = {
  use lock {
    price = price + (price*percentage).quotient(100)
  }
}

Warehouse: class (Place, EventProcess) = (
    see initialize
    see live
    see getCatalog
    catalog: Dictionary[String, CatalogEntry];
)

initialize: op (catalog: owned Dictionary[String, CatalogEntry]) = {
}

live: sponsored op (cause: Exception|Nil) = {
  loop {
    // await the first day of the month
    time: = Time();
    calendarTime: = time.asCalendarTime();
    calendarTime.month = calendarTime.month + 1;
    calendarTime.day = 1;

    // reduce all prices by 5%
    for product: String in catalog {
      try { catalog[product].adjustPrice(-5) }
      catch KeyInvalid { }
    };

    // make known the price reductions
    *.signalEvent(PriceReduction(), 'occupants)
  }
}
See The Electronic Shopper for more code


📚 Instadeq Reading List: May 2021

Here is a list of content we found interesting this month.

  • ✍️ Notation as a Tool of Thought

  • 💭 How can we develop transformative tools for thought?

  • 👩‍🎨 Designerly Ways of Knowing: Design Discipline Versus Design Science

✍️ Notation as a Tool of Thought

  • The importance of nomenclature, notation, and language as tools of thought has long been recognized. In chemistry and in botany the establishment of systems of nomenclature did much to stimulate and to channel later investigation

  • Mathematical notation provides perhaps the best-known and best-developed example of language used consciously as a tool of thought

  • In addition to the executability and universality emphasized in the introduction, a good notation should embody characteristics familiar to any user of mathematical notation:

  • Ease of Expressing Constructs Arising in Problems:

    If it is to be effective as a tool of thought, a notation must allow convenient expression not only of notions arising directly from a problem, but also of those arising in subsequent analysis, generalization, and specialization.

  • Suggestivity:

    A notation will be said to be suggestive if the forms of the expressions arising in one set of problems suggest related expressions which find application in other problems.

  • Subordination of Detail:

    As Babbage remarked in the passage cited by Cajori, brevity facilitates reasoning. Brevity is achieved by subordinating detail

  • Economy:

    The utility of a language as a tool of thought increases with the range of topics it can treat, but decreases with the amount of vocabulary and the complexity of grammatical rules which the user must keep in mind. Economy of notation is therefore important.

    Economy requires that a large number of ideas be expressible in terms of a relatively small vocabulary. A fundamental scheme for achieving this is the introduction of grammatical rules by which meaningful phrases and sentences can be constructed by combining elements of the vocabulary.

💭 How can we develop transformative tools for thought?

  • Retrospectively it’s difficult not to be disappointed, to feel that computers have not yet been nearly as transformative as far older tools for thought, such as language and writing. Today, it’s common in technology circles to pay lip service to the pioneering dreams of the past. But nostalgia aside there is little determined effort to pursue the vision of transformative new tools for thought

  • Why is it that the technology industry has made comparatively little effort developing this vision of transformative tools for thought?

  • Online there is much well-deserved veneration for these people. But such veneration can veer into an unhealthy reverence for the good old days, a belief that giants once roamed the earth, and today’s work is lesser

  • What creative steps would be needed to invent Hindu-Arabic numerals, starting from the Roman numerals? Is there a creative practice in which such steps would be likely to occur?

  • The most powerful tools for thought express deep insights into the underlying subject matter

  • Conventional tech industry product practice will not produce deep enough subject matter insights to create transformative tools for thought

  • The aspiration is for any team serious about making transformative tools for thought. It’s to create a culture that combines the best parts of modern product practice with the best parts of the (very different) modern research culture. (Diagram: 'insight' and 'making', pointing to each other in a loop.) You need the insight-through-making loop to operate, whereby deep, original insights about the subject feed back to change and improve the system, and changes to the system result in deep, original insights about the subject.

    People with expertise on one side of the loop often have trouble perceiving (much less understanding and participating in) the nature of the work that goes on on the other side of the loop. You have researchers, brilliant in their domain, who think of making as something essentially trivial, “just a matter of implementation”. And you have makers who don’t understand research at all, who see it as merely a rather slow and dysfunctional (and unprofitable) making process

  • Why isn’t there more work on tools for thought today?

  • It is, for instance, common to hear technologists allude to Steve Jobs’s metaphor of computers as “bicycles for the mind”. But in practice it’s rarely more than lip service. Many pioneers of computing have been deeply disappointed in the limited use of computers as tools to improve human cognition

    Our experience is that many of today’s technology leaders genuinely venerate Engelbart, Kay, and their colleagues. Many even feel that computers have huge potential as tools for improving human thinking. But they don’t see how to build good businesses around developing new tools for thought. And without such business opportunities, work languishes.

  • What makes it difficult to build companies that develop tools for thought?

  • Many tools for thought are public goods. They often cost a lot to develop initially, but it’s easy for others to duplicate and improve on them, free riding on the initial investment. While such duplication and improvement is good for our society as a whole, it’s bad for the companies that make that initial investment

  • Pioneers such as Alan Turing and Alonzo Church were exploring extremely basic and fundamental (and seemingly esoteric) questions about logic, mathematics, and the nature of what is provable. Out of those explorations the idea of a computer emerged, after many years; it was a discovered concept, not a goal. Fundamental, open-ended questions seem to be at least as good a source of breakthroughs as goals, no matter how ambitious

  • There’s a lot of work on tools for thought that takes the form of toys, or “educational” environments. Tools for writing that aren’t used by actual writers. Tools for mathematics that aren’t used by actual mathematicians. Even though the creators of such tools have good intentions, it’s difficult not to be suspicious of this pattern. It’s very easy to slip into a cargo cult mode, doing work that seems (say) mathematical, but which actually avoids engagement with the heart of the subject

  • Good tools for thought arise mostly as a byproduct of doing original work on serious problems

👩‍🎨 Designerly Ways of Knowing: Design Discipline Versus Design Science

  • A desire to “scientise” design can be traced back to ideas in the twentieth century modern movement of design

  • We see a desire to produce works of art and design based on objectivity and rationality, that is, on the values of science

  • The 1960s was heralded as the “design science decade” by the radical technologist Buckminster Fuller, who called for a “design science revolution” based on science, technology, and rationalism to overcome the human and environmental problems that he believed could not be solved by politics and economics

  • We must avoid swamping our design research with different cultures imported either from the sciences or the arts. This does not mean that we should completely ignore these other cultures. On the contrary, they have much stronger histories of inquiry, scholarship, and research than we have in design. We need to draw upon those histories and traditions where appropriate, while building our own intellectual culture, acceptable and defensible in the world on its own terms. We have to be able to demonstrate that standards of rigor in our intellectual culture at least match those of the others

Physics of Software, the Blind and the Elephant

In episode 28 of The Simpsons, Homer designs a car with all the features he likes.


"The Homer" has no conceptual integrity and is a total failure, but it is still a car.

There's a limit to how many isolated features can be added to a car before it stops looking and behaving like a car.

Software doesn't work like that.

The laws of physics also constrain what a car can be: there are limits to the shape and size of a car.

Shape and size aren't constraints in software.

Inside the solution space of viable cars there is a physical property that helps the design process: the entire car can be seen at once.

Seeing it all at once lets people find out whether it reflects one set of ideas or contains many good but independent and uncoordinated ones.

Software can't be seen all at once, neither by looking at its code nor by using it; that removes a limit that physical things have.

We can keep adding features to software without noticing any effect from the only perspective we have: a local perspective.

Software is like the fable of the blind men and the elephant: at any given moment we see a small slice of the whole, trying to figure out the global shape and how everything interacts and fits together.

The real world limits how much complexity we can add to a design before it "collapses under its own weight".

As software grows in complexity, analogs to friction and inertia increase.

Friction in software happens when a change produces an opposing force in the shape of bugs or broken code.

99 bugs in the issue tracker, 99 bugs. Take one down and squash it, 128 bugs in the issue tracker.

Inertia in software happens when a change requires a large amount of preliminary work, refactors, tests, waiting and bureaucracy.

A good measurement of inertia is how long it takes from the moment you have an idea to the moment you make the required changes and see it reflected in the running application.

Software collapses under its own weight when the amount of energy we put in is larger than the value we get out.

Constraints similar to those of the physical world, along with their units, measurement devices and operational limits, may help avoid the worst effects of complexity in software.

Being able to "see" software "all at once" may allow the comparison of different designs and finding out when a design starts losing conceptual integrity.

Or maybe this is just a bad analogy :)

Why did OpenDoc fail, and then fail 3 more times?

A recurring question that surfaces around the Future of Coding community is: what happened to OpenDoc? Why did it fail?

This post is a summary of reasons found around the web; then I explore other implementations similar to OpenDoc to see if there is a general pattern.

Bias warning: I pick the quotes and the emphasis. Read the sources in full to form your own conclusion and let me know!


To start, here's a brief description of what OpenDoc was:

The OpenDoc concept was that developers could just write the one piece they were best at, then let end-users mix and match all of the little pieces of functionality together as they wished.

Let's find out the reasons:

OpenDoc post by Greg Maletic

A consortium, lots of money and the main driver being competing against Microsoft:

Hence was born OpenDoc, both the technology and the consortium, consisting primarily of Apple, IBM, and WordPerfect, all companies that didn’t like Microsoft very much. All poured loads of money into the initiative.

The hardware wasn't there:

The Copland team was wary of OpenDoc. I looked at those people as bad guys at the time, but in reality they were right to be afraid. It’s hard to remember now, but back in 1996 memory (as in RAM) was a big issue.

The average Mac had about 2 megabytes of memory. OpenDoc wouldn’t run on a machine with less than 4 megs, and realistically, 8 megs was probably what you wanted.

Second system effect?:

The OpenDoc human interface team had taken it upon themselves to correct the perceived flaws of the Mac as a modal, application-centric user experience, and instead adopted a document-centric model for OpenDoc apps.


It was a noble and interesting idea, but in retrospect it was a reach, not important to the real goals of OpenDoc, and it scared a lot of people including the developers we were trying to woo.

No "Business Model":

It didn’t create a new economy around tiny bits of application code, and the document-centric model was never allowed to bloom as we had hoped, to the point where it would differentiate the Mac user experience.

A solution looking for a problem:

There are lots of reasons for OpenDoc’s failure, but ultimately it comes down to the fundamental question of why Apple was developing this technology when no-one in the company really wanted it. The OS group had mixed feelings, but ultimately didn’t care. Most folks at Claris, Apple’s application group, didn’t want it at all, seeing it as an enabler for competition to Claris’s office suite product, ClarisWorks.

"Why didn't it catch on? 7 theories I've come across" by Geoffrey Litt

  1. UX quality. If every component is developed by an independent company, who is responsible for unifying it into a nice holistic experience rather than a mishmash?

  2. Modality. Different parts of your document operate with different behaviors.

  3. Performance. Huge memory overhead from the complexity

  4. Exporting. Hard to export a "whole document" into a different format if it's made of a bunch of different parts.

  5. Lack of broad utility. How common are "compound documents" really? Beyond the classic example of "word doc with images and videos embedded"

  6. Data format compatibility. If two components both edit spreadsheets but use different data formats, what do you do?

  7. Historical accident. Turf wars between Microsoft and everyone else, Steve Jobs ruthlessly prioritizing at Apple, execution failures, etc.

chris rabl's take inspired by Macinography 10: OpenDoc and Apple's Development Doldrums

  1. There wasn't a central distribution channel for "parts"

  2. Despite OpenDoc discouraging vendor lock-in for file formats, many vendors still insisted on locking down the file formats they used or making them incompatible with other vendors' products

  3. Lack of buy-in from the developer community: cumbersome C-based API and obtuse SDKs dissuaded developers from buying into the hype, plus it would have required them to re-think how their application worked in the first place (what would Photoshop look like as an OpenDoc app?)

  4. Very little dogfooding: even some of the big partners like IBM and Novell did a really bad job implementing OpenDoc in their product suites

Scott Holdaway Interview

Hard to adapt existing software:

OpenDoc made all these assumptions about how things were coded, which were kind of reasonable for an object-oriented program, which ClarisWorks was not. We told management that it just wasn't going to work trying to put OpenDoc support into ClarisWorks, especially not without doing a lot of rewriting first. And they said, well, too bad, do it anyway.

Bad Execution:

It was a good idea, it was certainly poorly done, but it still could have worked in a different codebase. They had like up to 100 people - engineers, testing, and support and all that - working on OpenDoc. That's too many people. Anytime you have that many people starting something like that, it's usually poorly done. They had too many compromises in their design, or just lack of foresight. Integrating it into ClarisWorks was certainly a huge problem. It was not made to be integrated into existing programs like, say, OLE was.

Oral History of Larry Tesler Part 3 of 3 (Transcript)

Big project, solution looking for a problem:

But they also had ActiveX. And that’s the thing, even a little more so than OLE, that OpenDoc was aiming it for. And it was a bad idea from the beginning. And I kind of sensed something was wrong but it wasn’t until we started playing it out and realizing how big a project it was and how little we had thought about the benefits and what you would actually do with it. And people tried to address it in various ways.

Doing it for the wrong reasons:

But we were doing it because Microsoft was doing it and we needed to have something better.

If we believe that it was all Steve Jobs's fault for stopping its development early, that the hardware wasn't there, or that it was bad execution, then if someone tried something similar again it should succeed, right?

I went looking for other attempts at the same idea of "[Document] Component based software", here are the main contenders and some reasons why they are not the way we do things right now.

They are not 100% document based, but since they are more focused and pragmatic, they should at least have succeeded as a better way to reuse components across applications.


ActiveX lived longer and was used in multiple places, so why was it deprecated?

ActiveX on Wikipedia


Even after simplification, users still required controls to implement about six core interfaces. In response to this complexity, Microsoft produced wizards, ATL base classes, macros and C++ language extensions to make it simpler to write controls.

Unified model, platform fragmentation:

Such controls, in practice, ran only on Windows, and separate controls were required for each supported platform

Security and lack of portability:

Critics of ActiveX were quick to point out security issues and lack of portability


The ActiveX security model relied almost entirely on identifying trusted component developers using a code signing technology


Identified code would then run inside the web browser with full permissions, meaning that any bug in the code was a potential security issue; this contrasts with the sandboxing already used in Java at the time.

A break from the past, part 2: Saying goodbye to ActiveX, VBScript, attachEvent…

HTML 5 replaces most use cases:

The need for ActiveX controls has been significantly reduced by HTML5-era capabilities, which also produces interoperable code across browsers.

The Open Source Alternatives: KParts & Bonobo

If the problem was the business model but the technology was really powerful, then it should not have been a problem for up-and-coming open source desktop environments with a user base consisting mostly of power users and developers who would benefit from code reuse.

The major desktop environments, Gnome and KDE, had/have something similar to OpenDoc.

An interesting fact is that Gnome used to be an acronym: GNU Network Object Model Environment; the Object Model idea was in the name itself.

I didn't notice until now that the name KParts seems to come from OpenDoc's Parts concept.

Usenix: KDE Application Integration Technologies:

A KPart is a dynamically loadable module which provides an embeddable document or control view including associated menu and toolbar actions. A broker returns KPart objects for certain data or service types to the requesting application. KParts are for example used for embedding an image viewer into the web browser or for embedding a spread sheet object into the word processor.

KParts on Wikipedia:

Example uses of KParts:

  • Konqueror uses the Okular part to display documents

  • Konqueror uses the Dragon Player part to play multimedia

  • Kontact embeds kdepim applications

  • Kate and other editors use the katepart editor component

  • Several applications use the Konsole KPart to embed a terminal

How will CORBA be used in GNOME, or, what is Bonobo?

Bonobo is a set of interfaces for providing application embedding and in-place activation which are being defined. The Bonobo interfaces and interactions are modeled after the OLE2 and OpenDoc interfaces.


Reusable controls: Another set of Bonobo interfaces deal with reusable controls. This is similar to Sun's JavaBeans and Microsoft Active-X.

Bonobo on Wikipedia:

Available components are:

  • Gnumeric spreadsheet

  • ggv PostScript viewer

  • Xpdf PDF viewer

  • gill SVG viewer

History of Gnome: Episode 1.3: Land of the bonobos:

Nautilus used Bonobo to componentise functionality outside the file management view, like the web rendering, or the audio playback and music album view.


This was the heyday of the component era of GNOME, and while its promises of shiny new functionality were attractive to both platform and application developers, the end result was by and large one of untapped potential. Componentisation requires a large initial effort in designing the architecture of an application, and it’s really hard to introduce after the fact without laying waste to working code.


As an initial roadblock it poorly fits with the “scratch your own itch” approach of free and open source software. Additionally it requires not just a higher level of discipline in the component design and engineering, it also depends on comprehensive and extensive documentation, something that has always been the Achille’s Heel of many an open source project.

History of Gnome: Episode 1.5: End of the road

Sun’s usability engineer Calum Benson presented the results of the first round of user testing on GNOME 1.4, and while the results were encouraging in some areas, they laid bare the limits of the current design approach of a mish-mash of components. If GNOME wanted to be usable by professionals, not curating the offering of the desktop was not a sustainable option any more.


The consistency, or lack thereof, of the environment was one of the issues that the testing immediately identified: identical functionality was labelled differently depending on the component; settings like the fonts to be used by the desktop were split across various places; duplication of functionality, mostly for the sake of duplication, was rampant. Case in point: the GNOME panel shipped with not one, not two, but five different clock applets—including, of course, the binary clock


Additionally, all those clocks, like all the panel applets, could be added and removed by sheer accident, clicking around on the desktop. The panel itself could simply go away, and there was no way to revert to a working state without nuking the settings for the user.

KParts still seems to be around; you can see a list of actively developed KParts on KDE's GitHub account. Bonobo was replaced by D-Bus, which is a message-oriented abstraction not related to OpenDoc's concepts as far as I know.

What about Web Components?

From MDN: Web Components:

Web Components is a suite of different technologies allowing you to create reusable custom elements — with their functionality encapsulated away from the rest of your code — and utilize them in your web apps.

Web Components have been around for a while and developers seem to be split into two camps regarding how useful they are.

Time will tell whether they become the main way to build web applications and a new attempt at OpenDoc, this time on the web.


I'm still not sure which were the main reasons, but many of them aren't shared by later attempts:

  • More RAM was available during the ActiveX days and is available today for KParts

  • KParts and Bonobo don't require a business model and benefit from reuse

    • Unlike OpenDoc/ActiveX there's no competition between Apple/Microsoft and developers

  • Execution on ActiveX/KParts didn't seem to be a problem

  • Lived long enough and learned from OpenDoc failure

  • Weren't developed by a consortium

  • Were well integrated into the underlying environment

  • Had use cases from day one

  • No vendor lock-in

  • Some had good distribution channels (Package Managers)

  • At least KParts and WebComponents seem to have a good developer experience

The conclusion is that there's no conclusion, the case is still open, what do you think?

Instadeq Reading List: April 2021

Here is a list of content we found interesting this month.

If the list gets too long, we will move to posting once a week.

💾 Computers and Creativity

I will be arguing that to foster optimal human innovation, digital creative tools need to be interoperable, moldable, efficient, and community-driven.


Acknowledging that computers themselves are not inherently creative should not come as a surprise. Instead, this truth identifies an opportunity for computers to more fully assume the role of co-creator — not idea-generator, but actualizer.


Intelligence Augmentation (IA), is all about empowering humans with tools that make them more capable and more intelligent, while Artificial Intelligence (AI) has been about removing humans fully from the loop


The lack of interoperability between creative tools means that all work created within a tool is confined to the limitations of that tool itself, posing a hindrance to collaboration and limiting creative possibility


Moving beyond building on top of existing software, we can begin to imagine what a piece of software could look like if it itself was moldable, or built to be modified by the user

🧠 Thought as a Technology

At first these elements seem strange. But as they become familiar, you internalize the elements of this world. Eventually, you become fluent, discovering powerful and surprising idioms, emergent patterns hidden within the interface. You begin to think with the interface, learning patterns of thought that would formerly have seemed strange, but which become second nature. The interface begins to disappear, becoming part of your consciousness


What makes an interface transformational is when it introduces new elements of cognition that enable new modes of thought. More concretely, such an interface makes it easy to have insights or make discoveries that were formerly difficult or impossible


Mathematicians often don't think about mathematical objects using the conventional representations found in books. Rather, they rely heavily on what we might call hidden representations


This contrasts with the approach in most computer reasoning systems. For instance, much work on doing mathematics by computer has focused on automating symbolic computation (e.g., Mathematica), or on finding rigorous mathematical proofs (e.g., Coq). In both cases, the focus is on correct mathematical reasoning. Yet in creative work the supply of rigorously correct proofs is merely the last (and often least interesting) stage of the process. The majority of the creative process is instead concerned with rapid exploration relying more on heuristics and rules of thumb than on rigorous proof. We may call this the logic of heuristic discovery. Developing such a logic is essential to building exploratory interfaces.

🌱 Personal Digital Habitats

A Personal Digital Habitat is a federated multi-device information environment within which a person routinely dwells. It is associated with a personal identity and encompasses all the digital artifacts (information, data, applications, etc.) that the person owns or routinely accesses. A PDH overlays all of a person’s devices and they will generally think about their digital artifacts in terms of common abstractions supported by the PDH rather than device- or silo-specific abstractions. But presentation and interaction techniques may vary to accommodate the physical characteristics of individual devices.


Personal Digital Habitat is an aspirational metaphor. Such metaphors have had an important role in the evolution of our computing systems. In the 1980s and 90s it was the metaphor of a virtual desktop with direct manipulation of icons corresponding to metaphorical digital artifacts that made personal computers usable by a majority of humanity.

🏛️ DynamicLand Narrative

The mission of the Dynamicland Foundation is to enable universal literacy in a humane computational medium.


Instead of isolating and disembodying, it must bring communities together in the same physical space, to teach and discuss ideas face-to-face, to build and explore ideas with their hands, to solve problems collectively and democratically. For universal literacy, and for a humane medium, these are requirements.


Dynamicland researchers are inventing a new form of computing which takes place in physical space, using ordinary physical materials such as paper, pens, cardboard, and clay. There are no screens and no apps. Instead, people craft computational activities together with their hands


Unlike isolated apps, these projects all exist in the same space, can all be interconnected with one another, and can all be used by many people together


Dynamicland is particularly interested in its potential for public spaces in which people seek to understand and communicate — libraries, museums, classrooms, arts spaces, town halls, courtrooms.


The Dynamicland researchers are not developing a product. The computers that the researchers build are models: not for distribution, but for studying and evaluating


This community space is an early model for a new kind of civic institution — a public library for 21st-century literacy. Dynamicland envisions a future where every town has a “dynamic library” with computational literature on every subject, where people gather to collectively author and explore this literature, using the medium to represent and debate their ideas using computational models. People will hold presentations and town-hall discussions on issues of importance to the community, using the medium to see facts, explore consequences of proposals, and make data-driven decisions collectively. The public benefit of transforming collective learning and civic engagement is potentially immeasurable.

🧑‍💼 Office, messaging and verbs

The way forward for productivity is probably not to take software applications and document models that were conceived and built in a non-networked age and put them into the cloud


It takes time, but sooner or later we stop replicating the old methods with the new tools and find new methods to fit the new tools


What kills that task is not better or cheaper (or worse and free) spreadsheet or presentation software, but a completely different way to address the same underlying need - a different mechanism


But it should be replaced by a SaaS dashboard with realtime data, alerts for unexpected changes and a chat channel or Slack integration. PowerPoint gets killed by things that aren't presentations at all


You don't actually send email or make a spreadsheet - you analyze, delegate, report, confer, decide, track and so on. Or, perhaps, 'what's going on, what are we doing and what should we be doing?

🧑‍🎨 Software designers, not engineers An interview from alternative universe

In my universe, we treat the creation of software as a design activity, putting it as a third item on the same level as science and art.


when we start a software project, we think of it in more general terms. A typical form of what you call specification in our world is a bit more like a design brief. Much more space is dedicated to the context in which the work is happening, the problem that you are trying to solve and constraints that you are facing, but design briefs say very little about any specific potential software solution to the problem.


When you're solving a problem, even as you get to a more technical level, you always keep in mind why you are solving it.


The problem really only becomes apparent as you try to solve it. So, the key thing is to quickly iterate. You come up with a new framing for the problem, sketch a solution based on your framing and see what it looks like. If it looks good to you, you show it to the customer, or you sketch a range of different prototypes.


Software sketches are very ambiguous. When you are sketching, you are omitting a lot of details and so the sketching tool will give you something that can behave in unexpected ways in certain situations.

This is actually quite valuable, because this ambiguity allows your ideas to evolve. You can often learn quite a lot about the problem from the things that are underspecified in your sketches.


Very often, the right problem framing makes it possible to see a problem in a new way

Our Brain Typically Overlooks This Brilliant Problem-Solving Strategy

People often limit their creativity by continually adding new features to a design rather than removing existing ones

🛠️ The state of internal tools in 2021

Developers spend more than 30% of their time building internal applications.

That number jumps to 45% for companies with 5000+ employees.


More than 57% of companies reported at least one full-time employee dedicated to internal tools.


77% of companies with 500+ employees have dedicated teams for building and maintaining internal apps


2 out of 3 developers default to building from scratch, as opposed to using a spreadsheet or a SaaS tool.

🕹️ Serious Play

Games remain the most underrated and underexplored medium of art ever conceived by humans


Video games are right now shaping the patterns that will define the next generation of Design. Many of the hot topics in tech today like artificial intelligence, augmented reality, and remote collaboration have been brewing in video games for decades

♾️ Systems design explains the world

What is systems design? It's the thing that will eventually kill your project if you do it wrong, but probably not right away. It's macroeconomics instead of microeconomics.


Most of all, systems design is invisible to people who don't know how to look for it


With systems design, the key insight might be a one-sentence explanation given at the right time to the right person, that affects the next 5 years of work


What makes the Innovator's Dilemma so beautiful, from a systems design point of view, is the "dilemma" part. The dilemma comes from the fact that all large companies are heavily optimized to discard ideas that aren't as profitable as their existing core business

Statecharts: A Visual Formalism For Complex Systems

This is a summary of the paper: Statecharts: A Visual Formalism For Complex Systems

statecharts = state-diagrams + orthogonality + depth + broadcast-communication

Statechart Example


A broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication.

Statecharts are thus compact and expressive (small diagrams can express complex behavior) as well as compositional and modular.

Statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible.

Statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach.

Statecharts constitute a visual formalism for describing states and transitions in a modular fashion, enabling clustering, orthogonality (i.e., concurrency) and refinement, and encouraging ‘zoom' capabilities for moving easily back and forth between levels of abstraction.

The kernel of the approach is the extension of conventional state diagrams by AND/OR decomposition of states together with inter-level transitions, and a broadcast mechanism for communication between concurrent components.

The graphics is actually based on a more general concept, the higraph, which combines notions from Euler circles, Venn diagrams and hypergraphs, and which seems to have a wide variety of applications.

An arrow will be labelled with an event (or an abbreviation of one) and optionally also with a parenthesized condition, Mealy-like outputs, or actions.

Statechart Example


The literature on software and systems engineering is almost unanimous in recognizing the existence of a major problem in the specification and design of large and complex reactive systems.

The problem is rooted in the difficulty of describing reactive behavior in ways that are clear and realistic, and at the same time formal and rigorous, sufficiently so to be amenable to detailed computerized simulation. The behavior of a reactive system is really the set of allowed sequences of input and output events, conditions, and actions, perhaps with some additional information such as timing constraints.

Statechart Example

Reactive Systems

A reactive system, in contrast with a transformational system, is characterized by being, to a large extent, event-driven, continuously having to react to external and internal stimuli.


Much of the literature also seems to be in agreement that states and events are a priori a rather natural medium for describing the dynamic behavior of a complex system.

A basic fragment of such a description is a state transition, which takes the general form “when event E occurs in state A, if condition C is true at the time, the system transfers to state B”

State diagrams are simply directed graphs, with nodes denoting states, and arrows (labelled with the triggering events and guarding conditions) denoting transitions.
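
As a rough sketch (not from the paper; every state, event and guard below is invented for illustration), the transition rule "when event E occurs in state A, if condition C is true, transfer to B" can be read as a lookup keyed by (state, event) with an attached guard:

```python
# Illustrative flat state machine: (state, event) -> (guard, next_state).
def always(ctx):
    return True

TRANSITIONS = {
    ("idle",    "start"): (always,                      "running"),
    ("running", "stop"):  (always,                      "idle"),
    ("running", "pause"): (lambda ctx: ctx["pausable"], "paused"),
    ("paused",  "start"): (always,                      "running"),
}

def step(state, event, ctx):
    guard, target = TRANSITIONS.get((state, event), (None, None))
    if guard is not None and guard(ctx):   # condition C must hold
        return target                      # transfer to state B
    return state                           # no matching transition: stay put

state = "idle"
ctx = {"pausable": False}
state = step(state, "start", ctx)   # idle -> running
state = step(state, "pause", ctx)   # guard is false, so stays running
```

This is exactly the flat representation whose tables explode combinatorially; the extensions below (clustering, orthogonality) exist to tame that growth.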


A good state/event approach should also cater naturally for more general and flexible statements, such as

  • (1) “in all airborne states, when yellow handle is pulled seat will be ejected”,

  • (2) “gearbox change of state is independent of braking system”,

  • (3) “when selection button is pressed enter selected mode”,

  • (4) “display-mode consists of time-display, date-display and stopwatch-display”.

Proposed Solutions

Clause (1) calls for the ability to cluster states into a superstate.

Clause (2) introduces independence, or orthogonality.

Clause (3) hints at the need for more general transitions than the single event-labelled arrow.

Clause (4) captures the refinement of states.


State-levels: Clustering and refinement

Clustering states inside other states via exclusive-or (XOR) semantics

The outer state is an abstraction of the inner states

Enables Zoom in/out

A default state can be specified, analogous to start states of finite state automata
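
One illustrative way to read XOR clustering and default states (the state names echo clause (1) but are otherwise made up): when the current state has no transition for an event, walk up the parent chain, and enter a cluster through its default inner state.

```python
# Illustrative XOR hierarchy: "airborne" clusters "cruise" and "descent",
# so the ejection transition is written once on the superstate.
PARENT = {"cruise": "airborne", "descent": "airborne",
          "airborne": None, "ejected": None}
DEFAULT = {"airborne": "cruise"}   # default inner state, like an FSA start state

TRANSITIONS = {
    ("airborne", "yellow_handle"): "ejected",   # applies in all airborne states
    ("cruise",   "descend"):       "descent",
    ("ejected",  "reset"):         "airborne",  # entering a cluster...
}

def step(state, event):
    s = state
    while s is not None:                        # walk up the XOR hierarchy
        target = TRANSITIONS.get((s, event))
        if target is not None:
            return DEFAULT.get(target, target)  # ...lands on its default state
        s = PARENT.get(s)
    return state
```

For example, `step("descent", "yellow_handle")` returns `"ejected"` even though the transition is only declared on the superstate, and `step("ejected", "reset")` enters `airborne` via its default state `cruise`.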

Orthogonality: Independence and concurrency

When the system is in an AND state, it must be in all of its orthogonal components

The notation is the physical splitting of a box into components using dashed lines

The outer (AND) state is the orthogonal product of the inner states

Synchronization: a single event causes two simultaneous happenings

Independence: a transition behaves the same regardless of which inner state each orthogonal component is in

Orthogonality avoids the exponential blow-up in the number of states that is usual in classical finite-state automata or state diagrams

Formally, orthogonal product is a generalization of the usual product of automata, the difference being that the latter is usually required to be a disjoint product, whereas here some dependence between components can be introduced, by common events or "in G"-like conditions.
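
The orthogonal product can be sketched as a tuple of component machines to which every event is offered; shared events synchronize components, while events unknown to a component leave it untouched. The shape of the encoding and the car example are my assumptions:

```javascript
// An AND state as a product: the configuration is one entry per component.
// components: { name: { state, transitions: { [state]: { [event]: next } } } }
function andState(components) {
  return {
    config() {
      return Object.fromEntries(Object.entries(components).map(([k, c]) => [k, c.state]));
    },
    send(event) {
      for (const c of Object.values(components)) {
        const next = c.transitions[c.state]?.[event];
        if (next) c.state = next; // components without this event are unaffected
      }
    },
  };
}

// Hypothetical car: gearbox and brakes are independent, except that the
// shared "panic" event causes two simultaneous happenings (synchronization).
const car = andState({
  gearbox: { state: "low", transitions: { low: { shift: "high", panic: "neutral" }, high: { panic: "neutral" } } },
  brakes:  { state: "off", transitions: { off: { brake: "on", panic: "on" } } },
});

car.send("shift"); // only the gearbox moves: independence
car.send("panic"); // one event, both components move: synchronization
```

Note the saving: two components with n and m states need n + m local states here, where a flattened classical diagram would need n × m product states.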

Condition and selection entrances

Conditional entrance to an inner state according to guards (like an if/else statement)

Selection occurs when the state to be entered is determined in a simple one-one fashion by the value of a generic event (like a switch statement)

Delays and timeouts

The expression timeout(event, number) represents the event that occurs precisely when the specified number of time units have elapsed from the occurrence of the specified event.
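
Over discrete time units, timeout(event, number) can be sketched as an armed countdown that is cancelled if the state is exited first. This encoding (`makeTimeout`, `arm`/`cancel`/`tick`) is my own illustration, not the paper's semantics:

```javascript
// timeout(E, n): fires exactly n ticks after E occurs, unless cancelled.
function makeTimeout(units, fire) {
  let remaining = null;
  return {
    arm() { remaining = units; },   // the triggering event E occurred
    cancel() { remaining = null; }, // e.g. the enclosing state was exited
    tick() {                        // one time unit elapses
      if (remaining === null) return;
      remaining -= 1;
      if (remaining === 0) { remaining = null; fire(); }
    },
  };
}

// Hypothetical watch example: leave a display mode 3 ticks after entering it.
let fired = false;
const t = makeTimeout(3, () => { fired = true; });
t.arm();
t.tick(); t.tick(); // not yet
t.tick();           // exactly 3 units elapsed: the timeout event fires
```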


Parts of the statechart can be laid out not within but outside of their natural neighborhood.

This conventional notation for hierarchical description has the advantage of keeping the neighborhood small yet the parts of interest large.

Taking this to the extreme yields and/or trees, undermining the basic area-dominated graphical philosophy. However, adopting the idea sparingly can be desirable.

Actions and Activities

What 'pure' statecharts represent is the control part of the system, which is responsible for making the time-independent decisions that influence the system's entire behavior.

What is missing is the ability of statecharts to generate events and to change the value of conditions.

These can be expressed with the action notation that can be attached to the label of a transition.

Actions are split-second happenings, instantaneous occurrences that take ideally zero time.

Activities are durable: they take time, whereas actions are instantaneous. To enable statecharts to control activities too, two special actions are introduced for starting and stopping them: start(X) and stop(X)
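
The action/activity distinction can be sketched as follows; the `activitySet` helper and the "beeping" example are illustrative assumptions, not the paper's notation:

```javascript
// Actions are instantaneous; an activity is durable and runs between the
// two special actions start(X) and stop(X).
function activitySet() {
  const running = new Set();
  return {
    start(name) { running.add(name); },   // instantaneous action: begin activity
    stop(name) { running.delete(name); }, // instantaneous action: end activity
    active(name) { return running.has(name); },
  };
}

// A transition labelled "e / start(beeping)" would invoke acts.start("beeping")
// as its action; "beeping" then persists until a later stop("beeping").
const acts = activitySet();
acts.start("beeping");
// ... time passes while the statechart remains in some state ...
acts.stop("beeping");
```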

Possible extensions to the formalism

Parameterized states

Different states may have identical internal structure; some of the most common cases are situations that are best viewed as a single state with a parameter.

Overlapping states

Statecharts don't need to be entirely tree-like.

Overlapping states act as OR; they can be used economically to describe a variety of synchronization primitives, and to reflect many natural situations in complex systems. Note that they cause semantic problems, especially when the overlapping involves orthogonal components.

Incorporating temporal logic

It is possible to specify ahead of time many kinds of global constraints in temporal logic, such as eventualities, absence from deadlock, and timing constraints.

Then attempt to verify that the statechart-based description satisfies the temporal logic clauses.

Recursive and probabilistic statecharts

Recursion can be done by specifying the name of a separate statechart.

Probabilistic statecharts can be obtained by allowing nondeterminism.

Practical experience and implementation

The statechart formalism was conceived and developed while the author was consulting for the Israel Aircraft Industries on the development of a complex state-of-the-art avionics system for an advanced airplane.

Such projects can demand bewildering levels of competence from many people of diverse backgrounds.

These people have to continuously discuss, specify, design, construct and modify parts of the system.

The languages and tools they use in doing their work, and for communicating ideas and decisions to others, have an effect on the quality, maintainability and expedition of the system that cannot be overestimated.

Things end users care about but programmers don't

(And we agree with them, but our tools make it really hard to provide)


  • Change color of things

  • Nice default color palette

  • Use my preferred color palette

  • Use this color from here

  • Import/export color themes

  • Some words or identifiers should always have a specific color

  • The same thing should have the same color everywhere

  • Good contrast

  • Automatic contrast of text if I pick a dark/light background

  • Automatically generate color for large number of items

    • All the colors should belong to the same palette

    • Don't generate colors that are hard to tell apart

    • Don't place similar colors close to each other

    • Don't use the background color on things


  • Apply basic formatting to text

  • Align text

  • Use the fonts I use in Office

  • WYSIWYG editor that behaves like Word

  • Number alignment

  • Number/date formatting to the right locale

  • Decimal formatting to the right/fixed number of decimal places

  • No weird .0000004 formatting anywhere

  • No .00 for integers

  • Emoji support


  • Dark theme

  • My Theme

  • Company Branding

  • Put the logo in places

  • My logo on the Login Page


  • Integrate with system accounts

  • Use accounts/permissions from Active Directory

  • Import from Excel/CSV

  • Import from email/email attachment

  • Export to Excel

  • Export to PDF/Image

  • Record a short video

    • As a GIF

  • Send as email

    • Send as email periodically

    • Send as PDF attachment on an email

  • Import/attach images

    • Crop before upload

    • Compress

    • Change Format

  • Use image as background but stretch the right way

  • Notifications on the app

    • On apps on my phone

    • By SMS

    • On our systems

    • On my mail


  • Good error handling

  • Good error descriptions

    • Translated error messages

  • Tell me what to do to solve an error

  • Tell me what this does before I click on it

  • Support touch gestures and mouse

  • Keyboard shortcuts

    • Customizable

  • Undo everywhere

  • Multiple undo

  • Recover deleted things

  • Ask before deleting

  • Copy and paste

  • Templates

  • Detailed and up to date guides in text with screenshots at each step and highlights

    • And in Video

    • Screenshots that stay up to date as the product evolves

    • That are in sync with the version I'm using

    • That adapt to my custom setup

  • Up to date and detailed documentation

  • Translated to my language

  • Sorting everywhere

    • Natural sort

    • Sort by multiple criteria

  • Filter everywhere

    • Fuzzy filtering

    • Case sensitive/insensitive filtering

    • Filter by multiple/complex criteria

  • Track what is used where and warn me when deleting

  • Optional cascade deletion

  • Native and simple date picker on every platform

  • Sorted lists/selects (by the label!)

    • Natural Sorting

  • Dropdowns with filtering but that behave like the native controls

  • Preview things

  • Consistent button ordering/labels

  • Consistent capitalization

  • Progress bars for slow/async operations

  • Responsive UI during slow operations

  • Disabling buttons during slow operations

  • Handling double clicks on things that should be clicked once

  • Clear indication of what can be clicked


  • Deploy on exotic configurations/platforms

  • Deploy on old/unsupported versions

  • Deploy on what we have

  • Deploy/Run without internet connection

  • Handle Excel, CSV, JSON, XML

    • Handle malformed versions of all of the above

  • Handle (guess) dates with no timezone

  • Handle ambiguous/changing date formats

  • Integrate with obscure/outdated software/format

  • It should work on my old android phone browser/IE 11

  • Unicode

    • Handle inputs with unknown and variable encodings


  • Easy to install

  • Easy to update

  • Easy to backup

  • Easy to recover

  • Works with the database and version we use

  • Can be mounted on a path that is not the root of the domain

Deprecating the first function, a compatibility strategy

NOTE: This post refers to an old version of instadeq that is no longer available.

It happened: a user reported that a function was behaving weirdly. She applied date (current date) to record, and the month value in the resulting record was wrong: it was 10 instead of 11.

As a programmer I noticed instantly: in the date to record implementation I had forgotten that JavaScript's Date.getMonth returns zero-indexed months, meaning January is 0 and December is 11.
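
A minimal reconstruction of the bug and the fix; the function names are illustrative, not instadeq's actual code:

```javascript
// Buggy version: forgets that Date.getMonth() is zero-indexed,
// so November comes out as 10.
function dateToRecordV1(d) {
  return { year: d.getFullYear(), month: d.getMonth(), day: d.getDate() };
}

// Fixed version: the + 1 makes January 1 and December 12, as users expect.
function dateToRecordV2(d) {
  return { year: d.getFullYear(), month: d.getMonth() + 1, day: d.getDate() };
}

// Note the Date constructor's month argument is also zero-indexed: 10 is November.
const nov = new Date(2020, 10, 15);
dateToRecordV1(nov).month; // 10, the reported bug
dateToRecordV2(nov).month; // 11, correct
```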

I wanted to fix the error, but there are many users out there who may be using this function; how could I fix it without breaking existing visualizations?

My first quick idea was one I had used in the past for other kinds of migrations: the next time the visualization configuration is read, check if there's a call to the "old" version and, if so, change it to a call to the "new", fixed one.

There's a problem with that solution: if the user had noticed the problem and used the month field somewhere else in the logic, the change would break that logic by producing months from 2 to 13.

Building a program transformation that would detect and fix that is really hard, and it might introduce weird changes that would surprise users.

That's why I decided to introduce a new version of the function alongside the current one. Both have the same representation in the UI; from now on only the new one can be selected, but the old one will still be available to existing logic that uses it.

The only difference is that when displaying the old version, a warning sign appears; on hover it explains that this version had a problem and that you should select the new version and change any month + 1 logic you may have.
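
The generic mechanism can be sketched like this; the registry shape and version ids are hypothetical, not instadeq's actual implementation:

```javascript
// Both versions keep working and share the same UI label; only the new
// one is selectable, and the old one carries a deprecation warning.
const functions = {
  "date-to-record@1": {
    label: "date to record",
    selectable: false,
    deprecated: "This version returned zero-indexed months; switch to the new version and remove any month + 1 logic.",
    fn: (d) => ({ year: d.getFullYear(), month: d.getMonth(), day: d.getDate() }),
  },
  "date-to-record@2": {
    label: "date to record",
    selectable: true,
    deprecated: null,
    fn: (d) => ({ year: d.getFullYear(), month: d.getMonth() + 1, day: d.getDate() }),
  },
};

// The function picker offers only selectable versions; saved visualizations
// keep referring to whichever version id they were built with.
const selectable = Object.entries(functions).filter(([, f]) => f.selectable);
```

The key property is that existing logic is never rewritten: old references resolve to the old behavior, warning included.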

Here's an example with the old and new function one after the other, notice the month field in a and b on the left side.


On mouse hover:


This was done in a generic way so that other functions can be deprecated the same way in the future.

We will try hard to avoid having to use it.

Lost in Migration: Missing Gestures and a Proposal

The pains that remain are the missing freedoms

-- Liminar Manifesto

There's a tension between a clean, simple user interface that can be displayed on small screens and used with touch, and a discoverable, beginner-friendly one that provides hints, context and extra information to help new users discover an application's capabilities.

At instadeq, we try to provide hints, context, and extra information by [ab]using tooltips.

Almost every part that can be interacted with sets the correct cursor and provides a tooltip. To be sure that the information is noticed by users, we display the tooltips at the bottom right corner of our application.

This avoids the "hover and wait" interaction needed to discover whether something has a tooltip, and the "I accidentally moved my mouse and the tooltip went away" problem.

All of these careful considerations go out the window when using an application on a touch device.

There's no hover support, no cursor change to hint that the user can interact with a thing, and because of this, no easy and standard way to display tooltips.

Many different approaches have been proposed and used, but even then the user has to discover them and learn an application-specific way of doing things.

Thinking and talking about this on the [1], I came up with an idea.

A new gesture to discover user interface components that want to provide more context about themselves.

For lack of a name let's call it the enquire gesture.

The idea behind the gesture is twofold:

  • To inquire about all the UI components that can provide more information about themselves

  • To inquire a specific component for its details directly

The gesture works with a shape we all know, the question mark: ?

The interaction goes as follows:

  • The user starts using an application, is on a new screen and doesn't know what is possible on that screen

  • The user draws the top part of the question mark anywhere on the screen (the part without the dot)

  • The 'inquire-all' event is triggered

    • Here the application can highlight the components that have contextual information

  • The user, either after waiting for the components to highlight or while continuing the gesture, taps the question mark's dot on the component he/she wants to know more about

  • The ('inquire', target) event is triggered where target is a reference to the component that was tapped

  • The application can then display more information about that component the way it prefers (tooltip, dialog, popover, embedded UI section)
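
The two events can be sketched in code. The event names ('inquire-all' and 'inquire') come from the proposal above; the dispatch mechanism, `enquireController`, and the component shapes are my assumptions:

```javascript
// Components declare whether they have contextual information; the gesture
// recognizer translates strokes into the two proposed events.
function enquireController(components) {
  const listeners = { "inquire-all": [], inquire: [] };
  return {
    on(event, cb) { listeners[event].push(cb); },
    strokeDrawn() {
      // top part of the question mark: announce every component with info
      listeners["inquire-all"].forEach((cb) => cb(components.filter((c) => c.hasInfo)));
    },
    dotTapped(target) {
      // the question mark's dot on a specific component
      listeners["inquire"].forEach((cb) => cb(target));
    },
  };
}

const ui = enquireController([
  { id: "save-button", hasInfo: true },
  { id: "spacer", hasInfo: false },
]);

let highlighted = [];
let detailsFor = null;
ui.on("inquire-all", (cs) => { highlighted = cs; }); // app highlights these
ui.on("inquire", (t) => { detailsFor = t; });        // app shows details its own way

ui.strokeDrawn();             // 'inquire-all': only components with info are offered
ui.dotTapped(highlighted[0]); // ('inquire', target): ask the save button directly
```

The application stays free to render the response as a tooltip, dialog, popover or embedded section; the gesture only standardizes the two events.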

This gesture not only provides a way to replace tooltips but also a way to "come back" to the tour that some applications implement.

The thing with the tour feature in some applications is that it appears just when you want to jump straight into the application to play with it, and/or when you don't have the time or willingness to go through a lot of information that you don't know when, or if, you are going to need.

But at the same time, you are afraid that if you skip the tour you won't be able to go back to it whenever you want.

But then, when you do go back, you hope there's a skip button so you can get to the part you are interested in.

As you may realize, I don't like many things about the tour feature :)

With the enquire gesture, you can ask the application to highlight the "tour locations" and jump straight to the ones you are interested in, in the order you want, whenever you want.

So, here's my proposal for a new gesture to replace tooltips and the tour feature in a standard way. Feedback and implementations welcome.