Ian P. Badtrousers

004 User interface

Welcome back to the series of notes on Logos, the language of computing.

The standard dialect has undergone a few major changes since last year’s note, albeit nothing too crazy. The biggest one you’ll probably notice is the new { comment } syntax, along with a number of changes to the punctuation, such as the new <- assignment operator.
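To give a taste of the new punctuation, here is a tiny hypothetical fragment. Only the { comment } braces and the <- assignment operator come from the note above; everything else (the identifier, the string literal) is an assumption about Logos syntax.

```
{ greet the reader; everything outside these braces is assumed syntax }
greeting <- "hello, logos"
```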

In this note we’ll talk about something truly unique, something that makes Logos a game–changing computational framework. Today, we’re going to talk about user experience, and the method by which user experience is constructed and managed in Logos.

Warning! There’s a good chance that once you attain a good grasp on this, you won’t ever be able to enjoy UX in other programming languages!


So what is the user interface? As with any user–facing program, there are quite a few ways the programmer could expose one. For example, certain desktop applications rely on forms, buttons, layouts, and the like. Web applications, on the other hand, rely on HTML-powered web clients.

Logotic applications are meant to be accessed through Terminal, a rich hardware–accelerated embeddable web component capable of interacting with remotely and locally running applications. Terminal provides a wide array of capabilities, including but not limited to: a manual mouse & keyboard interface, voice control, and augmentation.

In order to interact with an application: (1) install the Avtivka virtual machine on any operating system, (2) load the application dialect, and (3) connect to it with Terminal.

UX is realised by the dialect author, yet at no point is any knowledge of web programming required of them whatsoever. How is this even possible?

To find out, let’s first establish a hierarchy of user experience levels, each a discourse of its own, extending upon the previous ones.

0|	D
Application dialect: the totality of all signs and propositions.

1|	D‡ (M - mnemonic base)
Mnemonic extension of D: a set of short symbolic anchors corresponding to a maximum subset of meaningful propositions in D. For example, think of Vim, with mnemonics such as d2w ("delete 2 words") or y$ ("yank till the end of line").

2|	Dø (V - voice control)
Vocal extension of D: a vocal pathway graph in which the mnemonics of D‡ are the vertices, representing a state machine of mnemonic permutations. Each permutation represents some voice action.

3|	D• (V - visual representation)
Visual extension of D: describes the layout and transformations of related or accompanying paths existing in each other’s vicinity. This extension is meant to translate concrete mnemonic actions into visual cues.

4|	D¬ (A - augmentative experience)
Additional, augmentative extension of D•, built around the camera view of the actor operating in D: limb movements, saccades, gestures, and facial expressions. With the support of Kinect and other gesture–sensitive hardware, the actor’s actions can be mapped to mnemonics, commands (1-2) and visual cues (3) simultaneously.

5|	D* (MVVA)
Hypergraph of D, containing all the aforementioned extensions; a complex logotic representation of the dialect, which allows for a concurrent, rich, multi–level user experience.

The higher levels extend the primitives from the lower levels. For example, Dø cannot introduce mnemonic anchors that would not originally be found in D‡.
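The extension rule can be sketched as a simple containment check, assuming (purely for illustration) that a level is represented as a plain set of anchors:

```python
# Hypothetical sketch of the extension rule: a higher level may only
# arrange primitives that the level below already defines.
base = {"d", "y", "w", "$", "2"}             # D-double-dagger mnemonic anchors
voice_paths = [["d", "2", "w"], ["y", "$"]]  # candidate voice permutations

def extends(base, paths):
    """True iff every path is built solely from lower-level anchors."""
    return all(set(p) <= base for p in paths)

print(extends(base, voice_paths))   # True
print(extends(base, [["d", "q"]]))  # False: "q" is not in the base
```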

In essence, the MVVA model offers an inductive method of UX construction, which relies on each of the accessibility layers being completed before even the most basic visualisation can be done. This action–first approach guarantees both continuity and consistency in every aforementioned scenario.

D‡ Mnemonic base

The idea behind this is very similar to autoencoders in neural networks.

An autoencoder is a model that encodes a higher–dimensional input–output signal using lower–dimensional data. Autoencoders are tuned in such a way that the output resembles the input as closely as possible. A good autoencoder can essentially compress a signal from a large number of data points into a much smaller one.
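Here is a minimal sketch of the idea in Python: a linear autoencoder trained by gradient descent, squeezing 4-dimensional points through a 2-dimensional bottleneck. The sizes, learning rate, and synthetic data are arbitrary choices for illustration, not anything from Logos itself.

```python
import numpy as np

# Synthetic data that nearly lies in a 2-D subspace: dimensions 2 and 3
# are noisy copies of dimensions 0 and 1, so compression is possible.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=200)
X[:, 3] = X[:, 1] + 0.1 * rng.normal(size=200)

W_enc = rng.normal(scale=0.1, size=(4, 2))  # encoder: 4 -> 2
W_dec = rng.normal(scale=0.1, size=(2, 4))  # decoder: 2 -> 4

for _ in range(2000):
    Z = X @ W_enc          # low-dimensional codes (the bottleneck)
    err = Z @ W_dec - X    # reconstruction error
    # gradient steps on the mean squared reconstruction error
    W_dec -= 0.01 * Z.T @ err / len(X)
    W_enc -= 0.01 * X.T @ (err @ W_dec.T) / len(X)

loss = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(round(loss, 4))  # prints a small value: most of the signal survives
```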

[Figure: schematic depiction of the Multi-Task Graph Autoencoder (MTGAE)]

Autoencoders in machine learning

(Now, imagine that the points in the middle, in the bottleneck, are the mnemonics.)

In machine learning, this allows one to learn efficient data codings in an unsupervised manner, but in Logos we borrow this concept with a much different idea in mind: the forceful determination of the command space. Ever wondered why most voice control tech is so incredibly shit? I suggest this educated guess: because most of the time, developers attempt to extract the commands from an unrestricted and completely unstructured speech space, with noise as unrestricted as the signal.

The complexity of the whole D* supergraph directly depends on the base D‡ mnemonic set. Think of the resulting set as a barebones compressed representation of D, which must be enough to retain both the necessary visual cues and the supported actions.
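To make this concrete, here is a minimal Python sketch of a mnemonic base as data: a mapping from short anchors to the propositions they compress. The Vim-flavoured entries echo the earlier example; nothing here is a real Logos API.

```python
# Hypothetical sketch: a mnemonic base as a plain mapping from short
# symbolic anchors to the propositions they stand for.
mnemonic_base = {
    "d2w": "delete 2 words",
    "y$":  "yank till the end of line",
}

def expand(mnemonic: str) -> str:
    """Resolve a short anchor back into its full proposition."""
    return mnemonic_base[mnemonic]

print(expand("d2w"))  # delete 2 words
```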

Dø Voice control

This is arguably the most important and/or the most fun layer that there is.

With respect to D‡, quite literally an alphabet, this set is all about commands. Discursively speaking, the D‡ dialect provided all the necessary signs; it’s up to Dø now to arrange these signs into permutations. Please remember that the user interface layers are literally dialects, too, so they are perfectly capable of doing everything a regular dialect would be able to do!
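The pathway graph of permutations can be sketched as a small state machine over the mnemonic alphabet; a walk through the graph is one voice action. Every vertex name below is invented for illustration.

```python
# Hypothetical sketch of a vocal pathway graph: vertices are mnemonics,
# edges define which mnemonic may follow which.
pathways = {
    "start":  ["delete", "yank"],
    "delete": ["2", "word"],
    "yank":   ["line"],
    "2":      ["word"],
    "word":   [],
    "line":   [],
}

def is_valid_path(path):
    """Check that a permutation of mnemonics follows the graph's edges."""
    state = "start"
    for step in path:
        if step not in pathways[state]:
            return False
        state = step
    return True

print(is_valid_path(["delete", "2", "word"]))  # True
print(is_valid_path(["yank", "word"]))         # False
```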

For each and every vocal path (a permutation of mnemonics) there is, the corresponding proposition in the dialect must also provide a series of comments pronouncing these mnemonics in multiple different ways, for machine learning algorithms to pick up on and differentiate between the different control commands.
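One way to picture those comment-borne pronunciations is as training data keyed by vocal path. This is only a sketch of the shape of that data; the paths and phrasings are invented.

```python
# Hypothetical sketch: each vocal path carries several pronunciations,
# so a recognizer can learn to tell the control commands apart.
pronunciations = {
    ("d", "2", "w"): ["delete two words", "remove two words", "cut 2 words"],
    ("y", "$"):      ["yank to end of line", "copy till line end"],
}

def training_pairs():
    """Yield (utterance, vocal path) pairs for a speech model."""
    for path, variants in pronunciations.items():
        for utterance in variants:
            yield utterance, path

pairs = list(training_pairs())
print(len(pairs))  # 5
```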

Voice commands do not necessarily have to mutate state; some should rather just yield specific data or make inquiries into specifics of the state, for use in the more complicated layers.
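The mutation/inquiry split can be sketched like so; the command set is invented, and the point is only that inquiries read state without touching it.

```python
# Hypothetical sketch: voice commands split into mutations (change
# state) and inquiries (only read it), so higher layers can query
# without side effects.
state = {"words": ["alpha", "beta", "gamma"]}

def delete_word(s):   # mutation: changes the state
    s["words"].pop()

def word_count(s):    # inquiry: yields data, state untouched
    return len(s["words"])

print(word_count(state))  # 3
delete_word(state)
print(word_count(state))  # 2
```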

D• Visual representation

The purpose of this interface is to provide layouts, intermediate structures, and the facts about how certain elements relate to each other visually.

This is the first layer to somewhat rely on the structural dialects provided by Terminal. Technically speaking, D• is preoccupied with different canonical forms of signs and the relationships between pairs of signs.

Terminal and its structural dialects are designed in such a way that this information is thought to be completely sufficient to display everything on the screen. Any additional information may or may not be provided, at the mercy of the set.
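A toy sketch of what that sufficient information might look like as data: canonical forms of signs plus relations between pairs of signs, which a renderer such as Terminal could turn into a layout. All the names below are invented.

```python
# Hypothetical sketch: canonical forms of signs and pairwise relations.
canonical = {"buffer": "panel", "prompt": "strip"}
relations = [("prompt", "below", "buffer")]

def layout_facts():
    """Spell out each pairwise relation in terms of canonical forms."""
    return [f"{canonical[a]} {how} {canonical[b]}" for a, how, b in relations]

print(layout_facts())  # ['strip below panel']
```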

D¬ Augmentative experience

AR/VR stands for Augmented Reality / Virtual Reality.

The last remaining piece of the puzzle, the augmentative experience layer uses the shared AR dialects of Terminal in order to close the loop and utilise every usable surrounding surface and space, as well as the limb, eye, hand and body movements of the actor, to specifically tailor the Logos experience in space.

Think of it like that: the augmentative experience can’t provide any functionality not otherwise already covered by both the voice and the visual extensions.

Augmentative experience is preoccupied with fine touch tuning: making what’s already intuitive even more so, while in some cases enclosing the very complicated and nuanced mnemonics into a simple series of intuitive movements.


“A new batch of information for analysis—incoming!”

Published: Wednesday, 1 Jul 2020
