As I Walked Out One Evening

As I walked out one evening,
   Walking down Bristol Street,
The crowds upon the pavement
   Were fields of harvest wheat.

And down by the brimming river
   I heard a lover sing
Under an arch of the railway:
   ‘Love has no ending.

‘I’ll love you, dear, I’ll love you
   Till China and Africa meet,
And the river jumps over the mountain
   And the salmon sing in the street,

‘I’ll love you till the ocean
   Is folded and hung up to dry
And the seven stars go squawking
   Like geese about the sky.

‘The years shall run like rabbits,
   For in my arms I hold
The Flower of the Ages,
   And the first love of the world.’

But all the clocks in the city
   Began to whirr and chime:
‘O let not Time deceive you,
   You cannot conquer Time.

‘In the burrows of the Nightmare
   Where Justice naked is,
Time watches from the shadow
   And coughs when you would kiss.

‘In headaches and in worry
   Vaguely life leaks away,
And Time will have his fancy
   To-morrow or to-day.

‘Into many a green valley
   Drifts the appalling snow;
Time breaks the threaded dances
   And the diver’s brilliant bow.

‘O plunge your hands in water,
   Plunge them in up to the wrist;
Stare, stare in the basin
   And wonder what you’ve missed.

‘The glacier knocks in the cupboard,
   The desert sighs in the bed,
And the crack in the tea-cup opens
   A lane to the land of the dead.

‘Where the beggars raffle the banknotes
   And the Giant is enchanting to Jack,
And the Lily-white Boy is a Roarer,
   And Jill goes down on her back.

‘O look, look in the mirror,
   O look in your distress:
Life remains a blessing
   Although you cannot bless.

‘O stand, stand at the window
   As the tears scald and start;
You shall love your crooked neighbour
   With your crooked heart.’

It was late, late in the evening,
   The lovers they were gone;
The clocks had ceased their chiming,
   And the deep river ran on.

W. H. Auden, 1907-1973

A Psalm of Life

What The Heart Of The Young Man Said To The Psalmist.

Tell me not, in mournful numbers,
   Life is but an empty dream!
For the soul is dead that slumbers,
   And things are not what they seem.

Life is real! Life is earnest!
   And the grave is not its goal;
Dust thou art, to dust returnest,
   Was not spoken of the soul.

Not enjoyment, and not sorrow,
   Is our destined end or way;
But to act, that each to-morrow
   Find us farther than to-day.

Art is long, and Time is fleeting,
   And our hearts, though stout and brave,
Still, like muffled drums, are beating
   Funeral marches to the grave.

In the world’s broad field of battle,
   In the bivouac of Life,
Be not like dumb, driven cattle!
   Be a hero in the strife!

Trust no Future, howe’er pleasant!
   Let the dead Past bury its dead!
Act,— act in the living Present!
   Heart within, and God o’erhead!

Lives of great men all remind us
   We can make our lives sublime,
And, departing, leave behind us
   Footprints on the sands of time;

Footprints, that perhaps another,
   Sailing o’er life’s solemn main,
A forlorn and shipwrecked brother,
   Seeing, shall take heart again.

Let us, then, be up and doing,
   With a heart for any fate;
Still achieving, still pursuing,
   Learn to labor and to wait.

Henry Wadsworth Longfellow, 1838

python 3 / venv

abstract There have been thousands of articles written about Python virtual environments. As of 2020-10-07, a quick search on Google for “python virtualenv article” returns “about 1,450,000 results”. In the majority of cases I have seen, they are partially filled with impractical advice. They gush about pip, pipx, venv, virtualenvwrapper, pipenv, poetry and others, but they don’t explain how to achieve an ergonomic execution of a Python program once the virtualenv is in place, and in all environments. To be clear, too many of these articles imply, or simply gloss over, an ugly reality of their suggested approach: that the command line used to run project components will differ between environments, thus turning something trivial into something diabolical.

This article attempts to correct that by explaining how to achieve as smooth a deployment experience as possible across environments (from local development to remote production). This is at once both pretty basic and very important, but “doing the basic stuff really well” has proved to be a profitable approach.

preliminaries / terminology

It’s important to be clear about what it is we are discussing.

versions When “Python” is mentioned here, version 3.6 or later of CPython on a Debian-based Linux installation is assumed. Win32 and Python 2, fine platforms that they are, do not come under consideration, except perhaps by way of comparison to illustrate a concept or example.

venv The “venv” module does the same job as the “virtualenv” module from Python 2. It creates and populates the .venv/ directory.

pip The “pip” module is the tool which installs modules into .venv/lib/.../site-packages/ for use by the project. By the convention described here, the packages and their versions are enumerated in the file etc/pip/requirement.txt (singular).

bash We use bash for a wrapper script to make the .venv/ environment easily accessible. [1]

structure Source code arrangement is a topic all-too-often overlooked. I suggest that the filesystem-level layout of your project is just as important as any other design decision or ergonomic consideration. Think of it like a sign saying “Please keep the workshop and van clean and tidy.” It’s a good idea, even if we don’t manage it all the time, but so much else is easier and quicker if we do.

polyglot Python is not the only game in town. It is common for large projects to use more than one language. These languages and their associated runtime systems will have different conventions on how things work. To reduce friction in daily navigation, all languages, their libraries and tooling, must live in predictable, obvious locations in the project tree. A little consideration goes a long way.

Things that this solution does not do.

Multiple python versions The solution presented here is a minimal-but-complete solution for the common case. It does not attempt to rival or compete with any of the capabilities of virtualenvwrapper, pipenv, poetry and others. I understand that library maintainers may wish to run their unit tests on multiple versions of the Python runtime, and this is a laudable goal.

Multiple transitive library dependencies If your project requires both library A and library B, and A requires library C version 4, and B requires library C version 5, we have a problem which no virtualenv management tool can reasonably resolve unless it starts adjusting module locations at installation time and rewriting imports of library C within libraries A and B. While this may be conceivable, it is not a robust approach, and on balance should be avoided.


bin/venv-create This invokes the venv module to create the .venv/ directory, and pip to populate the latter with the modules listed in etc/pip/requirement.txt.


There are two wrapper scripts for convenience / practicality. (A PhD student working as a TA once asked me what a wrapper script was. I mean, had he no imagination?) The main purpose of wrapper scripts is to ensure the sanity of the execution environment, so to call them “convenience scripts” is to understate their necessity.

bin/venv-python This sets PYTHONPATH to include src/python/main/, sets PYTHONDONTWRITEBYTECODE, and then calls .venv/bin/python, passing along any arguments.

If invoked via a symlink, it invokes .venv/bin/python3 with the “-m” option, passing along any arguments, the first being the module name, which it takes from the name of the symlink itself. This allows us to create a symlink such as

bin/jupyterlab -> venv-python

and running bin/jupyterlab does the right thing, with the expected environment, and without typing a long path. All is well.

bin/run-main Purely as a matter of convention, this runs the project’s main program from src/python/main/ via bin/venv-python — you will likely want to add further “run-*” scripts as copies of this to suit your requirements.

That is pretty much it — some simple execution machinery underneath to create a predictable interface above, allowing the user to run the system in the same fashion in all environments. In the end, isn’t that what it’s all about?

complaints department

Thank you for your time and attention. If you have anything to say (either good or bad) about what I have written above, I would be very grateful to hear from you. Alternatively, if you have nothing to say but you like it anyway, then please tell a friend.


[1] The GNU bash shell is to be tamed, not avoided. If you think that knowing this is beneath you, then you need to think again. A serious chef keeps sharp knives.

The Second Coming

Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.

Surely some revelation is at hand;
Surely the Second Coming is at hand.
The Second Coming! Hardly are those words out
When a vast image out of Spiritus Mundi
Troubles my sight: somewhere in sands of the desert
A shape with lion body and the head of a man,
A gaze blank and pitiless as the sun,
Is moving its slow thighs, while all about it
Reel shadows of the indignant desert birds.
The darkness drops again; but now I know
That twenty centuries of stony sleep
Were vexed to nightmare by a rocking cradle,
And what rough beast, its hour come round at last,
Slouches towards Bethlehem to be born?

W. B. Yeats, 1865-1939

directory / file duality

UNIX neophytes are often to be heard raving about how “everything is a file”. In UNIX, things are simple, after all, and this is to be celebrated, we are told.

What they really mean to say is that “every process accesses data via file descriptors using the open(), close(), read(), and write() system calls”, but neither does that roll off the tongue as easily nor sound like a reason for celebration. Now, this uniformity is all well and good from an API perspective, but if everything were indeed a file, then every file, directory, and network socket would be addressable by name system-wide (or perhaps within the same namespace, if one is thinking in terms of Linux containers), and not just via a per-process file descriptor, and the celebrations would be back on.

This article is not about that, though.

In the current POSIX setup, directories are opened via opendir(), read by readdir(), and closed by closedir(). These three are library functions, not system calls (and perhaps this is the problem). Regular files are opened via open(), read from by read(), written to by write(), and closed by close().
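The asymmetry is easy to observe from the shell on a Linux system: a directory answers to the directory API, but refuses byte-level reads:

```shell
# Directories can be listed, but not read as a stream of bytes.
demo=$(mktemp -d)                 # a scratch directory
touch "$demo/a" "$demo/b"

ls "$demo"                        # the readdir() path: works, prints "a" and "b"

if cat "$demo" 2>/dev/null; then  # the read() path: fails with EISDIR on Linux
    echo "byte-read succeeded"
else
    echo "byte-read refused"
fi
```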

But imagine if a given path on a system could be opened either as a file or as a directory depending on the desired usage. What user-level data representations would this enable? What would this prevent? What, if anything, would this added functionality break?

This would allow a normal-looking regular file such as some/where/file.dat to possess “sub-resources” (this is not an official term, just one I’ll use in this article, and they’ll appear as underlined italic for clarity) such as some/where/file.dat/index-001.txt, some/where/file.dat/summary.txt, or some/where/file.dat/results.txt. An entire directory tree could exist beneath such a file, in the same place as the example sub-resources.

Similarly, a directory such as some/where/ which looks like an application directory could be opened as a file, from which some metadata might be read, such as a set of compatibility requirements, or application signatures.

In short then, under this scheme, all files may be opened as directories, and all directories may be opened as files. The former gives the ability to add multiple files of metadata to an existing directory of data, for example. The latter allows for applying metadata labels to a directory, among other possibilities.


HP Z240 SFF / M.2 SSD

The M.2 SSD is a Samsung MZ-V7S2T0BW. This is a 2TB device.

The PCI carrier board is an HP MS-4365. Make sure you get one with a heatsink and two thermally conductive pads. One of the pads is slightly thicker than the other. The thicker one goes between the heatsink and the M.2 SSD; the thinner one probably won’t be any use.

With the heatsink, the M.2 SSD will run at about 45C. Without the heatsink, it will run at about 65C or higher. It will throttle its performance at these higher temperatures, and presumably wear out sooner rather than later.

The HP Z240 SFF has four PCI slots. Install the MS-4365 carrier board into the PCI Express Gen3 slot (x16 mechanical / x4 electrical, low profile, half length), the one with the low-profile tang. Connect the board’s activity-light lead to the corresponding terminal on the motherboard. Note that this is not a SATA device, so the activity light on the front of the case will not illuminate when this device is, err, active.


HP Z240 Small Form Factor Workstation Specifications

Samsung 970 EVO Plus 2 TB PCIe NVMe M.2 (2280) Internal Solid State Drive (SSD) (MZ-V7S2T0)

Quotes / random

“The key to understanding complicated things is to know what not to look at, and what not to compute, and what not to think.” Gerald Jay Sussman

“Everything comes to us that belongs to us if we create the capacity to receive it.” Rabindranath Tagore

“You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.” Buckminster Fuller

“A huge vocabulary is not always an advantage. Simple language… can be more effective than complex language, which can lead to stiltedness or suggest dishonesty.” John Gardner, The Art of Fiction

the next display server

security Security matters are not considered here. If this concerns or bothers you, please stop reading.

display server A display server is something which provides the service of display to something else. That “something else” is a client program. The terminology is sometimes confusing because the user interacts with the client program via the display server, the latter being a program running on the user’s device. This setup is very useful when one needs to have a graphical experience with a program running on a remote system.

remark However, the main effect is to confuse new users with the ambiguous terminology. This is the only situation of which I know where the user controls a client thing through a server thing.

browser The web browser is a case of arrested development. It’s not a display server, but it’s where web applications display themselves. Much has been written about “the browser as operating system”. This sounds clever, but does not really provide much insight, and is only true for a narrow sense of the term “operating system”.

browser apps These occupy an anonymous middle ground, running in the browser on the desktop or on a mobile device.

electron apps These live in a peculiar location: they run in a browser which can only execute JavaScript, but one that is used as a desktop application.

universality Given that digital content is made up of many formats, we need a tool to be able to (at least) render all of them — to literally browse collections of local and remote content, and to do so without having to install another application on one’s device.

renderer A universal document renderer is able to gain new abilities to display newly-encountered media types (such as audio), and newly-encountered encoding formats (such as MP3, OGG, JPEG, MOV, …), and does so without interrupting the user, except perhaps for confirming the addition of the new ability.

content types A universal document renderer has a bootstrap architecture, and broadly speaking, is little more than a framework for rendering plugins of various media types. This demotes the DOM / HTML / CSS combination from its current triumvirate status to be that of “just another content type”, and permits new expressions of interactive media.

transfer protocols HTTP could also lose its place as the dominant transfer protocol. URL notation is sufficient to enable the multi-protocol handling central to the bootstrapping nature of the system. HTTP remains as a basic transport for “DOM / HTML / CSS” content and renderer implementations.

implementations Code for rendering lives in repositories on the network. Universal document renderers are directed by content type metadata to fetch implementations of decoders for newly-encountered content types from these repositories. Once fetched, they remain cached locally by the framework until a new version is available, or they are no longer required and may be deleted. Universal document renderers may also check repositories periodically to ensure they have the desired (usually the latest) versions.

conclusion Browsers lack the flexibility to be classified as universal document renderers; they have not gained the capability of rendering all possible document formats. In practice, the dominant document format is the DOM via HTML and CSS, with PDF close behind. Video (and, to a lesser extent, audio) has its own codecs, but these are decided mainly by the major video sites and browser producers.

However, as browsers have evolved, they have effectively become a possible successor to display servers such as X11. This may be attributed more to the efforts of the developers of web frameworks than to those of browser authors.

The “next display server” is a universal document renderer, incremental in nature, and invisible in action. A display server is a system-level concept. A universal document renderer is a user- and application-level concept.

Strongtalk-2020 / beginnings

Strongtalk-2020 is the name I have given to one of my recreational computing projects. It’s the continuation of the work done in the late 1990s by an all-star cast of programmers who went on to create many other amazing things including, but by no means limited to:

  • the Hotspot JIT compiler for the Java VM
  • the V8 Javascript VM
  • the Dart programming language
  • the Newspeak programming platform
  • the Java programming language specification
  • warehouse-scale computing at Google

This is quite an array of achievements — and was more than enough to pique my interest in the Strongtalk platform and prompt me to investigate what work would be required to bring it up to date to run on a modern-day computer system.

Performing such a task is (at least) an interesting thought experiment, and there have been a number of attempts by people far more capable than me to resuscitate it.

Here are some of the challenging facts about the project, visible from the outset:

  • 32-bit computing model
  • the C++98 dialect of C++, with little use of the C++ standard library
  • diabolical use of the preprocessor
  • Windows as the primary platform
  • Project appears to have been abandoned several times

Undeterred by the above, I started to see if I could get it compiled on a modern Linux installation. I consider Debian 10 to be the best option for what I need in 2020. Your experiences and requirements may be different. 🙂

I consider myself fortunate to have been involved in an ambitious porting job of an ANSI C99 project to many different UNIX platforms, as well as Win32 and MacOS X. (I was mainly involved in the UNIX part of things.) Although I made a few decisions then that I would not make now, the experience left me determined to ensure that whatever one creates on a computer is usable by others. Emboldened by this experience I felt sure that I could manage it, and even if the project never made it to a worthwhile checkpoint, the task would be educational.

It’s always good to start these sorts of things at the end and work backwards to the current status to see what’s required. These were the rough targets I had in mind:

  • Use the latest revision of the C++ standard, C++20, it having just been agreed by the C++ working group in February 2020.
  • Convert the code to be 64-bit only and drop support for 32-bit architectures.
  • Run on both Linux and Windows.
  • Conservatively make use of modern-and-popular C++ libraries, for important-but-peripheral aspects of the system such as logging and unit tests.
  • Slightly less conservatively, make use of modern and well-tested libraries, for important-and-central aspects of the system which required updating. The only component to which this applies is the x86-64 runtime macro assembler for the JIT compiler.
  • Even less conservatively, refresh the GUI toolkit.

Some of the less precisely defined goals are:

  • Keep the source tree the same shape where possible.
  • Improve the naming of classes and variables (usually this means making them longer and more descriptive).