abstract There have been thousands of articles written about Python virtual environments. As of 2020-10-07, a quick search on Google for “python virtualenv article” returns “about 1,450,000 results”. In the majority of cases I have seen, they are at least partially filled with impractical advice. They gush about pip, pipx, venv, virtualenvwrapper, pipenv, poetry and others, but they don’t explain how to achieve an ergonomic execution of a Python program once the virtualenv is in place, in all environments. To be clear, too many of these articles imply, or simply gloss over, an ugly reality of their suggested approach — that the command line used to run project components will differ between environments, thus turning something trivial into something diabolical.
This article attempts to correct that by explaining how to achieve as smooth a deployment experience as possible across environments (from local development to remote production). This is at once both pretty basic and very important, but I have found that “doing the basic stuff really well” has proved to be a profitable approach.
preliminaries / terminology
It’s important to be clear about what it is we are discussing.
versions When “Python” is mentioned here, version 3.6 or later of CPython on a Debian-based Linux installation is assumed. Win32 and Python 2, fine platforms that they are, do not come under consideration, except perhaps by way of comparison to illustrate a concept or example.
venv The "venv" module does the same job as the “virtualenv” module from Python 2. It creates and populates the .venv/ directory.
pip The “pip” module is the tool which installs modules into .venv/lib/.../site-packages/ for use by the project. By the convention described here, the packages and their versions are enumerated in the file etc/pip/requirement.txt (singular)
bash We use bash for wrapper scripts to make the .venv/ environment easily accessible.
structure Source code arrangement is a topic all-too-often overlooked. I suggest that the filesystem-level layout of your project is as just as important as any other design decision or ergonomic consideration. Think of it like a sign saying “Please keep the workshop and van clean and tidy.” It’s a good idea, even if we don’t manage it all the time, but so much else is easier and quicker if we do.
polyglot Python is not the only game in town. It is common for large projects to use more than one language. These languages and their associated runtime systems will have different conventions on how things work. To reduce friction in daily navigation, all languages, their libraries and tooling, must live in predictable, obvious locations in the project tree. A little consideration goes a long way.
Things that this solution does not do.
Multiple python versions The solution presented here is a minimal-but-complete solution for the common case. It does not attempt to rival or compete with any of the capabilities of virtualenvwrapper, pipenv, poetry and others. I understand that library maintainers may wish to run their unit tests on multiple versions of the Python runtime, and this is a laudable goal.
Multiple transitive library dependencies If your project requires both libraries A and B, and A requires library C version 4, while B requires library C version 5, we have a problem which no virtualenv management tool can reasonably resolve, unless it starts adjusting module locations at installation time and rewriting imports of library C within libraries A and B. While this may be conceivable, it is not a robust approach and, on balance, should be avoided.
bin/venv-create This invokes the venv module to create the .venv/ directory, and pip to populate it with the modules listed in etc/pip/requirement.txt.
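A minimal sketch of what bin/venv-create might look like under these conventions. The paths (.venv/, etc/pip/requirement.txt) are this article's conventions, not anything mandated by venv or pip; the function name venv_create is just for illustration.

```shell
#!/bin/bash
# Sketch of bin/venv-create following the layout conventions above.
set -euo pipefail

# venv_create ROOT: create ROOT/.venv/ and populate it from
# ROOT/etc/pip/requirement.txt
venv_create() {
  local root="$1"
  python3 -m venv "$root/.venv"
  "$root/.venv/bin/pip" install --quiet --requirement "$root/etc/pip/requirement.txt"
}

# The real script would derive the project root from its own location:
#   venv_create "$(cd "$(dirname "$0")/.." && pwd)"
```

The function form keeps the root-derivation separate from the work, which also makes the script trivially testable.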
There are two wrapper scripts for convenience / practicality. (A PhD student working as a TA once asked me what a wrapper script was. I mean, had he no imagination?) The main purpose of wrapper scripts is to ensure the sanity of the execution environment, so to call them “convenience scripts” is to understate their necessity.
bin/venv-python This sets PYTHONPATH to include src/python/main/, sets PYTHONDONTWRITEBYTECODE, and then calls .venv/bin/python, passing along any arguments.
If invoked via a symlink, it invokes .venv/bin/python3 with the “-m” option, passing along any arguments, with the module name taken from the symlink’s own name. This allows us to create a symlink such as
bin/jupyterlab -> venv-python
and running bin/jupyterlab does the right thing, in the expected environment, without typing a long path. All is well.
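A minimal sketch of bin/venv-python under these conventions. The dispatch is factored into a function (run_in_venv, a name invented here) so the behaviour can be shown without relying on the script's own path; the real script would derive both arguments from its own invocation, as the final comment shows.

```shell
#!/bin/bash
# Sketch of bin/venv-python as described above.
set -euo pipefail

# run_in_venv ROOT NAME ARGS...: run the venv interpreter with the
# project environment set up. If NAME is not "venv-python", treat it as
# a module name for "python -m" — this is how symlinks such as
# bin/jupyterlab -> venv-python select their module.
run_in_venv() {
  local root="$1" name="$2"
  shift 2
  export PYTHONPATH="$root/src/python/main${PYTHONPATH:+:$PYTHONPATH}"
  export PYTHONDONTWRITEBYTECODE=1
  if [ "$name" = "venv-python" ]; then
    exec "$root/.venv/bin/python3" "$@"
  else
    exec "$root/.venv/bin/python3" -m "$name" "$@"
  fi
}

# The real script derives both arguments from how it was invoked:
#   run_in_venv "$(cd "$(dirname "$0")/.." && pwd)" "$(basename "$0")" "$@"
```

Because basename follows the name the script was invoked under, not its target, the one file serves as both the plain interpreter wrapper and every module-launching symlink.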
bin/run-main Purely as a matter of convention, this runs src/python/main/main.py via bin/venv-python — you will likely want to add further “run-*” scripts as copies of this to suit your requirements.
That is pretty much it — some simple execution machinery underneath to create a predictable interface above, allowing the user to run the system in the same fashion in all environments. In the end, isn’t that what it’s all about?
Thank you for your time and attention. If you have anything to say (either good or bad) about what I have written above, I would be very grateful to hear from you. Alternatively, if you have nothing to say but you like it anyway, then please tell a friend.
 The GNU bash shell is to be tamed, not avoided. If you think that knowing this is beneath you, then you need to think again. A serious chef keeps sharp knives.
Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.
Surely some revelation is at hand;
Surely the Second Coming is at hand.
The Second Coming! Hardly are those words out
When a vast image out of Spiritus Mundi
Troubles my sight: somewhere in sands of the desert
A shape with lion body and the head of a man,
A gaze blank and pitiless as the sun,
Is moving its slow thighs, while all about it
Reel shadows of the indignant desert birds.
The darkness drops again; but now I know
That twenty centuries of stony sleep
Were vexed to nightmare by a rocking cradle,
And what rough beast, its hour come round at last,
Slouches towards Bethlehem to be born?
UNIX neophytes are often to be heard raving about how “everything is a file”. In UNIX, things are simple, after all, and this is to be celebrated, we are told.
What they really mean to say is that “every process accesses data via file descriptors using the open(), close(), read(), and write() system calls”, but that neither rolls off the tongue as easily nor sounds like a reason for celebration. Now, this uniformity is all well and good from an API perspective, but if everything were indeed a file, then every file, directory, and network socket would be addressable by name system-wide (or perhaps within the same namespace, if one is thinking in terms of Linux containers), and not just via per-process file descriptors, and the celebrations would be back on.
This article is not about that, though.
In the current POSIX setup, directories are opened via opendir(), read by readdir(), and closed by closedir(). These three are library functions, not system calls (and perhaps this is the problem). Regular files are opened via open(), read from by read(), written to by write(), and closed by close().
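The asymmetry is easy to demonstrate from the shell: a directory can be opened by open(2), but attempting to read(2) it fails with EISDIR, while readdir() (via ls here) succeeds.

```shell
# Demonstrate the file/directory asymmetry described above.
tmp=$(mktemp -d)
echo "hello" > "$tmp/file.dat"

cat "$tmp/file.dat"    # read() on a regular file: prints "hello"
cat "$tmp" || true     # read() on a directory: "Is a directory" (EISDIR)

ls "$tmp"              # directories are read via readdir() instead
rm -rf "$tmp"
```

Under the scheme imagined below, both cat invocations would be meaningful.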
But imagine if a given path on a system could be opened either as a file or as a directory depending on the desired usage. What user-level data representations would this enable? What would this prevent? What, if anything, would this added functionality break?
This would allow a normal-looking regular file such as some/where/file.dat to possess “sub-resources” (this is not an official term, just one I’ll use in this article, and they’ll appear as underlined italic for clarity) such as some/where/file.dat/index-001.txt, some/where/file.dat/summary.txt, or some/where/file.dat/results.txt. An entire directory tree could exist in the same place as the example sub-resources given here.
Similarly, a directory such as some/where/amazing.app/ which looks like an application directory could be opened as a file, from which some metadata might be readable, such as a set of compatibility requirements, or application signatures.
In short then, under this scheme, all files may be opened as directories, and all directories may be opened as files. The former gives the ability to add multiple files of metadata to an existing directory of data, for example. The latter allows for applying metadata labels to a directory, among other possibilities.
M.2 SSD is Samsung MZV7S2T0BW. This is a 2TB device.
PCI carrier board is HP MS-4365 Make sure you get one with a heatsink and two thermal conductive pads. One of the pads is slightly thicker than the other. The thicker one goes between the heatsink and the M.2 SSD. The thinner one probably won’t be any use.
With the heatsink, the M.2 SSD will run at about 45C. Without the heatsink, it will run at about 65C or higher. It will throttle its performance at these higher temperatures, and presumably wear out sooner rather than later.
HP Z240 SFF has four PCI slots. Install the MS-4365 carrier board into the PCI Express Gen3 slot that is x16 mechanical / x4 electrical (low-profile, half-length), using the low-profile tang. Connect the board to the terminal on the motherboard for the activity light. Note that this is not a SATA device, so the activity light on the front of the case will not illuminate when this device is, err, active.
security Security matters are not considered here. If this concerns or bothers you, please stop reading.
display server A display server is something which provides the service of display to something else. That “something else” is a client program. The terminology is sometimes confusing because the user interacts with the client program via the display server, the latter being a program running on the user’s device. This setup is very useful when one needs to have a graphical experience with a program running on a remote system.
remark However, the main effect is to confuse new users with the ambiguous terminology. This is the only situation of which I know where the user controls a client thing through a server thing.
browser The web browser is a case of arrested development. It’s not a display server, but it’s where web applications display themselves. Much has been written about “the browser as operating system”. This sounds clever, but does not really provide much insight, and is only true for a narrow sense of the term “operating system”.
browser apps These occupy an ill-defined middle ground, running in the browser on the desktop or on a mobile device.
universality Given that digital content is made up of many formats, we need a tool to be able to (at least) render all of them — to literally browse collections of local and remote content, and to do so without having to install another application on one’s device.
renderer A universal document renderer is able to gain new abilities to display newly-encountered media types (such as audio), and newly-encountered encoding formats (such as MP3, OGG, JPEG, MOV, …), and does so without interrupting the user, except perhaps for confirming the addition of the new ability.
content types A universal document renderer has a bootstrap architecture, and broadly speaking, is little more than a framework for rendering plugins of various media types. This demotes the DOM / HTML / CSS combination from its current triumvirate status to be that of “just another content type”, and permits new expressions of interactive media.
transfer protocols HTTP could also lose its place as the dominant transfer protocol. URL notation is sufficient to enable the multi-protocol handling central to the bootstrapping nature of the system. HTTP remains as a basic transport for “DOM / HTML / CSS” content and renderer implementations.
implementations Code for rendering lives in repositories on the network. Universal document renderers are directed by content type metadata to fetch implementations of decoders for newly-encountered content types from these repositories. Once fetched, they remain cached locally by the framework until a new version is available, or they are no longer required and may be deleted. Universal document renderers may also check repositories periodically to ensure they have the desired (usually the latest) versions.
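The fetch-and-cache step might look something like the following hypothetical sketch. Everything here is invented for illustration — the function name fetch_renderer, the RENDERER_CACHE location, and the cp standing in for a network fetch; the point is only the fetch-once, reuse-thereafter behaviour.

```shell
# Hypothetical sketch of fetching a renderer implementation for a newly
# encountered content type, caching it locally for reuse.
fetch_renderer() {
  local ctype="$1" url="$2"
  local cache="${RENDERER_CACHE:-$HOME/.cache/udr}"
  local slug="${ctype//\//_}"    # e.g. audio/mpeg -> audio_mpeg
  if [ ! -e "$cache/$slug" ]; then
    mkdir -p "$cache"
    cp "$url" "$cache/$slug"     # stand-in for fetching from a repository
  fi
  printf '%s\n' "$cache/$slug"
}
```

A second request for the same content type never touches the “network”; a periodic version check against the repository, as described above, would slot in before the cache-hit return.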
conclusion Browsers lack the flexibility to be classified as universal document renderers; they have not gained the capability of rendering all possible document formats. In practice, the dominant document format is the DOM via HTML and CSS, with PDF close behind. Video (and to a lesser extent audio) has its own codecs, but which ones is decided mainly by the major video sites and browser producers.
However, as browsers have evolved, they have effectively become a possible successor to display servers such as X11. This may be attributed to the efforts of developers of web frameworks, more than to those of browser authors.
The “next display server” is a universal document renderer, incremental in nature, and invisible in action. A display server is a system-level concept. A universal document renderer is a user- and application-level concept.
Strongtalk-2020 is the name I have given to one of my recreational computing projects. It’s the continuation of the work done in the late 1990s by an all-star cast of programmers who went on to create many other amazing things including, but by no means limited to:
the Hotspot JIT compiler for the Java VM
the Dart programming language
the Newspeak programming platform
the Java programming language specification
warehouse-scale computing at Google
This is quite an array of achievements — and was more than enough to pique my interest in the Strongtalk platform and investigate what work would be required to bring it up to date to run on a modern-day computer system.
Performing such a task is (at least) an interesting thought experiment, and there have been a number of attempts by people far more capable than me to resuscitate it.
Here are some of the challenges facing the project, visible from the outset:
32-bit computing model
Written in the C++98 dialect, with little use of the C++ standard library.
diabolical use of the preprocessor
Windows as the primary platform
Project appears to have been abandoned several times
Undeterred by the above, I started to see if I could get it compiled on a modern Linux installation. I consider Debian 10 to be the best option for what I need in 2020. Your experiences and requirements may be different. 🙂
I consider myself fortunate to have been involved in an ambitious job of porting an ANSI C99 project to many different UNIX platforms, as well as Win32 and MacOS X. (I was mainly involved in the UNIX part of things.) Although I made a few decisions then that I would not make now, the experience has stayed with me, along with the lesson that whatever one creates on a computer should be usable by others. Emboldened by this experience I felt sure that I could manage it, and even if the project never made it to a worthwhile checkpoint, the task would be educational.
It’s always good to start these sorts of things at the end and work backwards to the current status to see what’s required. These were the rough targets I had in mind.
Use the latest revision of the C++ standard, C++20, it having just been agreed by the C++ working group in February 2020.
Convert the code to be 64-bit only and drop support for 32-bit architectures.
Run on both Linux and Windows.
Conservatively make use of modern-and-popular C++ libraries, for important-but-peripheral aspects of the system such as logging and unit tests.
Slightly less conservatively, make use of modern and well-tested libraries, for important-and-central aspects of the system which required updating. The only component to which this applies is the x86-64 runtime macro assembler for the JIT compiler.
Even less conservatively, refresh the GUI toolkit.
Some of the less precisely defined goals are:
Keep the source tree the same shape where possible.
Improve the naming of classes and variables (usually this means making them longer and more descriptive)