TeXmacs Static Website

Created by Steven Baltakatei Sandoval on 2021-03-04T04:26Z under a CC BY-SA 4.0 license and last updated on 2021-03-04T06:06Z.

Summary

I created a static website using TeXmacs. It can be found here.

Background

I rewrote an older blog post about a distance-bounding protocol that I had originally authored in Markdown with MathML tags. The math typesetting features of TeXmacs, along with its static website generator and default CSS settings, made for a much nicer-looking site.

I used the Notes on TeXmacs blog as a template for some features, although I didn't use all of its Scheme features (that other blog automatically updates an Atom feed, among other things). Redirecting from index.html to another page was one feature I did use. I may adopt the Scheme macros and some CSS, but even with just the bare TeXmacs website generator settings, it looks pretty good!

Posted 2021-03-04T05:48:45+0000

Freedombox Static Website Research

Created by Steven Baltakatei Sandoval on 2021-02-17T22:32Z under a CC BY-SA 4.0 license and last updated on 2021-02-18T00:02Z.

Background

I have been investigating other possible methods for publishing content under my reboil.com domain. Currently, this ikiwiki blog is accessible via https://reboil.com/ikiwiki/blog/ . However, its formatting potential is limited; I would not use it alone when publishing mathematical equations, for example (hence my interest in TeXmacs).

The FreedomBox I own and run this blog from has been great for introducing me to the concepts of securing my own personal webpages served by Apache. It permits me to publish wiki and blog content via MediaWiki or Ikiwiki. Although I am familiar with the wikitext markup of MediaWiki thanks to time I have spent editing Wikipedia pages, I prefer simpler solutions that don't involve accepting public input in the form of comments or account registration. I just want to be able to publish my own works.

Investigations

I recently investigated how I could use org mode (a note organization application within Emacs) to automatically render HTML pages for serving within an Ikiwiki blog. However, I ultimately decided that none of the org mode plugins for Ikiwiki were suitable for me.

I later investigated the possibility of using a mathematical typesetting program called TeXmacs to render a static website using its own WYSIWYG interface. The disadvantage of authoring webpages in TeXmacs is that properly authoring the source files (file extension .tm) requires running the graphical WYSIWYG interface in order to immediately see the typesetting results. Markdown, by contrast, is a format in which the source is itself readable text. TeXmacs has a static website generator function that takes a source directory full of .tm files and outputs rendered .xhtml files viewable by a web browser; CSS preferences are set in the TeXmacs Preferences settings. Some built-in CSS preferences make the resulting webpage appear quite nice. The disadvantage of this method is that quick authoring of blog posts requires firing up TeXmacs to render a new .tm source file for each blog post I compose.

I also saw that FreedomBox developers have decided to add WordPress as an app alongside Ikiwiki as part of their 2021 Roadmap. A discussion on the forum indicates this decision was made due to user feedback that publishing a website on the FreedomBox still requires some technical know-how regarding GNU/Linux file permissions and modifying configuration files via the command line over an ssh connection. I'm reminded of my time messing with Ikiwiki's /var/lib/ikiwiki/blog.setup configuration file in order to enable or disable built-in plugins. I am wary of using WordPress, since popular plugins for it have been a regular source of security breaches, judging by the Security Now podcast I have followed for years.

Proposal

So, for now, I think I will stick to using Ikiwiki for composing simple text-only blog posts in org mode and then converting them to markdown for Ikiwiki to process. However, if ever images or mathematical equations need to be published, I think I will create a static website using TeXmacs and serve it under my reboil.com Freedombox via a root cron job that git pull's a repo containing the TeXmacs site generator output and rsync's select parts of the repository to the FreedomBox's /var/www/html/ directory.
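The cron-plus-rsync idea could be sketched as a single crontab entry. This is only a sketch: the repository location, the output subdirectory name, and the hourly schedule are all hypothetical placeholders, not my actual configuration.

```
# /etc/cron.d/texmacs-articles -- hypothetical root cron job (paths are assumptions)
# Hourly: fast-forward the local clone of the rendered site, then mirror
# only the generator output into Apache's document root.
0 * * * *  root  git -C /var/local/texmacs-site pull --ff-only && rsync -a --delete /var/local/texmacs-site/render/ /var/www/html/articles/
```

The --ff-only flag keeps the clone a passive mirror (the job fails rather than merging), and --delete keeps the served directory from accumulating stale pages.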

blog seems appropriate for the Ikiwiki site I have since it implies a "log", a stream of ideas that don't necessarily contain essential structured information. However, the TeXmacs pages I make will, by their nature, be capable of much more custom formatting thanks to TeXmacs's deep MathML support and pleasant typesetting features (headers, equation numbering, image linking, etc.). Therefore, also calling the TeXmacs static website a "blog" seems inappropriate. "Notes", "Articles", "Analects", or "Documents" seem more appropriate to describe what TeXmacs produces when rendering source .tm files. I like "Articles", since it invokes the idea of "newspaper articles" or "column articles": basically, relatively independent parts of a larger typeset publication. This 1913 definition from the Webster dictionary highlights the meaning I'd like to emphasize:

Article \Ar"ti*cle\, n. [F., fr. L. articulus, dim. of artus joint, akin to Gr. ?, fr. a root ar to join, fit. See {Art}, n.]

  1. A distinct portion of an instrument, discourse, literary work, or any other writing, consisting of two or more particulars, or treating of various topics; as, an article in the Constitution. Hence: A clause in a contract, system of regulations, treaty, or the like; a term, condition, or stipulation in a contract; a concise statement; as, articles of agreement. [1913 Webster]

  2. A literary composition, forming an independent portion of a magazine, newspaper, or cyclopedia. [1913 Webster]

Project code update

Here is a set of project codes related to my reboil.com static website.

BK-2020-08: Ikiwiki blog

BK-2020-08-2: Ikiwiki blog binary blobs

BK-2020-08-3: TeXmacs articles

BK-2020-08-4: TeXmacs articles binary blobs

  • Git repository
  • Note: a submodule of the BK-2020-08-3 git repository.

Conclusion

I think I will call my TeXmacs-powered static website articles, as in "articles of a newspaper" or, more ambitiously, "articles of an academic journal". I will host it at reboil.com/articles/, probably using a cron job on my FreedomBox to automatically rsync article files rendered and committed to a git repository.

Posted 2021-02-18T12:11:29+0000

Citation Needed Hunt

Created by Steven Baltakatei Sandoval on 2019-07-16T14:23Z under a CC BY-SA 4.0 license and last updated on 2020-12-22T19:10Z.

Citation Hunt

A tool for looking up random sentences with {{citation needed}} tags in Wikipedia is Citation Hunt.

This can be used to help find a random place to start improving Wikipedia.

It is not an efficient method for improving Wikipedia (given how easy it is for an editor to add a {{citation needed}} tag compared to how difficult it is to understand and locate an appropriate source). However, I think it is more useful than clicking Wikipedia's "Random article" link since it can help focus your mind on a single-sentence claim; when I open a random Wikipedia article and see dozens of reference tags and paragraphs of text, the phrase "Where do I even start?" comes to mind.


This work by Steven Baltakatei Sandoval is licensed under CC BY-SA 4.0

Posted 2021-02-17T16:18:01+0000

Russian Roulette

Created by Steven Baltakatei Sandoval on 2019-07-18T05:04Z under a CC BY-SA 4.0 license and last updated on 2020-12-23T00:52Z.

Background

I took it upon myself to review a {{citation needed}} tag on the Russian roulette page on Wikipedia.

I found a reference that cited the Oxford English Dictionary, which itself cited a 1937-01-30 issue of Collier's, a magazine containing short stories. The issue contained a short story named "Russian Roulette" by a person named Georges Surdez. I found a source for the document here and here.

It's interesting to me that the Oxford English Dictionary cites a document that is rather obscure. It makes me wonder what a library filled with every source the Oxford English Dictionary cites would look like. It seems like an ambitious project, but one that would be necessary to preserve the English language's history in a technically satisfying manner. Something to think about.

Wikipedia edit

The Wikipedia article containing the updated information as of 2019-07-16T22:54:07 is here:

I had removed usage of "Russian Poker" from a description of a 2019-01 incident in which a police officer shot another police officer, an incident the New York Times described as "Russian Roulette" but which no source I could find described as "Russian Poker". I think using that particular phrase to describe an incident that no source describes as such would be creating information out of nothing ("original research"). In this case, the information created is the strengthening of the link between the phrase "Russian Poker" and the concept of pulling the trigger of a possibly-loaded firearm while it is aimed at another person. I said as much in my descriptions of the edits.

I confirmed that the Collier's quote is partially referenced in a printed copy of the OED2 (page 295) in my local library. The relevant sections are:

> `REVOLUTION` *sb*. `I I`; **Russian roulette**, an act of
bravado in which a person loads (usu.) one
chamber of a revolver, spins the cylinder, holds
the barrel to his head, and pulls the trigger; also
*fig*.;

> Revolution had never taken place. **1937** `G. SURDEZ` in
*Collier's* 30 Jan. 16 ‘Did you ever hear of Russian roulette?’
…With the Russian army in Rumania, around 1917,…some
officer would suddenly pull out his revolver,…remove a
cartridge from the cylinder, spin the cylinder, snap it back in
place, put it to his head and pull the trigger.

Citation Hunt

I had originally found this page to edit via a Citation Hunt webpage that looks up random {{citation needed}} tags in Wikipedia articles and presents them to the user for consideration. URL is here.

I'm also considering using markdown to format text but it hurts legibility if I'm using vanilla emacs. (edit(2020-12-22T19:22Z): I rewrote this article in markdown.)


This work by Steven Baltakatei Sandoval is licensed under CC BY-SA 4.0

Posted 2021-02-17T16:18:01+0000

Kyoani Arson Attack

Created by Steven Baltakatei Sandoval on 2019-07-18T23:21Z under a CC BY-SA 4.0 license and last updated on 2020-12-23T00:53Z.

Wikipedia article

I helped to proofread references on the wikipedia article for the Kyoto Animation arson attack.

33 dead. The attack occurred at KyoAni's Studio 1 facility, where normally about 70 people work.


This work by Steven Baltakatei Sandoval is licensed under CC BY-SA 4.0

Posted 2021-02-17T16:18:01+0000

Markup formats

Written on 002019-07-28T19:48Z by baltakatei

Wikitext

Since I use emacs as my editor I thought I'd see if there was a set of emacs tools designed to facilitate editing wikitext (what you edit when you click the "edit source" tab on the top of any Wikipedia article).

One tool I wish existed would automatically reformat default in-line references into a format more legible to human eyes. Here is an example from the wikitext for the Kyoto Animation arson attack article:

<ref>{{Cite web|url=https://www3.nhk.or.jp/nhkworld/en/news/20190727_02/|title=Nearly 6 million dollars donated after Kyoto blaze {{!}} NHK WORLD-JAPAN News|website=NHK WORLD|language=en|access-date=2019-07-27}}</ref>

I want a tool that can convert that text into this:

<ref>{{Cite web
|url          = https://www3.nhk.or.jp/nhkworld/en/news/20190727_02/
|title        = Nearly 6 million dollars donated after Kyoto blaze {{!}} NHK WORLD-JAPAN News
|website      = NHK WORLD
|language     = en
|access-date  = 2019-07-27
}}</ref>
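As a proof of concept, a rough sketch of that conversion is possible with sed and awk before I learn enough Emacs Lisp to do it properly. The function name format_cite and the 12-character padding width are my own choices; it assumes GNU sed (for \n in the replacement) and that parameter names match [a-z-]+, which is what lets it leave the {{!}} pipe inside the title value alone.

```shell
#!/bin/sh
# format_cite: put each |name=value parameter of a {{Cite web}} template on
# its own line, with names padded so the '=' signs align.
# Assumptions: GNU sed; parameter names match [a-z-]+ (so pipes inside
# values, such as the {{!}} template, are left untouched).
format_cite() {
  printf '%s\n' "$1" \
    | sed -e 's/|\([a-z-][a-z-]*\)=/\n|\1=/g' \
          -e 's,}}</ref>,\n}}</ref>,' \
    | awk '/^\|/ { i = index($0, "=")
                   printf "|%-12s = %s\n", substr($0, 2, i - 2), substr($0, i + 1)
                   next }
           { print }'
}

format_cite '<ref>{{Cite web|url=https://www3.nhk.or.jp/nhkworld/en/news/20190727_02/|title=Nearly 6 million dollars donated after Kyoto blaze {{!}} NHK WORLD-JAPAN News|website=NHK WORLD|language=en|access-date=2019-07-27}}</ref>'
```

Running this on the NHK reference above reproduces the multi-line form shown; the same splitting logic could later be translated into an Emacs Lisp function operating on a selected region.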

I have been reading up on how the Emacs Lisp language works so I can write my own custom function to perform this formatting automatically on a region of text I select in emacs (I prefer to edit article source in emacs since I really like the keyboard shortcuts). I'm currently learning the basics.

LaTeX

I also had some curiosity about possibly using emacs for composing documents in the LaTeX markup language. I imagine that would be useful for producing documents explaining mathematical concepts in general.

This blog post, dated 2010-05-13 and titled Emacs as the Ultimate LaTeX Editor, seems promising. It recommends the use of the AUCTeX package available in the Debian repository (wikipedia page). It had a followup post which explained how LaTeX equation previews can be seen within the emacs GUI editor.

Markdown

I also decided I'd try writing this page using Markdown and the M-x markdown-preview function (which converts the Markdown markup into an HTML file and opens it in my browser).

I figured out I can use the markdown package to convert markdown files into HTML via:

$ markdown mytestfile.md > mytestfile.html

It looks better than raw text files, in any case. Maybe one day I'll get fancy and use texinfo or something from which I can auto-generate a static HTML website. For now, though, I'll focus on getting stuff written.

:: :: :: ::

Posted 2021-02-17T16:18:01+0000

2019-08-09T18:14:36Z; baltakatei>

Four Freedoms and Three Purposes

Below are some notes regarding my thoughts on how to identify a purpose for one's actions after you have discovered that there is no built-in purpose engraved into the laws of physics. See existential nihilism.

Four Freedoms

Source: Free Software Free Society (PDF)

Description: The Four Freedoms answer the question: "What abilities must a software programmer have in order to have control over computer programs they create?"

0. The freedom to run the program as you wish, for any purpose.

1. The freedom to study how the program works, and change it so it does your computing as you wish. Access to the source code is a precondition for this.

2. The freedom to redistribute copies so you can help your neighbor.

3. The freedom to distribute copies of your modified versions to others. By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

I believe these principles can be generalized to cover any apparatus constructed of matter (including the human body and augmentations to human capabilities). As of 2019, I am unaware of these principles being applied at any significant scale to machinery required to sustain the current human population on planet Earth. Mechanical fabrication prints, P&IDs, PFDs, and industry consensus standards for machines involved in water purification, food production, waste processing, and other technologies required to sustain the current earthbound human population are mostly "closed-source". I imagine the four freedoms are not being applied to improvements in such technologies because new improvements currently occur frequently. These improvements are protected by copyright and patent laws designed to accelerate their creation by granting patent owners temporary government-enforced monopolies on the manufacture of machines utilizing such improvements.

However, if new industrial machinery improvements are developed and released under "copyleft" licenses (ex: "Creative Commons") then collaborative efforts may outcompete the temporary government-enforced monopoly model (patents). For comparison, I direct you to the emergence of "free/libre, and open source software" (FLOSS, a.k.a. FOSS) development as the source of software used in most production environments (ex: GNU/Linux). This would mean that a person in a future where complex technology beyond the median human's understanding is required simply for basic human survival would at least have the option of "opting out" from having to agree to a license agreement in order to not die.

In other words, the Free Software Foundation lacks an industrial arm to ensure people can choose to rely on the freedom-respecting industrial machinery they need to survive. I think such an arm needs to be created, the sooner the better.

In order to have control over one's existence, all tools one uses to survive must satisfy the four freedoms.

Four freedoms (generalized)

Redefine "program" to include "any machine" and "source code" to include "technical documentation and source code required to fabricate the machine".

Note: For generalization across all baryonic matter, include in definition of "machine": "atomically precise molecular assembler".

One problem I see in the four freedoms is that there is no explicit provision addressing use of software to destroy or inflict physical harm. I imagine this omission is an artifact of the fact that software requires hardware to interact with physical reality. Physical hardware capable of causing energy or matter to flow is needed to inflict physical harm. Most weapons work by dumping sufficient energy into a small enough volume of space (ex: bullets) or causing certain types of disruptive material to flow (ex: poison). A computer control program does not directly cause harm; the final control element does (a hammer strikes a firing pin; an actuator opens a valve on a poison canister; a metal switch completes an electrical circuit). This raises the question: "How does one apply the four freedoms to hardware that may be used to kill someone?". Traditional agriculture tools such as scythes, sickles, pitchforks, horses, shovels, and sledgehammers all can be used to create food or to kill and destroy.

The question becomes one of purpose and motive. In general, I understand the purpose of a nation-state government to be the holder of a monopoly on violence. Therefore, a nation-state should be interested in controlling the production and possession of weapons and of tools which can be used as weapons. A nation-state with strong gun control actively profiles users of potentially lethal tools for past misbehavior and statistical likelihood of misuse. In other words, gun control laws restrict freedom globally in order to reduce the number of individuals who may possess lethal tools giving them power to cause physical harm at scale, since infliction of lethal physical harm itself deprives victims of all freedom.

Freedom is being able to make decisions that affect mainly you; power is being able to make decisions that affect others more than you. If we confuse power with freedom, we will fail to uphold real freedom. (Free Software, Free Society, v3, Ch. 46, Pg. 257)

The restriction of freedom imposed by a nation-state upon users of lethal tools may come in the form of punishment for possession in certain areas. It may also come in the form of complete prohibition of the sale or possession of certain lethal tools.

However, nation-states come and go on the timescale of human generations. What principles should an existential nihilist follow when not even the laws of physics endorse any particular creed or moral code? If I want to help implement the four freedoms for hardware I should be prepared to explain the problem even to someone who doesn't necessarily share my value system.

This next section is going to be one huge tangent but it led to some interesting thoughts that I thought I'd make public.

Purpose for existence in a purposeless cosmos

A rational observer should conclude that there is no all-powerful God or intrinsic meaning to existence. Nation-states historically have been built around shared hallucinations of religion, humanism, or money. If negotiations must be made with a group of inscrutable aliens or foreigners, what purposes for existing drive their value system? What purposes may already be shared in common between your familiar group and the foreign group prior to first contact? If coexisting with such foreigners is unavoidable and they share/possess lethal technologies without regard to your local government regulations, how can such common purposes be used to reestablish trade restrictions or to reevaluate the efficacy of existing restrictions?

These are thoughts that come to mind when I try to answer the question "What universal purposes might we share with foreigners whom we have never met?". The simple answer of "loving and supporting your family" comes to mind from my time working to open conversations with strangers as a missionary for the Mormon church. However, I want to define such a phrase methodically. What value and problems come from prioritizing resource allocation towards blood family members who may act irrationally as opposed to allocation towards non-blood friends who do act rationally?

An analogy to solving reaction rate kinetics

One strategy I have found useful when tackling a difficult question is to produce a set of answers, each element of which addresses a specific aspect of the original question. For example, when answering the question "What governs the amount of heat in a plastic polymerization reactor?", I would be inclined to answer with a rate equation composed of multiple terms added together. Usually it would be of the form:

(rate in) - (rate out) + (rate of generation) = (rate of accumulation)

The rates might be energy flows and/or mass flows. There may be multiple equations, one for each type of material present within the reactor. Two types of material might go in and therefore have positive rate-in terms in their equations. If it is a steady-state reactor, the rate of accumulation term should be set to zero. The rate of generation might be the generation of heat that must be dissipated by convection, diffusion, and radiation processes. Concentrations of material might be involved. Each rate may be a function of the chemical concentrations of various combinations of elements according to a separate set of rate-governing equations. See https://che.engin.umich.edu/people/scott-fogler/ .
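A toy instance of that balance shows the bookkeeping. All of the numbers here are made up purely for illustration; the point is only that the terms sum to the accumulation.

```shell
#!/bin/sh
# (rate in) - (rate out) + (rate of generation) = (rate of accumulation)
# Hypothetical energy flows for a reactor, in kJ/s (values are invented).
rate_in=120      # heat carried in by feed streams
rate_out=150     # heat removed by cooling and product streams
rate_gen=30      # heat generated by the reaction itself
rate_accum=$(( rate_in - rate_out + rate_gen ))
echo "rate of accumulation: $rate_accum kJ/s"
```

An accumulation of 0 kJ/s is exactly what the steady-state assumption asserts; a nonzero value would mean the reactor contents are heating or cooling over time.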

However, my point is that a seemingly unsolvable problem can be made usefully solvable by splitting the problem into simpler component processes.

Components of 42

With this in mind, I thought it might be useful to split up the question "What is a common purpose for existing that any sentient being might share with me?" (a stand-in for Douglas Adams' more ambitious and vague question "What is the answer to Life, the Universe, and Everything?", to which the answer is famously "42") into a set of "purposes" which a sentient (read: thinking) being made of matter might set as its purpose for living. Obviously, the number of possible purposes and permutations of purposes makes the assembly of an exhaustive list impossible. But by aiming to reduce the number of purposes to the minimum required to generate all purposes, perhaps a set of fundamental "meanings of existence" might be reached.

Each "purpose" that one could dedicate their life to is an action. "To make lots of money" might be one, but what is money for, and why not something like seashells or gold? "To support my family" is a common one I think many in family-oriented religions would agree is their purpose for existing, but is there a more general way of describing what a family is that could also describe other human relationship configurations like mentorship or slavery or even money? If I could define such a system then I would have a minimum moral code that I could reasonably expect another rational observer of reality to have adopted already. Like weathermen who have never met one another yet are still able to share stories with useful information based on the fact that they have been observing the same givens, perhaps I can use this minimum set of purposes as a communication tool for my friends and the strangers we meet in the future. Ideas such as the four freedoms described above require a definition for "free/libre" which I may be able to describe within this system.

The end result of these ruminations is a set of four actions which I've converted into pairs of words in the style of English legal doublets (ex: cease and desist, free and clear, null and void, terms and conditions, aid and abet). I am calling three of them "self-evident purposes" since they are living processes that should be based only upon what other sentient beings can be expected to share in common with us. A fourth action ("Destroy and Forget") is an implied aspect of the three self-evident purposes but isn't a purpose by itself. However, it is a self-evident action present in every other process which deserves identification. After that, I proceed to define actions that can be described as permutations of the self-evident purpose-actions but which I do not necessarily believe are shared among even humans on earth as purposes, much less all sentient entities. Some of the derived actions may be life purposes.

Self-evident system of purpose

Self-evident purposes

Source: introspection

These actions are answers to the question: "What common purposes for existence are we likely to share with any stranger we encounter?"

The question, in other words: "What are members of the smallest set of actions that you will likely share with any alien/stranger?"

  1. 👁 Observe and Discover - To see reality as it really is. To take in new givens.

  2. 📚 Integrate and Correlate - To create stories explaining what you see. To record history. To correlate givens with other givens.

  3. 🖧 Reticulate and Network - To trade stories with entities different from yourself. To form relationship networks with others.

Self-evident actions

  • 💣 Destroy and Forget - To nullify an action.

Note on Destroy and Forget

This is a weird one since I have a hard time imagining a sentient being whose nature is Destruction but it is a fundamental action required to derive many other actions. It feels like the concept of "zero" in math. Multiplying by 0 is not useful when performed in isolation but is required when part of a process of selectively ignoring aspects of a signal for the purpose of amplifying useful signals. Part of the act of creation is the removal of the construction waste. Observe and Discover involves collecting data; part of integrating and correlating information is the ignoring of many observed data points in favor of highlighting certain data points that promote/accelerate all activities. Likewise, if only creation and strengthening of relationships was permitted with no ability to dissolve/unlink relationships then new relationships would be inhibited if material resources must be dedicated to maintain each relationship; Destruction of old relationships (chemical bonds or social ties) is necessary.

However, Destroy and Forget isn't really a useful "purpose to live" unless used in combination with the other self-evident actions. Likewise, to Observe and Discover isn't really useful if data is not correlated or shared. Sharing of data can happen unwillingly (forced reticulation) as can destruction (unwilling destruction).

Derived Actions

Below are other actions defined in terms of the above self-evident common actions:

  • Touch - To form relationships (Reticulation) mediated by physical forces (electron-electron repulsion, photon emission/absorption).

  • Create - To simultaneously Integrate and Touch.

  • Defend - To secure integrity of other actions via Touch.

  • Replicate - To Create copies of Observers in order to increase the number of points of view from which reality can be Observed or to Defend against Destruction.

  • Expand - To increase spatial scope of Observation or Reticulation activities.

  • Control - To selectively Destroy actions (Observation, Integration, Reticulation, and all permutations of such).

  • Conquer - To Control in order to Expand.

  • Separate - To selectively Destroy undesired relationships between certain entities.

  • Liberate - To Destroy Control.

  • (etc.)

Comments

I am not completely committed to there being only three self-evident purposes. Perhaps another can be added but I think there should be a small number for this system to be useful.

One idea that amuses me is for a society that divides itself into different factions, each with certain principle actions guiding faction members.

A person of an Observe and Discover faction might primarily be involved with activities that expand the scope of what a civilization knows. This might include radio astronomy telescopes and cosmology. It might also have subfactions dedicated towards more introspective observations such as internal surveillance of a civilization's activities. Researchers would be primarily focused on developing tools that allow them to see farther.

A person of an Integrate and Correlate faction would primarily be involved in fact-checking and using observations from O&D to update abstract models of the civilization's internal state, the civilization's impact on external reality, and creating predictions of possible future problems based on history.

A person of a Reticulate and Network faction would primarily be involved in forming communication channels between nodes within and without of the civilization in order to collect useful information from internal and foreign I&C factions.

Many other actions such as Defend or Separate or Conquer or Liberate could become focal points of many factions. However, any civilization, local or foreign, would be guaranteed to have the factions of O&D, I&C, and R&N. As effects of the ongoing heat death of the cosmos continue, these self-evident purposes will persist into deep time. Members of civilizations that nullify these common actions will handicap themselves.

  • Nullification of Observe and Discover causes blindness.

  • Nullification of Integrate and Correlate causes madness.

  • Nullification of Reticulate and Network causes ignorance.

Application to the Four Freedoms

How does this system of belief help me to apply the Four Freedoms in a more general context that includes everything from factories to molecular assemblers? I'm not sure. One thing I do know is that my butt is tired from all this typing while sitting.

I made this tangent into self-evident purposes for existence with the hope that I could identify a way to explain to someone the value of applying the Four Freedoms even to potentially lethal machinery. The dream I have been trying to find a way towards is one in which every person has the option to manufacture their own life support equipment in an uninhabitable environment: the vacuum of space, a future human space colony, the surface of planet Earth itself if exponential population growth continues, or even a virtual environment where all humans are forced to live as non-biological emulated brains in a completely artificial substrate in machine cities. I went on this tangent because I wanted to explore a belief system that might survive even among people living under combinations of such inhospitable environments. As of the year 2019, most humans would die if they switched to hunter-gatherer mode instead of relying upon machines to feed, clothe, and shelter them.

So what do these self-evident purposes buy me regarding applying the Four Freedoms to all hardware?

I think relevant questions for people concerned about losing their own freedoms caused by application of the Four Freedoms to all machines are:

  • What happens when anyone can print a fission bomb from raw materials in hours?

  • What happens when anyone can print a firearm on a whim in minutes?

The O&D and I&C factions must step up to the plate and keep up with manufacturing technology advances in order to search for signatures of uranium refining and other signs that advanced weaponry is being fabricated. R&N factions must quickly share information to track patterns of material movement. Human fragility must be buttressed by brain-emulation backups and/or clone bodies that can withstand disasters. Law enforcement I&C factions must augment themselves to track and prosecute crimes as fast as new techniques are developed.

The pattern I am seeing here as I talk myself through the problem of Four Freedom lethal machines is speed and complexity. The faster a threat can be developed, the faster society-approved local law enforcement must be able to act to neutralize such threats. If the risk of physical harm is too high, then physical redundancies must be planned and implemented to minimize damage. The increased capabilities offered by Four Freedom manufacturing hardware must outweigh the new threats for people to defend their access to blueprints despite the increase in personal risk. Automobiles are lethal machines that are a significant cause of deaths in the United States, but users defend their right to use them because of the benefits of personal mobility they offer. Licenses and law enforcement mitigate the risk of misuse but do not eliminate it.

Firearms cause massacres in the United States regularly, yet there is a cultural inertia among lawmakers, and the people who vote for them, that causes them to refuse to ban firearms. There is no perceivable economic benefit to firearm ownership. The perceived benefit seems to me to be primarily imaginary: "Owning this gun gives me the power to defend myself with lethal force and that makes me feel safe."

Given that firearm ownership is something that remains fiercely defended in the United States, I imagine that at least one nation-state will permit Four Freedom machines to exist and become part of the local culture. The fact that no significant population actively promotes Four Freedom philosophy for manufacturing is probably because the population of "programmers" (ex: engineers on industry consensus standard technical committees) is low and they do not perceive any urgency for free/libre machinery.

Possible fertile ground for the idea of Four Freedom machines lies in discussions of "Right to Repair" and disgust about planned obsolescence. For example, several news stories discussed how farmers were pirating software required to operate John Deere agricultural equipment, which apparently uses an expensive license model completely at odds with how farm equipment has traditionally been maintained.

One argument that is coalescing in my mind as I write these thoughts is that if Four Freedoms aren't applied to industrial consensus standards and fabrication blueprints, then a larger and larger fraction of living humans will be priced out of the ability to participate in society. More and more of their resources will be required to buy licenses for services required to maintain employment and the certifications employers require.

Additionally, even "middle class" citizens of an industrialized nation will be vulnerable to the actions of a relatively small number of licensors of life support technologies where a Four Freedom machine equivalent is not available. As manufacturing processes become more centralized for the sake of efficiency, repair of "turn key" products such as automobile packages, municipal water treatment equipment, road maintenance equipment, material transport equipment (piping), and other infrastructure products will become more and more subject to license and service agreements. Already I know of variable speed drive water pumps that come equipped with Bluetooth transceivers and can only be configured via an app and a Bluetooth device that can cost a significant amount of money. The pump manufacturer could charge money for the app, and the closed-source nature of the app makes the pump vulnerable to cyberattack.

The problems that the Free Software Foundation argued its Four Freedoms were protecting people from in the realm of software will become more problematic in the realm of industrial equipment as the equipment becomes more "smart". This will especially apply to a future where local 3D printing of industrial equipment becomes commonplace. If the digital programs fed to "matter compilers" (MCs) do not come from design "manufacturers" with the source code (today, the equivalent of P&IDs, instrumentation diagrams, mechanical drawings, control philosophy, etc.), then the MC owners are subject to the designer's will, similar to how Microsoft strong-armed its users into using Internet Explorer instead of competing web browsers. Hardware manufacturing can be more free, but there has to be an active force for freedom. Otherwise, the path of least resistance is centralized control by a small number of licensors.

Conclusion

I'll end this rather lengthy rambling blog post with a short summary. The Four Freedoms applied to the realm of industrial machinery will force civilization to augment its speed and detection capabilities for lethal tool fabrication. Lethal tool fabrication increases the risk of losing your individual freedom if such a tool is used against you. In a tangent I discussed a recurring thought I had regarding a universal set of purposes for living that may help any person build communication bridges with foreigners unlike yourself who may think Four Freedom machines are a pipe dream. I conclude by discussing the recent appearance of the Right to Repair idea and how it may be fertile ground for discussing Four Freedom industrial machinery. I close by discussing the value of Four Freedom machinery and the potential negative consequence of failing to use it: loss of freedom.

baltakatei 🅭🅯🄎 4.0

Creative Commons License

Posted 2021-02-17T16:18:01+0000


Explanation of the Hancke-Kuhn Distance-Bounding Protocol

Created by Steven Baltakatei Sandoval on 2019-08-15T06:46:35Z under a CC BY-SA 4.0 license and last updated on 2019-08-15T20:16:52Z.

Introduction

It is possible to bound how far apart two computers are using the speed of light and ping time. The physical distance is, at most, half the ping time multiplied by the speed of light. This document explains the Hancke-Kuhn protocol, which can calculate this upper bound for the distance between a Verifier V and a Prover P through the sending and receiving of certain bit sequences. This calculation is useful for location-based authentication technology (ex: RFID, contactless payment) defending against man-in-the-middle attacks.

I have written this explanation in order to help solidify my own understanding of the protocol before I write my own implementation of it at my GitLab repository. It is an explanation in my own words. Any errors or misrepresentations are entirely my own.

A more detailed summary with references to academic papers was published by Cristina Onete and may be found on her website's publication page.

This document makes use of MathML for displaying equations. Firefox 60.0 esr should render MathML symbols correctly.

Background

Explanation, part 1: setup

In 2005, Gerhard P. Hancke and Markus G. Kuhn proposed a distance-bounding protocol as a defense against man-in-the-middle attacks for people who use RFID tokens in order to automatically authenticate themselves for a location-based service such as the opening of a door or purchase at a specific point-of-sale device.

An example of a man-in-the-middle attack against such a building access-control system could be two attackers maliciously forwarding radio traffic between an RFID token and a building RFID reader without the token owner's knowledge, even when the token is located a great distance from the reader. The idea for strengthening an RFID token against such an attack is to equip the building RFID reader with some means of proving that the token is physically located within a specific distance.

The goal of this project is to apply this concept to the ping time between two computers in order to prove how close the computers are to each other. A distance-bounding protocol proof uses the distance, speed, and time equation solved for distance.

distance = speed ⋅ time

The speed is set to the speed of light since one conclusion from the theory of special relativity is that no information signal or material can travel faster than light in a vacuum. The time is set to half the ping time (round-trip time divided by 2).

distance = speed of light ⋅ (ping time / 2)

In the protocol, a verifier, V, and a prover, P, create a pair of one-time-use pseudorandom bit sequences, R0 and R1, each containing n elements. Each element Ri0 or Ri1 is a bit whose value is either 0 or 1. These sequences can be represented like so:

R0 = R10 R20 R30 R40 R50 ⋯ Rn0

R1 = R11 R21 R31 R41 R51 ⋯ Rn1

Regarding these bit sequences, V rapidly asks P a stream of n questions. A question may take only one of the two forms:

  1. What is the ith bit of R0, Ri0?

  2. What is the ith bit of R1, Ri1?

The stream of questions starts with i=1 and ends with i=n.

In order to decide which question V asks P, V generates a private random bit sequence, C, which consists of n elements. The rule V follows is that if Ci=0 then V requests that P supply Ri0. If Ci=1 then V requests that P supply Ri1. In other words, at each round, i, V randomly chooses which of the two questions to ask P.

After sending a question to P, V records the exact time and increments i by 1.

Because cause must precede effect, P cannot provide a correct answer to V until after P receives the question. Since the speed of light is the maximum rate at which any information can travel through space, there is a minimum ping time (or "time of flight") for any given distance between V and P which can be used by the protocol to prove an upper bound to the distance between V and P.

Immediately after receiving a question, P sends to V the value RiCi which is the requested bit from either R0 or R1. The set of these responses can be written as RCi.

Upon receiving each response, V records the exact time in order to calculate that particular question-response round-trip time (or "ping time").
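The verifier's side of this rapid bit exchange can be sketched in a few lines. This is a minimal illustrative sketch, not the Hancke-Kuhn reference implementation: `ask_prover` is a hypothetical stand-in for the channel to P (a real deployment would use a network transport with far finer timing resolution than `time.monotonic` offers):

```python
import secrets
import time

def run_rounds(r0, r1, ask_prover):
    """Run the rapid-bit-exchange phase from the verifier V's side.

    r0, r1     -- the agreed pseudorandom bit sequences R0 and R1
    ask_prover -- stand-in for the channel to P: takes (i, c_i) and
                  returns P's response bit (hypothetical placeholder)
    """
    n = len(r0)
    # V's private challenge bits C, generated with a secure source
    c = [secrets.randbelow(2) for _ in range(n)]
    timings, correct = [], 0
    for i in range(n):
        t_sent = time.monotonic()                  # record exact send time
        response = ask_prover(i, c[i])             # ask for R0_i or R1_i
        timings.append(time.monotonic() - t_sent)  # round-trip time
        expected = r1[i] if c[i] == 1 else r0[i]
        correct += (response == expected)
    return c, timings, correct
```

An honest prover that answers from the agreed sequences will score `correct == n`; the timings list then bounds the distance for each round.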

Example 1: how the bit sequences are used

To help explain how this process works, below is an example that sets n=16 and walks you through how to calculate the response bit sequence, RCi.

  1. Verifier V and Prover P assemble and agree upon pseudorandom bit sequences R0 and R1

    • Ri0 = 0 1 0 0 1 0 1 1 1 0 1 1 0 0 1 0

    • Ri1 = 1 0 0 0 1 1 1 1 0 1 1 0 1 0 0 1

  2. Verifier V secretly produces a pseudorandom bit sequence Ci:

    • Ci = 0 0 0 0 1 0 1 1 0 0 0 1 1 1 0 1

  3. V sends each bit of Ci, one at a time, starting from i=1 until i=n. V notes the exact time when it sent each value of Ci.

  4. P receives and uses each bit of Ci to determine whether to immediately send the bit Ri0 or Ri1 to V in response. If all bits are received and sent without error, P will eventually have sent the set RCi.

  5. V receives and records the arrival time for each response bit, RiCi. V calculates the round-trip time for each round. The resulting values of RiCi are:

    • RiCi = 0 1 0 0 1 0 1 1 1 0 1 0 1 0 1 1
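The calculation of RCi in step 4 can be reproduced with a short script; the bit values below are transcribed from the example lists above:

```python
# Example 1's agreed bit sequences and V's private challenge bits.
R0 = [0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0]
R1 = [1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1]
C  = [0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1]

# For each round i, P answers with R1_i when C_i = 1, else with R0_i.
RC = [R1[i] if C[i] == 1 else R0[i] for i in range(len(C))]

print(RC)  # → [0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1]
```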

Below is a table illustrating how the example values for these bit sequences correlate. I have bolded the values of Ri0 and Ri1 which were sent by P in response to the values of Ci sent by V.

i    | 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16
Ri0  |  0  1  0  0  1  0  1  1  1  0  1  1  0  0  1  0
Ri1  |  1  0  0  0  1  1  1  1  0  1  1  0  1  0  0  1
Ci   |  0  0  0  0  1  0  1  1  0  0  0  1  1  1  0  1
RiCi |  0  1  0  0  1  0  1  1  1  0  1  0  1  0  1  1

Explanation, part 2: False-Acceptance Rate

At each step V records the round trip time required between the sending of the question and the receiving of the correct answer from P. Given enough correct answers from P, V can then use the average value of the round trip time, tm, of correct responses in order to calculate with some statistical certainty that P is physically located within a distance, d. The distance, d can be calculated using the following two equations (pg 68, Hancke 2005).

d = c ⋅ (tm − td) / 2

tm = 2 ⋅ tp + td

In the language of the Hancke paper, variables in the two equations are defined as:

c is the propagation speed, tp is the one way propagation time, tm is the measured total round-trip time, and td is the processing delay of the remote device.

A conservative practice defines td=0 for the processing delay variable. It is conservative because td is a function of the capabilities of the hardware P uses to process requests from V. If both P and V trust each other to use specific hardware with consistent and accurate estimates for response times, then td may be specified. However, the Hancke-Kuhn protocol does not provide a means for proving or incentivizing P to accurately measure and report its own hardware capability.

The highest possible propagation speed, c, according to the laws of physics is the speed of light in a vacuum. According to section 2.1.1.1 of the 8th edition of the International System of Units, a document published by the International Bureau of Weights and Measures, this speed is 299 792 458 m/s.
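Putting the two equations together, a short sketch computes the distance bound from a measured round-trip time. The sample 1 ms round-trip time is an arbitrary value chosen for illustration:

```python
# Upper bound on distance, d = c * (tm - td) / 2, using the conservative
# choice td = 0 described above.
C_VACUUM = 299_792_458  # speed of light in a vacuum, m/s

def distance_bound(tm_seconds, td_seconds=0.0):
    """Return the maximum possible distance (m) to the prover."""
    return C_VACUUM * (tm_seconds - td_seconds) / 2

# A measured round-trip time of 1 ms bounds the prover to within ~150 km.
print(distance_bound(1e-3))  # ≈ 149896.229 m
```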

The statistical certainty that the round-trip time between P and V is less than tm is 1−pFA, where pFA is the "false-accept probability". The value of pFA must be a statistical estimate constrained by the possibility that the prover, P, maliciously sends its best guesses before receiving the questions from V. If P dishonestly wishes to convince V that the distance is lower than it really is, then P can achieve a 3/4 probability of guessing correctly for a given round without having yet received that round's value of Ci. This is because, on average, half of the rounds do not require guessing at all since half the time Ri0=Ri1. The other half of the time, P's best strategy, assuming V generated C securely, is to guess 1 or 0 at random.

The false acceptance probability, or "False-Acceptance Rate", pFA, of V accepting the distance-bounding protocol proof of P can be calculated using the following equation found on the sixth page of the Hancke paper. This equation calculates pFA assuming V judges that receiving k correct responses out of n total rounds is acceptable.

pFA = ∑_{i=k}^{n} C(n, i) ⋅ (3/4)^i ⋅ (1/4)^(n−i)

The equation states that pFA is equal to the sum of each individual probability where P guessed correctly k or more times (for example: one outcome exists where P guesses perfectly, some outcomes where P makes only one mistake, some outcomes where P makes two mistakes, etc.). The total number of terms in the sum is n−k+1.

In other words, the final term (the nth term) of the sum is the probability that P guesses every single response correctly (one very rare possibility). The penultimate term (the (n−1)th term) is the probability that P guesses correctly every single time except for exactly one mistake somewhere (a slightly less rare possibility). The (n−2)th term is the probability that P guesses all responses correctly but with two errors somewhere. The (n−3)th term is the probability that P guesses all responses correctly but with three errors somewhere, and so forth. The first term of the sum is the probability that P guesses correctly exactly k times out of n responses and therefore provided incorrect responses exactly n−k times. Each term of the sum is the binomial probability function (a.k.a. "binomial distribution formula" or "probability mass function") which should be part of the syllabus for a typical Statistics course.

Since no factor of the equation for pFA can be made exactly equal to zero it is impossible for Verifier V to completely eliminate the possibility that P could forge this distance-bounding proof. The best V can do to strengthen confidence in the proof's validity is to set the parameters k and n to values that produce an acceptably low value for pFA, the probability of falsely accepting a maliciously constructed proof by Prover P.

Example 2: Calculating False-Acceptance Rate

Below is a copy of the previous example table but with values of Ri0 and Ri1 bolded when Ri0=Ri1. From inspection it should be clear that P does not have to guess roughly half of the rounds since a quarter of the time Ri0=Ri1=0 and a quarter of the time Ri0=Ri1=1.

i    | 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16
Ri0  |  0  1  0  0  1  0  1  1  1  0  1  1  0  0  1  0
Ri1  |  1  0  0  0  1  1  1  1  0  1  1  0  1  0  0  1
Ci   |  0  0  0  0  1  0  1  1  0  0  0  1  1  1  0  1
RiCi |  0  1  0  0  1  0  1  1  1  0  1  0  1  0  1  1

Side note: I believe the inefficiency of allowing the protocol to have instances where Ri0=Ri1 is due to Hancke designing the protocol to be simple in order to accommodate implementation in RFID tags with limited computational ability and over noisy communication channels. The scope of my Proof of Ping project doesn't include attempting to improve the protocol, but simply to implement it as described in the Hancke paper.

In order to illustrate how the False-Acceptance Rate, pFA, is calculated, let us say that V was programmed to accept 14 correct responses out of 16 (k=14, n=16). For this case the calculation of pFA is detailed in this spreadsheet file (in ODS format) as well as directly below.

The binomial coefficient factor in the pFA equation can be expanded out, with ! signifying the factorial operation (for example, 5!=5⋅4⋅3⋅2⋅1=120).

pFA = ∑_{i=k}^{n} [n! / (i! (n−i)!)] ⋅ (3/4)^i ⋅ (1/4)^(n−i)

The sum consists of a total of n−k+1 = 16−14+1 = 3 terms.

The last term (i=n=16) is:

[16! / (16! (16−16)!)] ⋅ (3/4)^16 ⋅ (1/4)^(16−16) = 1.00226 ⋅ 10^−2

The penultimate term (i=15) is:

[16! / (15! (16−15)!)] ⋅ (3/4)^15 ⋅ (1/4)^(16−15) = 5.34538 ⋅ 10^−2

The first term (i=k=14) is:

[16! / (14! (16−14)!)] ⋅ (3/4)^14 ⋅ (1/4)^(16−14) = 1.33635 ⋅ 10^−1

The sum of these three terms is:

1.00226 ⋅ 10^−2 + 5.34538 ⋅ 10^−2 + 1.33635 ⋅ 10^−1 = 1.97111 ⋅ 10^−1

Therefore, the False-Acceptance Rate, pFA can be written as:

pFA = ∑_{i=14}^{16} [16! / (i! (16−i)!)] ⋅ (3/4)^i ⋅ (1/4)^(16−i) = 1.97111 ⋅ 10^−1 = 19.7111 %

In other words, if V decides to accept only k=14 or more correct bits from P out of a possible n=16 bits in the bit sequences they exchange, then there is about a 19.7% chance that P could fool V into accepting that the distance between them is lower than it physically is. P could do this by completely disregarding V's questions, C, and sending only best guesses for bit sequence RCi given the structure of R0 and R1.
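The false-acceptance calculation above can be checked with a short script, using Python's math.comb for the binomial coefficient:

```python
from math import comb

def p_fa(k, n):
    """False-accept probability: the chance a prover guessing each
    round with probability 3/4 gets at least k of n rounds right."""
    return sum(comb(n, i) * (3/4)**i * (1/4)**(n - i)
               for i in range(k, n + 1))

print(p_fa(14, 16))  # ≈ 0.197111, i.e. 19.7111 %
```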


🅭🅯🄯4.0
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Posted 2021-02-17T16:18:01+0000


Automatic Firefox ssh Proxy Script, Lesson Learned

Created by Steven Baltakatei Sandoval on 2019-08-16T05:06:42Z under a CC BY-SA 4.0 license and last updated on 2019-08-16T08:22:33Z.

Abstract

I mistakenly thought I encountered a bug in ssh while testing bash scripts that automatically fire up an instance of Firefox that sends its traffic through a remote proxy. I thought this imaginary bug had to do with how ssh performs variable substitution in its ControlPath parameter. However, the real bug was in my script.

Context

To be more verbose, I thought I had encountered a bug in OpenSSH while constructing a bash script to automatically launch Firefox under a SOCKS5 proxy so that I could make edits to Wikipedia from my home's internet connection despite using a VPN while travelling abroad. Wikipedia blacklists ranges of IP addresses, which happen to include those of my VPN provider. According to $ ssh -V, my version of ssh was: OpenSSH_7.9p1 Debian-10, OpenSSL 1.1.1c 28 May 2019. I found a useful bash script in this Reddit post by user /u/anthropoid; it is a script that would be useful to many people:

#!/bin/bash

# STEP 1: Start an SSH master connection (-M)
#         in the background (-f)
#         without running a remote command (-N)
ssh -M -o ControlPath=/tmp/socks.%n.%p.%r -f -N -D 8123 -C evily2k@102.2.115.73

# STEP 2: Launch Firefox in the foreground
firefox -P -no-remote

# STEP 3: When user is done with Firefox, send an "exit" command to the master connection
ssh -o ControlPath=/tmp/socks.%n.%p.%r -O exit evily2k@102.2.115.73

My Mistake

However, I needed to modify two lines in this script: I needed to supply a -p <port number> flag to both the "STEP 1" and "STEP 3" commands in order to accommodate my home router's configuration. I had configured my home router's port forwarding settings to send inbound traffic received on a specific port (ex: port "12345") to a specific IP address within my home's local network: an address I had told my router to always assign to my primary home computer. I had to supply this -p 12345 flag whenever I needed to remotely log into my home computer via GnuPG and SSH. For example, I might use the following ssh command to log in.

$ ssh -p 12345 evily2k@102.2.115.73

So, I modified the two lines and added the -p flag. Below is the bash script (with comments removed for clarity).

#!/bin/bash

# STEP 1
ssh -p 12345 -M -o ControlPath=/tmp/socks.%n.%p.%r -f -N -D 8123 -C evily2k@102.2.115.73:12345

# STEP 2
firefox -P -no-remote

# STEP 3
ssh -p 12345 -o ControlPath=/tmp/socks.%n.%p.%r -O exit evily2k@102.2.115.73

When I ran this script, I received this error:

Control socket connect(/tmp/socks.102.2.115.73:12345.12345.evily2k): No such file or directory

This is because I made a mistake in modifying STEP 1. This line:

ssh -p 12345 -M -o ControlPath=/tmp/socks.%n.%p.%r -f -N -D 8123 -C evily2k@102.2.115.73:12345

—should have read:

ssh -p 12345 -M -o ControlPath=/tmp/socks.%n.%p.%r -f -N -D 8123 -C evily2k@102.2.115.73
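In hindsight, the stray :12345 broke things because, without a ssh:// URI scheme, OpenSSH treats everything after the @ as the host name, so the master connection in STEP 1 either fails to reach that "host" or creates a control socket path (via %n) that STEP 3's path does not match. This can be seen without making any connection by using ssh -G, which prints the configuration ssh would use. The ssh:// URI form shown below is accepted by OpenSSH 7.6 and later; the address and port here are just this post's placeholder values:

```shell
# Without a URI scheme, ssh takes the whole string after '@' as the host name;
# `ssh -G` prints the resolved configuration without connecting, so we can
# inspect how the destination was parsed:
ssh -G evily2k@102.2.115.73:12345 | grep '^hostname '
# Here the ":12345" ends up inside the host name, which is what %n expands to.

# OpenSSH 7.6+ also accepts host:port as a ssh:// URI, which parses the
# port out properly:
ssh -G ssh://evily2k@102.2.115.73:12345 | grep -E '^(hostname|port) '
```

With the URI form, ssh -G reports hostname 102.2.115.73 and port 12345 separately, which is what the -p 12345 flag accomplishes in the corrected script.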

The Correction

Having discovered my mistake, I now had a working bash script allowing me to ssh home through my port-forwarding router:

#!/bin/bash

# STEP 1: Start an SSH master connection (-M)
#         in the background (-f)
#         without running a remote command (-N)
ssh -p 12345 -M -o ControlPath=/tmp/socks.%n.%p.%r -f -N -D 8123 -C evily2k@102.2.115.73

# STEP 2: Launch Firefox in the foreground
firefox -P -no-remote

# STEP 3: When user is done with Firefox, send an "exit" command to the master connection
ssh -p 12345 -o ControlPath=/tmp/socks.%n.%p.%r -O exit evily2k@102.2.115.73

This script works. Feel free to skip the rest of this document if all you need is a working script. To customize it for your purposes, you might do the following:

  • Replace 102.2.115.73 with your remote home computer's public IP address.
  • Replace evily2k with the user name on the remote home computer running ssh (note: I am assuming you already know how to use ssh to establish a remote connection; I refer to this guide to remind me how to set up GnuPG to work with ssh on a new Debian installation).
  • Replace firefox with /home/evily2k/Downloads/firefox/firefox or similar (depending on where you extracted the Firefox tar.bz2 file and the location of the firefox executable file).
  • Create and configure a Firefox profile that uses a SOCKS5 proxy (in "Connection Settings") at host 127.0.0.1 and port 8123, and delete all other profiles.
  • Adjust -p 12345 according to your remote home computer's router port-forwarding settings (ex: route external port 12345 traffic to your home computer's static IP address of 192.168.0.4 at port 22).
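One way to reduce the odds of a copy-paste slip like mine is to gather the values in the list above into one place, so the user, host, and forwarded port each appear exactly once. Below is only a sketch of that idea using this post's placeholder values, not a tested general-purpose tool:

```shell
#!/bin/bash
# Sketch: wrap the three steps in one function so each customizable
# value is passed in once as an argument.
proxy_firefox() {
    local user=$1 host=$2 port=$3
    local socket=/tmp/socks.%n.%p.%r

    # STEP 1: background master connection with a SOCKS5 proxy on port 8123
    ssh -p "$port" -M -o ControlPath="$socket" -f -N -D 8123 -C \
        "$user@$host" || return 1

    # STEP 2: Firefox in the foreground (profile configured to use the proxy)
    firefox -P -no-remote

    # STEP 3: tear down the master connection when Firefox exits
    ssh -p "$port" -o ControlPath="$socket" -O exit "$user@$host"
}
```

You would then call it as, for example, proxy_firefox evily2k 102.2.115.73 12345.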

How I found the mistake

If you really want more details on this particular story, continue reading. I started my troubleshooting process by attempting to understand the error message containing the phrase "No such file or directory". At first, it appeared to me that there was a problem with how ssh performed variable substitution in the ControlPath parameter I supplied it. I had supplied ControlPath=/tmp/socks.%n.%p.%r in both the STEP 1 and STEP 3 ssh commands, and yet the error message indicated that ssh could not find the temporary socket file at /tmp/socks.102.2.115.73:12345.12345.evily2k that it required in order to successfully perform the exit command in STEP 3. I tried changing the percent variables in ControlPath=/tmp/socks.%n.%p.%r to things like ControlPath=/tmp/socks.%n.%p.%r.%a, ControlPath=/tmp/socks.%n.%p.%r.%b, ControlPath=/tmp/socks.%n.%p.%r.%c, and so forth to see how the error messages changed. I learned that %C produces what looks like a hash, while most other letters produce blank substitutions. There are no multi-letter substitutions (ex: %aa only substitutes the %a part, leaving the second a alone).
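A shortcut I only learned about later: instead of probing the percent variables by trial and error, ssh -G prints the configuration ssh would actually use for a destination without connecting, including the ControlPath, which makes a socket-path mismatch between STEP 1 and STEP 3 easy to spot. Depending on your OpenSSH version the %n/%p/%r tokens may be shown already expanded or still literal; the values below are this post's placeholders:

```shell
# Print the configuration ssh would use for this destination, without
# connecting. Newer OpenSSH releases expand the %-tokens in this output;
# older ones print them literally. Either way, running this for both the
# STEP 1 and STEP 3 destinations shows whether the socket paths agree.
ssh -G -p 12345 -o ControlPath=/tmp/socks.%n.%p.%r evily2k@102.2.115.73 \
    | grep -i '^controlpath '
```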

In any case, I rewrote the script from scratch with a different ControlPath and coincidentally ended up writing it without the superfluous :12345. I then mistakenly concluded that it was my ControlPath change that had allowed the script to work and, therefore, that there was a problem with OpenSSH. I now had a script that opened an ssh proxy, started up Firefox, and then closed the ssh connection when I closed Firefox. I stopped my troubleshooting and left somewhat satisfied that I had:

  1. a working script

  2. a potential bug in OpenSSH

I didn't discover that the "bug" was caused entirely by my own mistake until after I had spent a day on other activities unrelated to this particular problem. After that time had passed, I decided to write up this "bug" in a blog post, since I thought other people might encounter the same problem and I wanted to help them out. To make sure the information I provided was useful, I tested a few assumptions, such as: "The script as I have written it in this blog post draft actually reproduces the exact problem I am describing." I had clear memories of seeing the bug, but when I made the change (adding -p 12345) that I thought would cause the bug to appear, the script worked flawlessly. The script I tested for this blog post was copy-pasted from the Reddit post, so if my assumption was correct, the script should have reproduced the bug. It did not, because in applying my customizations to the original script I applied only the specific change I needed, adding -p 12345; I did not add the :12345. It was at that point that I realized the root cause of the failure was myself. I compared the blog draft script with the script I had actually run and found the extra :12345 where it should not have been. The root cause of my initial failure to get the script working was my failure to notice the extra :12345 at the end of the STEP 1 command.

I had mistakenly included :12345 because I am a novice programmer and my method of learning unfortunately involves playing with commands to see what works and what causes problems. Using ssh in this new context (as a SOCKS5 proxy) meant I was adding and removing parameters to see what broke. The :12345 was one of those experimental parameters, and I had forgotten about it. My strategy is usually to repeat such command-formatting experiments until I find a command that works incrementally better than the previous failure. I use search engines to look up the error messages I receive in order to find stories of people in situations similar to mine. Writing up my thoughts in this blog post is another way in which I perform these experiments: when composing this document I am forced to create my own story, which requires discarding irrelevant details, identifying relevant ones, and then integrating the most important details of my troubleshooting experience into a story whose assumptions I can test. In this case my experimental learning strategy succeeded in producing a useful script, even if it did so in a meandering way.

Summary

I made a mistake while customizing a bash script that automatically starts up ssh as a SOCKS5 proxy and launches Firefox configured to use that proxy. I initially thought the problem might be caused by a bug in OpenSSH itself, but I eventually found that it was due to my inexperience with formatting ssh commands. I made this blog post to share a working bash script with the customization for my particular context (I need a -p 12345 flag because I have to ssh through a port-forwarding router) so that others may benefit, as well as to describe one wild goose chase I ran before my understanding of ssh strengthened enough to see my formatting error.

Note: I am not user evily2k. I preserved this user name in script examples to match the Reddit post from which my customized script originates.


🅭🅯🄯4.0
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Posted 2021-02-17T16:18:01+0000
