Bonneville Dam photographs

Created by Steven Baltakatei Sandoval on 2021-06-14T05:32Z under a CC BY-SA 4.0 license and last updated on 2021-06-14T05:36Z.

A few days ago I took some photographs while visiting the Bonneville Lock and Dam Visitor Center. I created a web page for them in my TeXmacs article section here.

I enjoyed playing with TeXmacs and annotating the photographs.

Posted 2021-06-14T05:37:32+0000

Kyoto Animation arson attack Wikipedia article update

Created by Steven Baltakatei Sandoval on 2021-06-07T07:03Z under a CC BY-SA 4.0 license and last updated on 2021-06-07T08:21Z.

Summary

As I have several times before, I tidied up the references in the wikicode of the Kyoto Animation arson attack article on Wikipedia. I blogged about this earlier.

If I create an article from scratch or am present when an article is being initially formed, I try to store <ref> citation (a.k.a. "reference") information as a list at the end of the article's wikicode instead of as in-line references spread throughout the body text. I find the task of verifying references easier if their fields (e.g. title=, url=, access-date=, publisher=) are organized in the wikicode with indentation; the indentation makes the information more human-readable. I hope my efforts to organize references this way help future editors verify information more quickly.

The Wikipedia template I use to store references as a list is called {{Reflist}}; the template is described here.
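As a hypothetical illustration (the reference name and all field values here are invented, not taken from the actual article), a list-defined reference with indented fields looks like this in wikicode:

```
A claim in the body text.<ref name="example_20190718_claim" />

== References ==
{{Reflist|refs=
<ref name="example_20190718_claim">{{cite web
  | title       = Example article title
  | url         = https://example.com/article
  | access-date = 2019-07-18
  | publisher   = Example Publisher
}}</ref>
}}
```

The body text keeps only the short <ref name=... /> tag, while the full citation lives in the {{Reflist}} block at the end.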

Background

I first learned about the arson attack back in 2019 when I was living in Mountain View, California. I have enjoyed Kyoto Animation's work (see this fan-made visual summary); the Melancholy of Haruhi Suzumiya series in particular resonated with me when I was grappling with religion and the importance of grounding yourself with rational people. I felt very strongly that the events surrounding the violent murder of Kyoto Animation (a.k.a. "KyoAni") artists should be properly referenced on Wikipedia.

I remember tweeting to encourage people who had photos of the affected building to upload them; at the time there were no CC BY-SA photographs of the building. Someone named @Thibaut managed to find a photograph of the studio that a Mike Hattsu had uploaded to Twitter years before the incident. This photograph was later uploaded to Wikimedia Commons here and has been used in the Wikipedia article about the incident. I am glad whenever people upload useful information to Wikimedia Commons, since the default Creative Commons license helps editors like me collaborate to build useful articles on Wikipedia that can be shared freely.

Editing Notes

In the week after the incident I remember having to make quick, short edits to the article; this was because many editors were making frequent changes to the wikicode that conflicted with my reflist edits, which spanned large portions of the code. In contrast, today I was able to take my time and perform all the changes in a single edit.

I was also amused to see that some people expanded on my use of the ISO 8601 (wiki) YYYYMMDD date format in some reflist entries; for example, one reference is named nhk_20190718T2124+09_deathcount. This label communicates the following about the cited resource:

  1. The resource is an NHK publication (specifically this one).
  2. The resource is dated the 18th day of July in the year 2019 at 21:24 in the UTC+09 time zone (JST).
  3. The resource is about a number of people killed.

Although I would not have chosen such a verbose timestamp string (the purpose of the YYYYMMDD string is simply to help differentiate multiple publications published at different times), I appreciate the thought. The ISO 8601 standard permits reduced precision, which is useful because often I do not want to think too much about the exact second an event starts or ends; citing the day on which an event occurred is often sufficiently precise.

Future Work

The next tasks I see that should be performed for the Kyoto Animation arson attack article are (lowest-hanging fruit first):

  • Add frequently missing reflist fields (e.g. date=, last=, first=, archive-url=, archive-date=, url-status=).
  • Reformat reflist fields to use consistent order.
  • Read sources and verify semantic content.

Special Thanks

I would like to thank the Internet Archive for saving snapshots of webpages that succumb to link rot. Please donate to them; I think they are currently the best of humanity's efforts to secure the past.

Posted 2021-06-07T08:01:53+0000

FreedomBox Calendar

Created by Steven Baltakatei Sandoval on 2021-03-10T22:45Z under a CC BY-SA 4.0 license and last updated on 2021-03-12T01:50Z.

Summary

I decided to try the Radicale CalDAV software available as a FreedomBox app. My motivation for doing so was to be less reliant upon Google Calendar, the calendar and task server I have been using for years.

Background

I first started using computer calendars when I was attending university; I had barely started using email via Gmail in my last year of high school. I remember using the calendar software on the iMac my father purchased for me; the software was very useful in helping me keep track of when all my classes were in the fast-paced Stanford quarter system. Later, I mostly used Microsoft Outlook because my employer used it for internal meeting scheduling and correspondence. After I read Free Software, Free Society by Richard Stallman, I grew a desire to use free and open-source software whenever it was available.

To that end, one of the first major changes I made was to switch from Microsoft's Windows and (at the time) Apple's OS X to GNU/Linux Debian. I quickly learned that dual-booting was more hassle than it was worth. I found Debian was a viable substitute for every major computing activity of mine except for gaming. Therefore, I purchased a dedicated GNU/Linux machine from Think Penguin and ran Debian on that. I have been satisfied ever since.

However, some computing activities I did not switch over because I had not been using them as often. I had not been maintaining a personal calendar because either:

  1. A schedulable event would involve my employer, so I would default to whatever scheduling system they require.

  2. Schedulable events were so infrequent that I didn't think I needed to bother even with Google Calendar.

For years I have neglected to maintain a personal calendar. However, I believe it is a useful activity, especially when planning tasks for myself. I have played with Org Mode, but I haven't caught on to using its task-management functionality (i.e. "TODO"), which appears very robust. Org Mode attempts to do as much as it can within the GNU Emacs text editor. However, I have yet to see an Emacs calendar synchronization function that I like. I'm sure one exists, but I haven't noticed such a package in discussion threads about what people use Emacs for. So, even though I'd like to maintain a calendar and task list, Org Mode hasn't really caught my attention. That's where my experiment with FreedomBox comes in.

FreedomBox is a free and open-source project to make server software available and usable to non-sysadmins. Although I have some experience crafting my own Bash scripts and managing Debian system services via systemd commands, I have been averse to messing with web server software such as Apache. That changed after I purchased a FreedomBox device from Olimex (the "Pioneer Edition FreedomBox Home Server Kit"), a small ARM computer about the size of a deck of playing cards. It ships with a set of apps, including a calendar server called "Radicale". Radicale stores calendar, journal, and task data that remote clients can synchronize with; this synchronization can occur over the public internet, since FreedomBox software has been designed to let all its apps communicate securely using a user's own public domain name (e.g. reboil.com for this blog). This means that I can activate the Radicale app, initialize a calendar, copy the provided synchronization URL into my smartphone's or computer's calendar app, and then update the calendar while I am away from home (specifically, away from the FreedomBox located there). None of this requires that I save any data with a commercial third party such as Google or Apple. With the FreedomBox, I am my own server.

Setup

The setup instructions for Radicale can be found here in the Debian wiki for the FreedomBox. Basically, the steps are:

  1. Set up the FreedomBox (Let's Encrypt certificate, dynamic DNS if applicable, and a username/password to push/pull CalDAV data).
  2. Install the Radicale app (the CalDAV server) via the FreedomBox's system web interface.
  3. Log in to Radicale as a FreedomBox user and create a calendar/journal/task collection via the Radicale web interface.
  4. Copy the collection's CalDAV URL into a client app (e.g. DAVx5 on Android, Thunderbird on GNU/Linux Debian). Provide the client app with the password of a FreedomBox user.
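For reference, the synchronization URL copied in step 4 typically has roughly this shape (the domain, username, and collection ID below are invented placeholders, and the exact path may vary by setup):

```
https://example.com/radicale/alice/f0e1d2c3-4b5a-6789-abcd-ef0123456789/
```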

After following the steps described in the wiki for my Android phone and my work machine's Thunderbird instances, I found I was able to successfully push a calendar event from Thunderbird and a task from Android. Now I don't have to use Google Calendar!

Conclusion

Using FreedomBox and Radicale, I set up my own personal digital calendar server without needing to involve Google Calendar at any step.

Posted 2021-03-10T23:56:08+0000

Personal Text Logger

Created by Steven Baltakatei Sandoval on 2021-03-09T06:11Z under a CC BY-SA 4.0 license and last updated on 2021-03-10T02:46Z.

Summary

I wrote a bash script a while ago for compressing and encrypting log text streams produced by other scripts. I call it bklog. A static copy (version 0.1.33) is available here but I version track it in an environmental sensor script repository here.

Background

Born from a desire to record my surroundings

I wanted a program to write to disk the observations captured by environmental sensor data loggers. I also wanted such a program to let me encrypt captured sensor data against a public key in case it captures sensitive data (e.g. my personal smartphone's location data) that I may need to transfer through shared spaces (e.g. a cloud service provider or someone else's computer).

The reason I wanted this program was to be able to do something with the data produced by Raspberry Pi devices I have been tinkering with. I could see myself generating temperature, air pressure, location, and other trendable data. At first, I started making an ad hoc bash script to record each of these items, but I then decided to make a general script that could compress and encrypt any text stream via stdin.

Encrypted with age

I had seen age recommended as a command-line encryption tool whose main feature is that it has fewer configuration options than gnupg, a tool often used for encrypting files. I was attracted to age because it accepts input data in the form of stdin and its public keys take the form of 62-character strings. Also, the key generation process is done entirely in the command line. Here is example age-keygen output showing what a key pair in age beta1 looks like (the AGE-SECRET-KEY line is what would be saved to a file such as secretkey.txt):

baltakatei@mycomputer:~$ age-keygen
# created: 2021-03-09T06:32:52Z
# public key: age14a65znxam4k45kd4xg0lc8uu0yvlpyj376kap970mcxf066wycqq57xzqd
AGE-SECRET-KEY-1VEDW7UEKRCE6P2NUDEDXZF0RJ23QTND5PZP97SHHD5TG6Z7CL70Q09A4D5

As of 2021-03-09, age is in beta7. The current version of bklog, 0.1.33, assumes use of beta2.

Recently useful for recording uptime statistics

I hadn't touched bklog in a while after getting it to work successfully at recording temperature and location data with a set of Raspberry Pi Zero W devices I played with during the 2020 COVID-19 restrictions. However, I recently found a use for it when I decided to try to collect some uptime, bandwidth, and system process data from the server that runs this blog. I was pleased that past-me had decided to include usage information for the program's many options. I wrote some bash scripts whose only purpose is to output a single stream of data continuously. For example, here's the simple script named uptime_continuous.sh for producing uptime data:

#!/bin/bash
# Desc: Outputs `uptime` every 15 seconds

while true; do
    uptime
    sleep 15
done

Such a script produces output like this:

07:13:47 up  1:30,  1 user,  load average: 0.36, 0.38, 0.37
07:14:02 up  1:30,  1 user,  load average: 0.42, 0.40, 0.37
07:14:17 up  1:31,  1 user,  load average: 0.40, 0.39, 0.37

A separate bash script run by cron would pipe the output of this initial stream into bklog, which takes care of compressing, encrypting, and writing the data to disk. Because bklog was intended to run on small GNU/Linux systems such as Raspberry Pi devices that use SD cards (flash memory with a limited number of writes), it collects a buffer for a period of time (default: 10 minutes) before compressing, encrypting, and writing a separate file to a memory-only directory (default: /dev/shm). Then, this file is appended to a time-stamped output tar file (default: one tar file per day). I provided several option flags that allow one to adjust the time zone, output file name patterns, time periods, etc. An example use of uptime_continuous.sh with encryption enabled is here:

# Desc: Logs system statistics
# Note: Run at boot or every day at midnight UTC
# Depends: bklog, ifstat, top, uptime

~/.local/bin/uptime_continuous.sh | /usr/local/sbin/bklog -v -e \
  -r age14a65znxam4k45kd4xg0lc8uu0yvlpyj376kap970mcxf066wycqq57xzqd \
  -o "/home/admin/logs" -l "uptime" -w ".log" -c -z "UTC" -b "600" -B "day" \
  1>~/$(date +%s)..uptime_logger.log 2>&1 &
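The buffer-then-append cycle described above can be sketched in miniature. This is a simplified stand-in for illustration, not the real bklog; the 2-second buffer and file names are arbitrary assumptions (bklog defaults to 600 seconds and /dev/shm):

```shell
#!/bin/bash
# Simplified sketch of one bklog buffer cycle: collect stdin for a fixed
# period, compress the chunk, then append it to a date-stamped tar archive.
buf_ttl=2                              # bklog default: 600 seconds (-b)
workdir=$(mktemp -d)                   # bklog default: under /dev/shm (-t)
tarfile="$workdir/$(date -u +%Y%m%d)..example.tar"
chunk="$workdir/$(date -u +%Y%m%dT%H%M%S%z)..example.log"
timeout "$buf_ttl" cat > "$chunk" || true   # buffer stdin for buf_ttl seconds
gzip "$chunk"                               # compress (-c); bklog would also
                                            # encrypt here when -e is given
tar -rf "$tarfile" -C "$workdir" "$(basename "$chunk").gz"  # append chunk
tar --list -f "$tarfile"
```

The real bklog repeats this cycle until its script time-to-live (-B) expires.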

Here is usage information that can be obtained by running $ bklog --help:

USAGE:
    cmd | bklog [ options ]

OPTIONS:
    -h, --help
            Display help information.
    --version
            Display script version.
    -v, --verbose
            Display debugging info.
    -e, --encrypt
            Encrypt output.
    -r, --recipient [ string pubkey ]
            Specify recipient. May be age or ssh pubkey.
            May be specified multiple times for multiple pubkeys.
            See https://github.com/FiloSottile/age
    -o, --output [ path dir ]
            Specify output directory to save logs. This option is required
            to save log data.
    -p, --process-string [ filter command ] [ output file extension] 
            Specify how to create and name a processed version of the stdin.
            For example, if stdin is 'nmea' location data:

            -p "gpsbabel -i nmea -f - -o gpx -F - " ".gpx"

            This option would cause the stdin to 'bklog' to be piped into
            the 'gpsbabel' command, interpreted as 'nmea' data, converted
            into 'gpx' format, and then appended to the output tar file
            as a file with a '.gpx' extension.
            This option may be specified multiple times in order to output
            results of multiple different processing methods.
    -l, --label [ string ]
            Specify a label to be included in all output file names.
            Ex: 'location' if stdin is location data.
    -w, --store-raw [ file extension ]
            Specify file extension of file within output tar that contains
            raw stdin data. The default behavior is to always save raw stdin
            data in a '.stdin' file. Example usage when 'bklog' receives
            'nmea' data from 'gpspipe -r':

            -w ".nmea"

            Stdin data is saved in a '.nmea' file within the output tar.
    -W, --no-store-raw
            Do not store raw stdin in output tar.
    -c, --compress
            Compress output with gzip (before encryption if enabled).
    -z, --time-zone
            Specify time zone. (ex: "America/New_York")
    -t, --temp-dir [path dir]
            Specify parent directory for temporary working directory.
            Default: "/dev/shm"
    -R, --recipient-dir [path dir]
            Specify directory containing files whose first lines are
            to be interpreted as pubkey strings (see '-r' option). Only
            one directory may be specified.
    -b, --buffer-ttl [integer]
            Specify custom buffer period in seconds (default: 300 seconds)
    -B, --script-ttl [time element string]
            Specify custom script time-to-live in seconds (default: "day")
            Valid values: "day", "hour"

Here, bklog is located within /usr/local/sbin/. The output directory is specified to be /home/admin/logs. Each file saved in the output tar archive contains "uptime" in its name and ends with .log. The time zone is specified to be "UTC" (what my server uses, and useful since I am programming the cron job to run at midnight every day). Buffered files are written every 600 seconds. The verbose diagnostic output (optional; 1>) and any error messages (2>&1) are written to a time-stamped (UNIX epoch seconds) file in the home folder ($(date +%s)..uptime_logger.log). Files are encrypted against the age public key defined by the string "age14a65znxam4k45kd4xg0lc8uu0yvlpyj376kap970mcxf066wycqq57xzqd".

The result is a tar file named 20210309..mycomputer_uptime.gz.age.tar. The hostname mycomputer is included by default. The file's contents after about an hour are:

$ tar --list -f 20210309..mycomputer_uptime.gz.age.tar
20210309T090729+0000..VERSION
20210309T090726+0000--PT2M45S..mycomputer_uptime.log.gz.age
20210309T091011+0000--PT10M0S..mycomputer_uptime.log.gz.age
20210309T092011+0000--PT10M1S..mycomputer_uptime.log.gz.age
20210309T093012+0000--PT10M0S..mycomputer_uptime.log.gz.age
20210309T094012+0000--PT10M1S..mycomputer_uptime.log.gz.age
20210309T095013+0000--PT10M0S..mycomputer_uptime.log.gz.age
20210309T100013+0000--PT10M0S..mycomputer_uptime.log.gz.age
20210309T101013+0000--PT10M1S..mycomputer_uptime.log.gz.age
20210309T102014+0000--PT10M0S..mycomputer_uptime.log.gz.age

Each file's name includes an ISO 8601 time period before the ... I make use of the ISO 8601 duration notation (the strings beginning with PT), which indicates the length of each time period. -- is recommended by the standard as a replacement for / since / causes problems when used within UNIX file names.
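Since the naming convention is regular, a member name can be pulled apart with plain shell parameter expansion; a small sketch:

```shell
# Split an archive member name into its start timestamp and ISO 8601 duration.
name='20210309T090726+0000--PT2M45S..mycomputer_uptime.log.gz.age'
period="${name%%..*}"       # 20210309T090726+0000--PT2M45S
start="${period%%--*}"      # 20210309T090726+0000
duration="${period##*--}"   # PT2M45S
echo "$start $duration"
```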

VERSION files contain the version of bklog and age used as well as some other metadata useful for someone interpreting the archive.

Other scripts and commands can be used to automatically extract and reconstitute a continuous uptime file; the point is that the stream of uptime data produced by uptime_continuous.sh is all saved.
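Because each chunk's ISO 8601 prefix makes lexicographic order match chronological order, reconstituting a continuous log from already-decrypted chunks can be a simple sorted concatenation. A sketch (the helper name reconstitute is mine, not part of bklog):

```shell
# Concatenate decrypted per-period log chunks in chronological order.
# The ISO 8601 filename prefixes sort lexicographically, so `sort` suffices.
reconstitute() {
  local outfile="$1"; shift
  printf '%s\n' "$@" | sort | xargs cat > "$outfile"
}
```

Usage would look like: reconstitute uptime_full.log *..mycomputer_uptime.log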

Some example commands that can decrypt the files are:

$ tar -xf 20210309..mycomputer_uptime.gz.age.tar  # extract files
$ for file in ./*.age; do
  age -d -i ~/secretkey.txt "$file" | gunzip > "${file%.gz.age}";
done;

Here, secretkey.txt contains the secret key generated by age-keygen as described earlier.

Conclusion

I found that an older script I wrote for recording environmental sensor data was also useful for recording system statistics. I described how uptime data can be regularly produced by a custom script, uptime_continuous.sh, and piped into bklog for compression, encryption, and writing.

Posted 2021-03-09T11:07:19+0000

Final Fantasy Pray Lyrics Spanish in TeXmacs

Created by Steven Baltakatei Sandoval on 2021-03-08T11:50Z under a CC BY-SA 4.0 license and last updated on 2021-03-08T12:03Z.

I exported the Spanish translation of the Final Fantasy Pray song that I did last year into TeXmacs format for publication on my TeXmacs article list. I did this to test the TeXmacs website generator's capability to render Japanese characters. The resulting .xhtml and .pdf files show the content adequately.

Posted 2021-03-08T11:58:02+0000

TeXmacs Static Website

Created by Steven Baltakatei Sandoval on 2021-03-04T04:26Z under a CC BY-SA 4.0 license and last updated on 2021-03-04T06:06Z.

Summary

I created a static website using TeXmacs. It can be found here.

Background

I rewrote an older blog post of mine about a distance-bounding protocol; the original was authored in markdown with MathML tags. The math typesetting features of TeXmacs, along with its static website generator and default CSS settings, made for a much nicer-looking site.

I used the Notes on TeXmacs blog as a template for some features, although I didn't use all of its Scheme features (that blog automatically updates an Atom feed, among other things). Redirecting from index.html to another page was a feature I did use. I may adopt the Scheme macros and some CSS, but even with just the bare TeXmacs website generator settings, it looks pretty good!

Posted 2021-03-04T05:48:45+0000

Freedombox Static Website Research

Created by Steven Baltakatei Sandoval on 2021-02-17T22:32Z under a CC BY-SA 4.0 license and last updated on 2021-02-18T00:02Z.

Background

I have been investigating other possible methods for publishing content accessible under my reboil.com domain. Currently, this ikiwiki blog is accessible via: https://reboil.com/ikiwiki/blog/ . However, its formatting potential is not great; I would not use it alone when publishing mathematical equations, for example (hence my interest in TeXmacs).

The FreedomBox I own and run this blog from has been great for introducing me to the concepts of securing my own personal webpages served by Apache. It permits me to publish wiki and blog content via MediaWiki or Ikiwiki. Although I am familiar with the wikitext markup of MediaWiki thanks to time I have spent editing Wikipedia pages, I prefer simpler solutions that don't involve accepting public input in the form of comments or account registration. I just want to be able to publish my own works.

Investigations

I recently investigated how I could use org mode (a note organization application within Emacs) to automatically render HTML pages for serving within an Ikiwiki blog. However, I ultimately decided that none of the org mode plugins for Ikiwiki were suitable for me.

I later investigated the possibility of using a mathematical typesetting program called TeXmacs to render a static website using its own WYSIWYG interface. One disadvantage of authoring webpages in TeXmacs is that properly editing the source files (file extension .tm) requires running the graphical WYSIWYG interface in order to immediately see the typesetting results. Markdown, by contrast, is a format in which the source is itself readable text. TeXmacs has a static website generator function that takes a source directory full of .tm files and outputs rendered .xhtml files viewable by a web browser; CSS preferences are set in the TeXmacs Preferences settings. Some built-in CSS preferences make the resulting webpage appear quite nice. The disadvantage of this method is that quick authoring of blog posts requires firing up TeXmacs to render a new .tm source file for each blog post I compose.

I also saw that FreedomBox developers have decided to add Wordpress as an app alongside Ikiwiki as part of their 2021 Roadmap. A discussion on the forum indicates this decision was made due to user feedback that publishing a website on the FreedomBox still requires some technical know-how regarding GNU/Linux file permissions and modifying configuration files via the command line interface through an ssh connection. I'm reminded of my time messing with Ikiwiki's /var/lib/ikiwiki/blog.setup configuration files in order to enable or disable built-in plugins. I am wary of using Wordpress, since popular plugins for it have been a regular source of security breaches according to my time listening to the Security Now podcast I have been following for years.

Proposal

So, for now, I think I will stick to using Ikiwiki for composing simple text-only blog posts in org mode and then converting them to markdown for Ikiwiki to process. However, if ever images or mathematical equations need to be published, I think I will create a static website using TeXmacs and serve it under my reboil.com Freedombox via a root cron job that git pull's a repo containing the TeXmacs site generator output and rsync's select parts of the repository to the FreedomBox's /var/www/html/ directory.

blog seems appropriate for the Ikiwiki site I have since it implies a "log", a stream of ideas that don't necessarily contain essential structured information. However, the TeXmacs pages I make will, by their nature, be capable of much more custom formatting thanks to TeXmacs's deep MathML support and pleasant typesetting features (headers, equation numbering, image linking, etc.). Therefore, also calling the TeXmacs static website a "blog" seems inappropriate. "Notes", "Articles", "Analects", or "Documents" seem more appropriate to describe what TeXmacs produces when rendering source .tm files. I like "Articles", since it invokes the idea of "newspaper articles" or "column articles": relatively independent parts of a larger typeset publication. This 1913 definition from the Webster dictionary highlights the meaning I'd like to emphasize:

Article \Ar"ti*cle\, n. [F., fr. L. articulus, dim. of artus joint, akin to Gr. ?, fr. a root ar to join, fit. See {Art}, n.]

  1. A distinct portion of an instrument, discourse, literary work, or any other writing, consisting of two or more particulars, or treating of various topics; as, an article in the Constitution. Hence: A clause in a contract, system of regulations, treaty, or the like; a term, condition, or stipulation in a contract; a concise statement; as, articles of agreement. [1913 Webster]

  2. A literary composition, forming an independent portion of a magazine, newspaper, or cyclopedia. [1913 Webster]

Project code update

Here is a set of project codes related to my reboil.com static website.

BK-2020-08: Ikiwiki blog

BK-2020-08-2: Ikiwiki blog binary blobs

BK-2020-08-3: TeXmacs articles

BK-2020-08-4: TeXmacs articles binary blobs

  • Git repository
  • Note: a submodule of the BK-2020-08-3 git repository.

Conclusion

I think I will call my TeXmacs-powered static website articles, as in "articles of a newspaper" or, more ambitiously, "articles of an academic journal". I will host it at reboil.com/articles/, probably using a cron job on my FreedomBox to automatically rsync article files rendered and committed to a git repository.

Posted 2021-02-18T12:11:29+0000

Citation Needed Hunt

Created by Steven Baltakatei Sandoval on 2019-07-16T14:23Z under a CC BY-SA 4.0 license and last updated on 2020-12-22T19:10Z.

Citation Hunt

A tool for looking up random sentences with {{citation needed}} tags in Wikipedia is Citation Hunt.

This can be used to help find a random place to start improving Wikipedia.

It is not an efficient method for improving Wikipedia (given how easy it is for an editor to add a {{citation needed}} tag compared to how difficult it is to understand and locate an appropriate source). However, I think it is more useful than clicking Wikipedia's "Random article" link since it can help focus your mind on a single-sentence claim; when I open a random Wikipedia article and see dozens of reference tags and paragraphs of text, the phrase "Where do I even start?" comes to mind.

This work by Steven Baltakatei Sandoval is licensed under CC BY-SA 4.0

Posted 2021-02-17T16:18:01+0000

Russian Roulette

Created by Steven Baltakatei Sandoval on 2019-07-18T05:04Z under a CC BY-SA 4.0 license and last updated on 2020-12-23T00:52Z.

Background

I took it upon myself to review a {{citation needed}} tag on the Russian roulette page on Wikipedia.

I found a reference that cited the Oxford English Dictionary, which itself cited a 1937-01-30 issue of Collier's, a magazine containing short stories. The issue contained a short story named "Russian Roulette" by a person named Georges Surdez. I found a source for the document here and here.

It's interesting to me that the Oxford English Dictionary cites a document that is rather obscure. It makes me wonder what a library filled with every source the Oxford English Dictionary cites would look like. It seems like an ambitious project, but one that would be necessary to preserve the English language's history in a technically satisfying manner. Something to think about.

Wikipedia edit

The Wikipedia article containing the updated information as of 2019-07-16T22:54:07 is here:

I had removed usage of "Russian Poker" from a description of a 2019-01 incident in which a police officer shot another police officer in what the New York Times described as "Russian Roulette" but which no source I could find reporting on the incident described as "Russian Poker". I think using that particular phrase to describe an incident that no source describes as such would be creating information out of nothing ("original research"). In this case, the information created would be the strengthening of the link between the phrase "Russian Poker" and the concept of pulling the trigger of a possibly loaded firearm while it is aimed at another person. I said as much in my descriptions of the edits.

I confirmed that the Collier's quote is partially referenced in a printed copy of the OED2 (page 295) in my local library. The relevant sections are:

> `REVOLUTION` *sb*. `I I`; **Russian roulette**, an act of
bravado in which a person loads (usu.) one
chamber of a revolver, spins the cylinder, holds
the barrel to his head, and pulls the trigger; also
*fig*.;

> Revolution had never taken place. **1937** `G. SURDEZ` in
*Collier's* 30 Jan. 16 ‘Did you ever hear of Russian roulette?’
…With the Russian army in Rumania, around 1917,…some
officer would suddenly pull out his revolver,…remove a
cartridge from the cylinder, spin the cylinder, snap it back in
place, put it to his head and pull the trigger.

Citation Hunt

I had originally found this page to edit via a Citation Hunt webpage that looks up random {{citation needed}} tags in Wikipedia articles and presents them to the user for consideration. URL is here.

I'm also considering using markdown to format text but it hurts legibility if I'm using vanilla emacs. (edit(2020-12-22T19:22Z): I rewrote this article in markdown.)


This work by Steven Baltakatei Sandoval is licensed under CC BY-SA 4.0

Posted 2021-02-17T16:18:01+0000

Kyoani Arson Attack

Created by Steven Baltakatei Sandoval on 2019-07-18T23:21Z under a CC BY-SA 4.0 license and last updated on 2020-12-23T00:53Z.

Wikipedia article

I helped to proofread references on the Wikipedia article for the Kyoto Animation arson attack.

33 dead. The attack occurred at KyoAni's Studio 1 facility, where normally about 70 people work.


This work by Steven Baltakatei Sandoval is licensed under CC BY-SA 4.0

Posted 2021-02-17T16:18:01+0000

This blog is powered by ikiwiki.

Text is available under the Creative Commons Attribution-ShareAlike license; additional terms may apply.