So, in my normal and ordinary forays into the world of Linux distros, I happened upon a way to use my Lenovo IdeaPad 3 with Linux via Crostini. Crostini is a facility in ChromeOS, built on top of LXD/LXC, that allows Chromebooks to run a tightly integrated container - one that runs Debian... by default. Following a tutorial from a certain Chris Titus Tech on YouTube, I removed the default container and installed Arch.
It's official - I use Arch, btw. On my CHROMEBOOK, LOL
The integration is very tight - /mnt/chromeos exposes the userland Files directory, and any additional folders shared in the ChromeOS GUI appear there in the container. It also has hooks for exposing GUI apps from Linux that would not normally be able to run on a Chromebook - Gimp, Joplin, the desktop versions of Brave and Firefox, Doom Emacs - you name it. More than a few people have said that this arrangement turns any Chromebook into a nearly perfect self-contained web development environment: once you get your Node or Flask or Rails app running in the container, you can browse the server from the ChromeOS side and test in all the mobile and desktop browsers available there. Truly a thing of beauty.
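As a quick illustration of that workflow - a minimal sketch of my own, assuming Flask is installed in the container and that the default container hostname still applies - a dev server started inside the container is immediately reachable from the ChromeOS browser:

```python
# app.py - a minimal Flask app inside the Crostini container
# (assumes: pip install flask). Run with: python app.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from the Crostini container!"

if __name__ == "__main__":
    # Bind to all interfaces so the ChromeOS side can reach it -
    # typically at http://penguin.linux.test:5000 for the default
    # container name, or via localhost port forwarding.
    app.run(host="0.0.0.0", port=5000)
```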
So, between this and my desire to get certain files backed up in an alternate location (i.e. Backblaze B2), I set out to install Duplicati onto my Arch container. Oh sure, ChromeOS is designed to automagically sync everything to your Google Drive account, but:
Simple file sync, while better than nothing, is not a true backup, and
The 3-2-1 rule calls for three copies of the data on two different media, with at least one offsite - more than a single synced copy anyway
I don't need to back up the entire Chromebook or container, only the home directories. Since both ChromeOS and the container should, by rights, be able to be blown away and provisioned again from scratch, I thought I would just set up a Duplicati job to back up the only directories that matter to the same archive that all of my other machines use, and be done with it. Right?
Nothing ever takes just twenty minutes
I thought that I would just install Duplicati into the Arch container and be done with it. Granted, it is only in the AUR, but that does not automatically mean that it will not work - and indeed, it launched just fine. Since Crostini exposes the container's services on localhost to ChromeOS, I could use the web UI from the ChromeOS browser without worrying about the fact that my container did not have its own desktop environment. I set up a job, put in my Backblaze credentials, checked the connection - success!
And then, I tried to back up. It worked fine - until it didn't. I got to about 1 GB in storage when the following error occurred:
error : Ssl error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
At this point, the job stopped working - which was confusing, because it had actually managed to move data. But apparently, this was not to be. After flailing about for an hour, I found that this error comes not from Arch, Chrome, or Duplicati - but from Mono.
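For context, the error itself just means the TLS stack could not build a valid chain of trust for the server it was talking to. A rough Python sketch - my illustration, not anything from Duplicati or Mono, and using Backblaze's API host purely as an example endpoint - reproduces the same failure mode by giving the client nothing to trust:

```python
# Illustration of what CERTIFICATE_VERIFY_FAILED means: the TLS library
# cannot build a valid chain from the server's certificate to a trusted root.
import socket
import ssl

def try_handshake(host: str, port: int = 443) -> None:
    # A context with an intentionally empty trust store, so verification is
    # guaranteed to fail - loosely analogous to a trust store the library
    # cannot make sense of.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True
    # Note: no load_default_certs() / load_verify_locations() - nothing is trusted.
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                print("handshake succeeded (unexpected)")
    except ssl.SSLCertVerificationError as exc:
        # Python's OpenSSL wrapper reports the same underlying condition:
        # "certificate verify failed"
        print(f"verification failed: {exc}")

try_handshake("api.backblazeb2.com")
```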
The problem with forking
Mono is the open source implementation of C#/.NET, and is what Duplicati is written in. It is truly cross-platform - I have actually restored Windows files to Macs, and Linux files to Windows, with it. In order to do its magic with TLS and encryption, it uses a library called BoringSSL.
Or rather, its own fork of BoringSSL - because Google provides access to its code, but is apparently not taking outside commits. And remember, the first rule of cryptographic code is: do NOT write your own cryptographic code.
Now, to be clear - open source certainly allows anyone to fork a project and make it their own. BoringSSL is, in and of itself, a fork of OpenSSL. The exact license may determine whether you also have to make your changes freely/libre available, but beyond GPL/copyleft considerations, there really is not much else to worry about. One risk - and it is significant - is that changes keep happening in the project you forked from, the "upstream" project, without reaching your fork. A similar risk arises when changes or bug fixes land in a downstream fork but never make it back into the main project. In both cases, the codebases drift further and further apart, and there is no longer one source that has all the bug fixes, patches, etc.
This hits the open source community hard whenever it happens - it causes a lot of extra work for everyone, and diminishes the benefits of sharing the code in the first place.
After a few hours of diving through multiple forums, I learned that many projects in the open source world that use BoringSSL are maintaining their own forks - out of necessity, since Google is not taking most patches (see their statement below). So, when a bug in BoringSSL causes it to go titsup whenever it encounters an invalid certificate - instead of, for instance, gracefully skipping or blacklisting the cert and moving on - each downstream project that maintains its own fork must get the patch and apply it to that fork itself.
This library maintains an SSL/TLS certificate store, and provides a few other key cryptographic functions. Some of the projects maintaining BoringSSL forks are Electron, Mono, TrueNAS, and certain RHEL utilities. Both RHEL and Electron have encountered issues and patched them for their projects - but those patches went to their own forks. Anyone else using BoringSSL apparently had to go get the patches from the other projects...
How BoringSSL (is supposed to) work
When installed, best practice is to sync the BoringSSL certificate store with either the Mozilla certificate bundle (root certificates, intermediate certificates, etc.) or with the OS certificate store. As long as there is a valid chain of trust, X.509 certificate validation will ensure a secure connection. There have been a few times - since at least 2019 - when this process has caused issues, but the most recent problem apparently came to light in September 2021.
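To make those two trust sources concrete, here is a small sketch - in Python rather than BoringSSL itself, assuming the third-party certifi package for the Mozilla bundle and using Backblaze's API host only as an example endpoint - of "trust the OS store" versus "trust the Mozilla bundle":

```python
# Two ways to establish the chain of trust for a TLS handshake:
# the operating system's certificate store, or the Mozilla CA bundle.
import socket
import ssl

import certifi  # assumes: pip install certifi

HOST = "api.backblazeb2.com"  # example endpoint

# Option 1: trust whatever the operating system's certificate store trusts.
os_ctx = ssl.create_default_context()

# Option 2: trust the Mozilla root bundle shipped by certifi instead.
mozilla_ctx = ssl.create_default_context(cafile=certifi.where())

for name, ctx in [("OS store", os_ctx), ("Mozilla bundle", mozilla_ctx)]:
    # The handshake only succeeds if the server presents a chain
    # (leaf -> intermediates) that terminates in a root the context trusts.
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print(f"{name}: handshake OK, {tls.version()}")
```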
How BoringSSL actually works
In September 2021, the DST Root CA X3 certificate that Let's Encrypt had long relied on for cross-signing expired. It has been replaced, of course, and normally an expired root certificate would just be swept aside. But instead of behaving gracefully or predictably, BoringSSL will crash. This problem was apparently patched by Electron, but since their patch is downstream of BoringSSL, no one else would easily see it or get to it. Patches from Electron apparently have not been accepted by Google, so RHEL, TrueNAS, and Mono have all had to take the Electron patch and manually apply it to their forks - which they still must maintain on their own, because Google is not taking any patches that do not relate directly to how they themselves use BoringSSL. Indeed, their website makes it clear:
"BoringSSL is a fork of OpenSSL that is designed to meet Google's needs.
Although BoringSSL is an open source project, it is not intended for general use, as OpenSSL is. We don't recommend that third parties depend upon it. Doing so is likely to be frustrating because there are no guarantees of API or ABI stability.
Programs ship their own copies of BoringSSL when they use it and we update everything as needed when deciding to make API changes. This allows us to mostly avoid compromises in the name of compatibility. It works for us, but it may not work for you."
Where does this leave me?
Well, I was not happy to learn this - by the time I understood what the full problem was, I had burned almost five hours trying to get the damn thing working. I tried forcing the OS certificate store to sync again, and I tried refreshing both Arch's and Mono's certificates. Reading about what the RHEL community went through, I learned that there was a workaround: manually blacklist the errant root certificate so that BoringSSL would not have anything to choke on. This was also how I found out that Mono maintains its own certificate store, such that there is one set of tools for the OS, and one for Mono itself.
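Conceptually, that blacklist workaround amounts to stripping the expired root out of whatever bundle the TLS stack is fed. Here is a rough sketch of the idea - mine, not the RHEL community's actual procedure; the paths are examples, and it assumes a recent release of the third-party cryptography package:

```python
# Read a CA bundle, drop any certificate that has already expired
# (e.g. DST Root CA X3), and write a cleaned bundle the TLS stack can
# be pointed at instead. Assumes: pip install cryptography (recent release).
from datetime import datetime, timezone

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding

SRC = "/etc/ssl/certs/ca-certificates.crt"   # example path to the OS bundle
DST = "/tmp/ca-certificates.cleaned.crt"     # example output path

now = datetime.now(timezone.utc)
with open(SRC, "rb") as f:
    bundle = f.read()

kept, dropped = [], []
for cert in x509.load_pem_x509_certificates(bundle):
    if cert.not_valid_after_utc < now:
        dropped.append(cert.subject.rfc4514_string())
    else:
        kept.append(cert.public_bytes(Encoding.PEM))

with open(DST, "wb") as out:
    out.writelines(kept)

print(f"kept {len(kept)} certificates, dropped {len(dropped)}:")
for subject in dropped:
    print("  expired:", subject)
```

Presumably Mono's separate store would still need to be resynced from the cleaned bundle afterwards, since - as I found out the hard way - it does not simply read the OS store.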
At this point, I started looking at other options - partly because I was actually in a serious time crunch, and partly because I seriously thought about (and am still considering) switching my container to a different distro. After all, the version of Duplicati in the AUR is not the latest release, and it will not update using the in-app updater either. This was when I discovered that ChromeOS offers a native way to back up the entire container. More than I wanted to back up, but by then I thought - hell, why not? A single button, a folder on the device or in Google Drive with enough free space, and that's it. Not as efficient as my first scenario, but good enough, considering I had burned five hours on what was supposed to be a twenty minute task.
A native, integrated backup solution for Linux containers in ChromeOS. What could possibly go wrong?
To be continued...