Showing My Work

In the normal and ordinary course of a typical week, I touch many different items of technology. Hardware, software. Mobile. Server. I do all kinds of cool stuff. However, I have not done a very good job of documenting any of this.

That is, until now.

I have been moved to show my work after reading the book "Show Your Work" by Austin Kleon. I felt really compelled to change my approach to my computer experimentation as I have precious little to show for all of the hard work I have put into:

  • My website

  • My computers

  • My art

  • My electronics

  • My meetups (and the tangential conversations)

I routinely have all kinds of conversations with people who would love to see what I am up to, as I go places they would never think to go with my technology. In the past, I would just do it. But now I have been taking pictures and screenshots, and aim to record video as well.

At first it was vanity, but the points raised in Show Your Work have given a very clear purpose to all of this. I want to put back in what I took out of the Noosphere, and share the journey of discovery.

I have made the first of a series of changes that will show my work. I have added some photos, shared details about what I use to make this site, and have gotten into the habit of taking pictures when I go to meetups or harry my various devices. If I have anything to say about it, this year will be very interesting.

Curiouser and curiouser

Well, after a year or so, I managed to finally get into development in a professional way.... then get laid off at the end of the (previous) year. Fun while it lasted!

For now, though, I have been busy with sharpening the saw. I have obtained certs for ITIL from AXELOS and CC from (ISC)², and I have also been sweeping through my various projects to see what to keep, and what to throw away...

Whilst I maintain a keen interest in Python, I have been looking into the information security (infosec) world. There has been a great deal of hype around red team, offensive manoeuvres, penetration testing, and the like - but from what I can tell, most jobs are actually blue team (defensive) related.

Also, I have been looking into CI/CD and pipelines lately. I'd like to set this site up to work on one, as Nikola would lend itself very well to such an arrangement. I'd like to rearrange the style of the website, too.
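The core of such a pipeline would be pleasantly short. Here is a sketch of what the build job might run - the extras flag is per Nikola's install docs, and the rsync target is purely hypothetical:

    # Sketch of a CI build step for a Nikola site; the deploy target is made up.
    pip install "Nikola[extras]"   # Nikola's recommended install, with extras
    nikola build                   # renders the whole site into output/
    rsync -av output/ deploy@example.com:/var/www/site/   # hypothetical deploy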

So much to do, so little time....

My goal for this year is to show my work more often. More frequent updates to this site are certainly in order, as well as more pictures. I may even start showing videos on YouTube as well. Using scripts and pipelines to pull everything together should be fun, and useful. I already have a ton of pictures ready to show off, and I plan to grab some more screenshots and video too.

Happy (belated) New Year!

Happy New Year! Spent most of the end of 2021 fighting my various computing environments, with varying degrees of success. Looking forward to a new year full of new opportunity.

I have also made it a point to document my various projects more often, and indeed, took some steps to partially migrate my blog to a Gitlab repo in preparation for using CI/CD (Continuous Integration/Continuous Delivery).

I am now able to blog from any device with Git installed, which is proving to be very useful already. But with the final CI/CD piece in place, I can deploy from anywhere.
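Concretely, posting from any device becomes a short loop along these lines - the repo path and post title here are placeholders, not my actual setup:

    # Blog-from-anywhere sketch; the GitLab path is hypothetical.
    git clone git@gitlab.com:example/blog.git && cd blog
    nikola new_post -t "Showing My Work"    # scaffolds the new post file
    git add posts/ && git commit -m "New post" && git push
    # ...and the CI/CD piece deploys the built site from there.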

It occurs to me that I need to share more about what I do, and to document it as I do it - this year, I endeavour to show my work and share as much of what I know as I can.

Adventures in distro-hopping, Part 2: Electric Boogaloo

So, in my normal and ordinary forays into the world of Linux distros, I happened upon a way to use my Lenovo IdeaPad 3 with Linux via Crostini. Crostini is a facility in ChromeOS, built on top of LXD/LXC, that allows Chromebooks to run a tightly integrated container that runs Debian..... by default. Following a tutorial from a certain Chris Titus Tech on YouTube, I removed the default container and installed Arch.

It's official - I use Arch, btw. On my CHROMEBOOK, LOL
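For the curious, the container swap boils down to a handful of commands in the termina VM. A rough sketch - the Arch image alias is my assumption, so verify it against the tutorial:

    # From crosh (Ctrl+Alt+T in Chrome), start and enter the termina VM:
    vmc start termina
    # Inside termina, replace the default Debian container with Arch:
    lxc stop penguin --force
    lxc delete penguin
    lxc launch images:archlinux penguin   # image alias is an assumption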

The integration is very tight - /mnt/chromeos exposes the userland Files directory, and additional folders shared in the ChromeOS GUI appear here in the container. It also has hooks for exposing GUI apps from Linux that would not normally be able to run on a Chromebook - Gimp, Joplin, the desktop versions of Brave and Firefox, Doom Emacs - you name it. More than a few people have said that this arrangement turns any Chromebook into a perfect self-contained web development environment: once you get your Node or Flask or Rails app running in the container, you can browse the server from the ChromeOS shell and test in all the mobile and desktop browsers. Truly a thing of beauty.

So, between this, and my desire to get certain files backed up in an alternate location (i.e. Backblaze B2), I set out to install Duplicati onto my Arch container. Oh sure, ChromeOS is designed to automagically sync everything to your Google Drive account, but :

  1. Simple file sync, while better than nothing, is not a true backup, and

  2. The 3-2-1 rule demands at least three copies of the data anyway - on two different types of media, with one copy offsite

I don't need to back up the entire Chromebook or container, only the home directories. Since both ChromeOS and the container, by rights, should be able to be blown away and provisioned again from scratch, I thought I would just set up a Duplicati job to back up the only directories that matter to the same archive that all of my other machines go to, and be done with it. Right?
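On paper, the job I had in mind looks roughly like this when expressed through Duplicati's CLI - a sketch only, where the bucket, prefix, and credentials are all placeholders:

    # Sketch of the intended job via duplicati-cli; all values are placeholders.
    duplicati-cli backup \
      "b2://my-bucket/chromebook?auth-username=B2_KEY_ID&auth-password=B2_APP_KEY" \
      /home/username/ \
      --passphrase="CHANGE_ME"   # encrypt the archive before it leaves the machine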

Nothing ever takes just twenty minutes

I thought that I would just install Duplicati into the Arch container and be done with it. Granted, it is in the AUR, but that does not automatically mean that it will not work, and indeed, it launched just fine. Since Crostini allows the container to access localhost, I could use it from ChromeOS without worrying about the fact that my container did not have its own desktop environment. I set up a job, put in my Backblaze credentials, checked the connection - success !
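(Installing from the AUR is the usual clone-and-makepkg dance - a sketch, and the exact package name is my assumption, so check the AUR first:)

    # Typical AUR flow; 'duplicati-latest' is an assumed package name.
    git clone https://aur.archlinux.org/duplicati-latest.git
    cd duplicati-latest
    makepkg -si   # build the package and install it via pacman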

And then, I tried to back up. It worked fine - until it didn't. I got to about 1 GB of storage when the following error occurred :

error : Ssl error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED

At this point, the job stopped working. Which was confusing, because it actually managed to move data - but apparently, this was not to be. After flailing about for an hour, I found that this error is not from Arch, Chrome, or Duplicati - but from Mono.

The problem with forking

Mono is the open source implementation of C#/.NET, and is what Duplicati is written in. It is truly cross-platform, and I have actually restored Windows files to Macs, and Linux files to Windows with it. In order for it to do its magic with encryption, it uses a library called BoringSSL.

Or rather, its own fork of BoringSSL - because Google provides access to its code, but is apparently not taking outside commits. But the first rule of cryptographic code is: Do NOT write your own cryptographic code.

Now, to be clear - open source does certainly allow anyone to fork a project and make it their own. BoringSSL is, in and of itself, a fork of OpenSSL. The exact nature of the license used may determine whether you also have to make your changes freely/libre available, but beyond GPL/copyleft considerations, there really is not much else to worry about. One risk - and it is significant - is if changes happen in the project you forked from, the "upstream" project. A similar risk arises if changes or bug fixes happen in a downstream fork, but do not make it into the main project. In both cases, the codebases drift further and further apart, and there is no longer one source that has all the bug fixes, patches, etc.

This hits the open source community hard whenever it happens - as it causes a lot of extra work for everyone, and the benefits of sharing the code in the first place are diminished.

After a few hours of diving through multiple forums, I learned that many projects in the open source world that use BoringSSL are maintaining their own forks - out of necessity, since Google is not taking most patches (see their statement below). So, when a bug in BoringSSL causes it to go titsup whenever it encounters an invalid certificate - instead of, for instance, gracefully skipping or blacklisting the cert and moving on - each downstream project that maintains its own fork must get the patch and apply it to their fork.

This library maintains an SSL/TLS certificate store, and provides a few other key cryptographic functions. Some of these BoringSSL-forking projects are: Electron, Mono, TrueNAS and certain RHEL utilities. Both RHEL and Electron have encountered issues, and patched them for their projects - but those patches went to their own forks. Anyone else using BoringSSL apparently had to get the patches from the other projects....

How BoringSSL is supposed to work

When installed, best practice is to sync the BoringSSL certificate store with either the Mozilla certificate bundle (with root certificates, intermediate certificates, etc) or sync with the OS certificate store. As long as there is a valid chain of trust, the X.509 protocols will ensure a secure connection. There have been a few times - since at least 2019 - where this process has caused an issue, but the most recent problem apparently came to light in September 2021.
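On an Arch-based system running Mono, that sync step looks something like the following - a sketch, using the Arch default bundle path:

    # Refresh the OS trust store, then mirror it into Mono's own store.
    sudo update-ca-trust
    cert-sync --user /etc/ssl/certs/ca-certificates.crt   # Mono's cert store tool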

How BoringSSL actually works

The DST Root CA X3 root certificate that Let's Encrypt had been relying on was sunset in September 2021. It has been replaced, of course, and normally, an expired root certificate would just be swept aside. But instead of behaving gracefully or predictably, BoringSSL will crash. This problem was apparently patched by Electron, but since their patch is downstream of BoringSSL, no one else would easily see it or get to it. Patches from Electron apparently have not been accepted by Google, so RHEL, TrueNAS, and Mono have all had to try to take the Electron patch and manually apply it to their forks. Which they still must maintain on their own, because Google is not taking any patches that do not relate directly to how they are using BoringSSL. Indeed, their website makes it clear:

"BoringSSL is a fork of OpenSSL that is designed to meet Google's needs.

Although BoringSSL is an open source project, it is not intended for general use, as OpenSSL is. We don't recommend that third parties depend upon it. Doing so is likely to be frustrating because there are no guarantees of API or ABI stability.

Programs ship their own copies of BoringSSL when they use it and we update everything as needed when deciding to make API changes. This allows us to mostly avoid compromises in the name of compatibility. It works for us, but it may not work for you."

Where does this leave me?

Well, I was not happy to learn this - by the time I understood what the full problem was, I had burned almost five hours trying to get the damn thing working. I tried forcing the OS certificate store to sync again, and I tried refreshing both Arch's and Mono's certificates. When reading about what the RHEL community went through, I learnt that there was a workaround: manually blacklisting the errant root certificate so that BoringSSL would not have anything to choke on. This was also how I found out that Mono maintains its own certificate store, such that there is one set of tools for the OS, and one for Mono itself.
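For the record, that RHEL-style workaround translates to Arch roughly as follows - a sketch, and the certificate filename is an assumption you would verify first:

    # Blacklist the expired root so certificate chains never reach it.
    trust list | grep -i "DST Root CA X3"   # confirm the offending anchor exists
    sudo cp DST_Root_CA_X3.pem /etc/ca-certificates/trust-source/blacklist/
    sudo update-ca-trust   # rebuild the trust store without it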

At this point, I started looking at other options - partly because I was actually in a serious time crunch, and partly because I seriously thought about (and am still considering) switching my container to a different distro. After all, the version of Duplicati in the AUR is not the latest release, and it will not update using the in-app updater either. It was at this point that I discovered that ChromeOS offers a native solution to back up the entire container. More than I wanted to back up, but at this point, I thought - hell, why not? A single button, a folder on the device or in Google Drive with enough free space, and that's it. Not as efficient as my first scenario, but good enough, considering I burned five hours on what was supposed to be a twenty minute task.

A native, integrated backup solution for Linux containers in ChromeOS. What could possibly go wrong?

To be continued...

Things I am thankful for in 2021

This Thanksgiving, I really wanted to reflect on all the things I am grateful for. This year has been a serious ride - I was able to move past some major obstacles, and I came to understand the real value and meaning of things.

I am grateful that I was able to get my health issues sorted, or at least controlled and managed better. Last November, it seems like my doctors finally figured out what was broken, and a flurry of appointments and a surgery later, I feel much better than I have in a long time, and I can breathe much better. It makes a massive difference.

I am thankful for my friends - Shelby, Travis, Mark, Johnny, Bill - who are very encouraging to be around, and kind enough to keep me accountable and to keep me company along the way.

I am also thankful to the FOSS community, not least the Central Ohio Python group (COhPy) - Harlan, Mike, Greg, Joey, and the gang (as well as Andrew, Russ, Eric, and Jim) have been a great source of camaraderie.

Also, the Joplin, Zettlr, FreeBSD, and neovim projects have been a great help. I am thankful that, in general, there are still enough crazy madlads out there willing to make technology open and accessible.

I am thankful for my family - at least the part of my family that still wants to be family - for keeping in touch and keeping connected. It makes a huge difference.

I really appreciate everyone out there that continues to share information on what they know - good, quality information on how to get computers to do what you need them to, or how to install a bidet, or good ways to take notes, or good ways to cook fish - so much is posted online and is freely (or almost freely) accessible. For my part, I endeavour to also share more of what I am getting into so as to put back in what I took out, so to speak. That there are people out there who still make an effort to feed the commons is a big deal.

Happy Thanksgiving, and I hope that the holidays are festive and not stressful for you.

Adventures in distro-hopping

Earlier this evening, I helped a good friend with a System76 laptop that would not boot reliably. Actually, the saga spans about a year or so, and there is a lot more to it than just booting, but all he wanted was a small PopOS partition to grab the Oryx firmware updates without issue, with Manjaro installed alongside. Manjaro has the more recent packages he needs for what he does, and after troubleshooting various desktop configuration issues, it came down to the boot.

Looking back at the variety of issues he ran into, it seems that he encountered a few peculiarities of Linux, which seemed interesting to go over.

systemd delenda est

The biggest obstacle was the boot process - PopOS has moved from grub to systemd-boot to manage the boot process. Add one more to the list of reasons why systemd is a bad idea. In this case, it means that not only does systemd extend into the very early stages of the boot process, well beyond its role as the init of the OS, but it also fails to play nicely with other distros' grub configs. This is true even with manual manipulation of fstab.

This was a massive distraction, because it masked the real issue that was stopping my friend from getting anywhere. In theory, you are supposed to be able to install many flavours of Linux on the same device, and boot into whichever one you want - grub can handle it, as long as you tell it where everything is.

In practice? Ubuntu-family distros presume that they are the only distro installed. And if you have more than one of them - PopOS and elementaryOS, for instance - their boot managers will fight each other to the death for control over the boot. I observed similar issues with OpenSUSE on one of my machines, likely due to a similar fight over the boot entries.

Desktop environments are horribly inconsistent

GNOME, in particular, is not very good. PopOS has added so many of their own custom extensions to GNOME that they are now moving towards their own custom backend. I hope they succeed.

My friend tried Manjaro Xfce early on, and bounced around a few distros to see if he could get his preferred workflow going. PopOS and other GNOME based desktops were the only ones that consistently allowed him to use his external monitor without issues, but even then, there were still glitches in how settings got applied, how they could (or could not) be changed....

It took a lot longer than it needed to for him to find a trouble-free desktop environment on a distro that was not tied to Ubuntu LTS. I eventually helped him find the recent Manjaro GNOME - of all the current GNOME desktops, this one is the best so far. It plays nice with his external display and other hardware.

Too many distros blindly grab the upstream desktop environment, half-ass a theme, and do nothing else to ensure a smoother integration. It is not uncommon to see two or three tools for handling network configuration, for instance, but only one incomplete, bare-bones file manager. The GNOME Foundation being openly antagonistic towards the parts of the open source community outside their fiefdom certainly does not help matters, either.

The boot process is still a dark art

EFI has been around since 2005, yet all the utilities around it still seem like clumsy hacks, and a tonne of Linux boot documentation out there still assumes traditional BIOS. It is a pity, as it makes more trouble than there really needs to be at this point. Yes, there is libreboot/coreboot - System76 ships several laptops with it out of the box - but very few machines can be retroactively flashed with it. There really is more to life than a Thinkpad X220. Seriously.
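Case in point - even just inspecting what the firmware will boot still means a terse CLI tool. A sketch, where the boot order values are hypothetical:

    # List the firmware's boot entries, then reorder them.
    efibootmgr -v                  # verbose list, with the EFI paths
    sudo efibootmgr -o 0002,0001   # hypothetical boot order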

For as graphical and user friendly as many distro installs are nowadays, the initial boot configuration still uses the same command line tools that were in use in the late 1990's. My friend had to troubleshoot around both grub's interaction with EFI and systemd-boot, and it damn near drove him crazy. Fortunately, there is rEFInd.

<https://www.rodsbooks.com/refind/>

rEFInd is an open source, cross-platform boot manager. Nothing else. Not a boot loader, not an "init system" - just a tightly scoped, tightly focused tool. Once I finally convinced my friend to install it, he was able to tame the boot process, put systemd back in its place, and finally see what was holding him up...
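(On an Arch-family distro, getting rEFInd in place is mercifully brief - a sketch, assuming a mounted EFI System Partition:)

    # Install rEFInd and let its helper script register it with the firmware.
    sudo pacman -S refind
    sudo refind-install   # copies rEFInd to the ESP and adds a boot entry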

The last remaining obstacle

Manjaro would not boot, because somewhere along the way, pkgfile became corrupted or locked. So, the desktop would not load, because pkgfile would cause SDDM to crash out before it could even open an xsession. The full shock and horror of what this is - as well as two solutions - can be found in this forum thread :

[FAILED] failed to start pkgfile database update <https://forum.manjaro.org/t/failed-failed-to-start-pkgfile-database-update/31731>

pkgfile is a core dependency of Manjaro's zsh configuration, used for autocompletion, autosuggestions, and a few other things. It is a pre-built database mapping file names to the packages that own them, which allows zsh to be more responsive - querying the pkgfile database is faster than querying the entire system. When it breaks, there are basically two paths:

  1. Force a rebuild of pkgfile with 'sudo pkgfile -u'

  2. Switch away from zsh, and uninstall pkgfile

Because my friend is an old school traditionalist, he had already switched to bash. We removed pkgfile and a few other zsh-related items - et voilà ! The issue was resolved, and Manjaro boots now.
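For anyone following the same path, option two boils down to something like this - a sketch, assuming nothing else on the system depends on pkgfile:

    # Stop the updater unit, then remove pkgfile and its database.
    sudo systemctl disable --now pkgfile-update.timer
    sudo pacman -Rns pkgfile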

For the first time in a year, my friend finally has a reliable boot into an environment that has the packages he needs, and that plays nice with his external monitor. Yes, part of the reason it took so long is his stubbornness, but still, I think we in the open source community can do better. This really should not have been so difficult, and I look upon renewed scrutiny of the Linux desktop with keen interest.

How to close literally thousands of open browser tabs

Within the past fortnight or so, I have been taking an online course on Personal Knowledge Management (PKM) called "Building a Second Brain". It is run by Tiago Forte, and I have been following him for a while. I took one of his Udemy courses a while back, and I follow him on his Praxis blog.

One thing I learned about, that I had not considered, was the concept of a Reactivity Loop. The idea being, there are things that come up that grab our attention. Maybe it is urgent - maybe it is something less urgent that you have been wanting to do, but it nags at the back of your mind. In either case, the cognitive load predisposes you to react to something. A classic example might be social media notifications. Emails and phone calls fall into this category for a lot of people, too.

What grabbed my attention, though - my open browser tabs.

The problem

See, there was a live chat during this session, and when I mentioned that I had 500 tabs open (which is Safari browser's hard limit on iOS) - it got quite the reaction from the chat. It then occurred to me that each open tab was grabbing at my attention, something left unsettled. A reactivity loop. Not on the same level as social media popups, but a reactivity loop nonetheless. As long as those tabs stayed open, some attention would go to them - the cognitive load is still there.

Now, I am a voracious infovore. I am a very curious person, and I want to know everything about everything about everything. At the time of this revelation, I had :

  • 500 tabs open in Safari on my main iPad

  • 186 tabs open in Firefox on the same iPad

  • 386 tabs open in Safari on my iPhone

  • 55 tabs open in Firefox on the same iPhone

  • 700+ tabs open in Firefox on my MacBook Air, spread out across three windows...

You get the idea. Tons of open tabs with information that I found useful - information relevant to projects I am working on, projects I completed, and projects I planned. There were times when I had trouble selecting tabs to get to the one I needed, so I would just open another window. And given that I use multiple devices, I would use another device if it were more convenient.

Oh sure, I had bookmarks - but they were not organised well at all. It all just ran together. And there was a definite impact on the performance of my devices. But I actually did use those open tabs as a kind of reference, a loosely curated list of things I found useful. Clearly, however, that is not an appropriate use of the browser tabs.

The solution

Once I became aware that I needed to properly get through the browser tabs in a way that reduced cognitive load, I started looking for bookmarking tools. I wanted something that worked better than the browsers' default tools, something that worked similar to the way Delicious did. Remember del.icio.us ? Pepperidge Farm remembers...

Delicious would let you sort bookmarks several layers deep, but it would also check for broken links. This is very important for certain technical information, as vendors shuffle things around or move them behind paywalls all the time. Or perhaps a personal blog with a really good resource goes dark. There are other similar services out there, but they seem to be too heavy - too many other distracting features that get in the way. But then I found :

Raindrop <https://raindrop.io>

This app is cross-platform, has a mobile app, and has extensions for every major browser. It does everything Delicious did, without too much heavy ceremony. It is literally a bookmarks manager with Zapier and IFTTT integration, an API, and several other modern conveniences. But it does not try to do anything other than manage bookmarks.
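The API really is simple - here is a sketch of pulling back everything you have saved, per my reading of their public docs, with the token being a placeholder:

    # Fetch the first page of bookmarks from the special 'All' collection (id 0).
    # RAINDROP_TOKEN is a placeholder for a test token from the app settings.
    curl -s -H "Authorization: Bearer $RAINDROP_TOKEN" \
      "https://api.raindrop.io/rest/v1/raindrops/0"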

Breaking the reactivity loop

With Raindrop, I was able to bookmark all of the open tabs, then sort them into an Archive folder based on which device I got them from. The first step is important from a raw psychological perspective - I did not want to miss anything. The second step is important from a cognitive load perspective - only what you are working on in the moment should be in focus. Everything else should be hidden, and retrievable based on when you need to use it.

Archiving all of the open tabs stopped the same clutter from overrunning Raindrop, so that I can proactively use it to stop this issue from happening again in the future. A neat, orderly, curated collection of bookmarks is much better than a raw dump of all the open tabs. So, the initial raw dump goes into the Archive folder.

Once all the open tabs were bookmarked in Raindrop, I knew I could access them from all of my other devices - they do have a cloud sync feature. So to reduce cognitive load further, I also put all of my existing bookmarks into the same place in Raindrop. This allows me to totally nuke the browser bookmarks bar - or the browser install itself - and start over from a clean slate.

A few special notes about Safari

Firefox and Brave have both had tools to group tabs, bookmark multiple tabs at once, and manage bookmarks and tabs - these tools are fairly straightforward to use. Safari, on the other hand, was very much a tedious experience. The desktop browser was somewhat tolerable when trying to bookmark multiple tabs, but the mobile version of Safari had no way to do that at all. And tab grouping? Putting related tabs together so that you can move them to a new window or close them all at once when you are done? Not possible.

At least, not until iOS 15 and macOS 12 (Monterey).

Now, literally within the past month or so, it is possible to group tabs together, and bookmark multiple items very easily in both mobile and desktop Safari... so that's exactly what I did. The jump in performance and battery life on my iPad was significant. Grouping all the open tabs let me grab them and archive them once and for all - for while iOS does not have support for Raindrop as a Safari extension, I could send all the tabs in a group to my Mac Mini. All 500+ of them. From there, Raindrop happily ingested them all.

The aftermath

Raindrop shows me that, across all my open tabs from all my devices, as well as my bookmarks, I had 7.2k links. Of those, only 389 were broken, and approximately 700 or so were duplicates. I have successfully closed all the open tabs on my mobile devices, except for about 10 or 20. The only tabs I have open anywhere are on my Mac Mini, and my Chromebook - and I have been trying to start a new habit of closing the tabs, or sorting them into Raindrop when I am done.

The cognitive load is a lot lighter, as I can search my bookmarks instead of manually searching my open tabs. The reactivity loop around using my browser is certainly reduced, if not eliminated. For the past few days, I have been able to use my browser as a focused tool to work on my active projects, without tripping over or sorting through all of the other open tabs. The sun is shining, the birds are chirping, and Her Majesty's corgis are baying to serenade the new dawn upon me.

Other things to try

Besides Raindrop, there is OneTab <https://one-tab.com/>, which is very helpful in grabbing all the open tabs in a browser window, then giving you a list to export (or restore later - though in this situation, you would not want to restore them all without curating them first).

Instapaper, Pocket, and Readwise also come up - these are all in the category of "read later" apps. I am still not quite comfortable with Pocket, and all of these apps just seemed too heavy for what I needed.

Doom Emacs also comes to mind - Org Mode in Emacs is legendary, and they even have org-roam meant to make it even better for PKM. The Doom distribution puts in Vim keybindings, as God intended.

Finally, you could always just close all the tabs, and/or install the browser again from scratch - but where's the fun in that?

Powerlevel10k is awesome

Powerlevel10k makes zsh so much more bearable. And the additional details in the prompt give me a much better sense of situational awareness. It reminds me of the affordances that dumb terminals provided - it amazes me how many people who live in the Linux terminal do not seem to want a simple status bar that shows them the time, network status, etc.
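Setting it up manually is a two-liner, as I recall from the project README - a sketch, so adjust the paths to taste:

    # Clone the theme and source it from the zsh config.
    git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ~/powerlevel10k
    echo 'source ~/powerlevel10k/powerlevel10k.zsh-theme' >> ~/.zshrc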

Awakening from a long slumber

Well, after about a year or so, I got my blog updated, moved to a new machine, and ready to go. With Nikola, I had yet to figure out a satisfactory way to share the blog source with multiple devices - until discussing this with a friend. At which point, it occurred to me that I should probably use Gitlab.

Getting to this point, though, trying to get the perfect workflow, the perfect setup, the perfect circumstances for this ongoing project - it took way too long. Perfection is a hell of a drug. In many ways, large and small, it occurs to me that I need to be comfortable with small, incremental changes. And sometimes, good enough is good enough.

A lot happened over the past year - I got some major health issues sorted, and can breathe better than ever. I am now also a member of OWASP, and I look forward to posting more about that as well.

For workflow? I am still chasing the perfect flow, but it occurs to me that I should document the journey. At least that way, there will be more to show for all the work and exploration I have been doing. I am still enamoured with text based/terminal workflows - Vim is the best distraction free writing environment I have ever seen, although Emacs is also very intriguing. Doom Emacs = Emacs with Vim keybinds, and is quite a nice environment.

Not all who wander are lost, until they start demanding perfection.

WebDAV from the command line

Recently, I have been working on bootstrapping my very own cloud. I have found that using the services I already pay for and use is crucial in getting things done - but also that many ways of getting things done do not mesh well with a text-driven command line life. There does seem to be a renewed interest in doing as much from the command line as possible - which I am happy to see, and I look forward to continuing to share more.

One thing I have noticed is that so much of the world is going to web-based software-as-a-service - which is better than nothing, but it does mean that one must have a perfectly set up desktop environment for the GUI. On servers, I do not want a GUI, and some of my endpoints work better without one. The overwhelming majority of browsers are either very thick and heavy (like Firefox, Chrome, etc.) or have odd dependencies (Midori comes to mind - it is lightweight, but it often clashes with zeitgeist, and many distros do not resolve that conflict well). Lynx is awesome, text only, but can be tedious when accessing certain heavily styled sites - particularly since it cannot show graphics. An entire generation of web designers appear to have totally forgotten about this - or about screen readers, for that matter.

So, I have been paying attention to terminal friendly ways of doing things. As I discovered, moving files from one platform to another is not necessarily as easy as it could be, at least not out of the box. And until my private cloud is ready, there is not one place I can point things to - or is there ?

Now, my personal cloud does partially work, but as I discovered, Fastmail have provisioned some rather interesting features - including 10GB of storage with an active subscription. Further investigation reveals that not only can you deploy a photo gallery with that storage, but you can also store other files there. And not only can you access those files with a web portal, but you can use WebDAV as well.

WebDAV is a protocol that allows for file sync and transfer over HTTP. Many phones - both iOS and Android - already have this baked in, along with CalDAV and CardDAV (for calendar data and contact data, respectively). As an early adopter of the old Windows CE and Palm OS devices of yore, I can personally attest to how much easier life is with a fairly open standard for these things. On a practical basis, these protocols mean that I can move data from my mobile devices to my laptop to my server to my Mac - as long as suitable clients can be found. Using the GUI environment for what it does best, using my phone and its camera for what it does best, and using my servers for what they do best is a great thing.

Now, many places on the internet presume that you are using a GUI or mobile application for WebDAV. So, I was very excited to learn about a nice utility called Cadaver. This project is a command-line WebDAV client that will run on any Unix OS - I use it on FreeBSD - and it will run on macOS and Linux as well. While the Fastmail documentation makes no mention of this - and while I do intend to document this further - I was able to successfully use cadaver to connect to the 10GB provisioned as a part of my account. This is the perfect-sized staging area for bootstrapping my private cloud. I was able to use it to get screenshots off of both my phone and my netbook to the website :

/images/desktop.thumbnail.png

Above, a screenshot of my netbook Egodoge in WindowMaker, before I added RAM. Below, a follow-up in CoolRetroTerm :

/images/coolretrodesktop.thumbnail.png

And finally, a picture of a halictid bee that I encountered this past weekend, otherwise known as a sweat bee :

/images/sweatbee.thumbnail.jpeg

I have some cool ideas on how to document the process of using cadaver with Fastmail, and am working on making the steps repeatable and showable soon.
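As a preview, a typical session is short - a sketch from memory, so treat the hostname and filenames as assumptions to verify against your own account:

    # Connect to Fastmail's WebDAV endpoint and push a screenshot.
    cadaver https://myfiles.fastmail.com/
    dav:/> ls                           # list remote files
    dav:/> put desktop.thumbnail.png    # upload a local file
    dav:/> quit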