The Three Bandits

Digital painting of three raccoons raiding a vending machine

This year has been quite the anxiety-inducing one, and the best way I know of to relax is to paint. I’ve relaxed by drawing or painting for as long as I can remember. As a child my parents would just hand me a notebook and a pencil to keep me out of their hair, and my family would gift me three-subject notebooks for my birthday and Christmas. Truth be told they were as much a gift for my mother as they were for me. Today I just paint or draw before bed to relax and empty my brain before sleep. I’ll work on the same image for months. I started on the one above in late April.

I had been struggling for a while to come up with an idea for a painting, and I was just looking through random images. I ran across a photo of a gas station lit from above by a sodium vapor lamp, with the scenery outside the lit area being blue. I knew I wanted to do something with that kind of lighting; it just took much longer to figure out what the subject matter was going to be. The idea of raccoons raiding a vending machine came to me at random.

I think I had the most fun on this painting working out the lighting, so much so that I introduced more light than would normally emanate from a vending machine, including the area at the bottom where one raccoon is digging inside the machine for goodies. It sort of glows like the briefcase in Pulp Fiction. Another fun part was painting the labels on the snacks in the machine. They’re mostly vaguely like real-world products and yet not. Some are a bit vulgar, too. Of course!

The original of this image is 6136 × 8496 pixels and would be about 24 cm × 34 cm when printed as intended.


Growing a Neckbeard

These are quite interesting times for sure. I mentioned in my last post that I already had a couple of posts lined up by that point, but I didn’t see them as important anymore because of the COVID-19 pandemic and the seemingly perpetual dumpster fire that is my country. My thoughts on this haven’t changed any. I’ve decided to post this one anyway, mostly because I’ve let my blog go silent again and because I need to do things that don’t involve work and that keep my mind off of the daily atrocities occurring in my country. I almost didn’t hit publish on this post because it just seemed totally trivial. Maybe I just want to write something here no matter how inconsequential it appears to be.

This post has rested the longest in my backlog, having been started in January of 2019. That’s because what I wanted to write about changed a lot, and I made the mistake of starting to write it at the beginning of a transitional period instead of at the end of it. Back then I was just interested in drafting a review of various FOSS desktop environments, starting with GNOME. I have years of experience running Linux as a server, but I didn’t have a whole lot of experience running it as a desktop operating system. Granted, I’ve tried desktop Linux from time to time over the years, so I haven’t been entirely oblivious to it, but there is a huge difference between trying out an operating system and its desktop environment in a virtual machine and installing them on actual hardware. That’s before even attempting to perform daily tasks; one’s priorities change when running an OS as one’s primary. I wanted to take these desktop environments for a spin on a separate drive on my main computer and attempt to get actual tasks done with my multiple displays and other hardware. What happened next was surprising.

The Hackintosh Failure

I outlined my early history with the Macintosh, many of my grievances with Apple, and my initial findings in setting up a hackintosh in Knowing When to Move On. I won’t reiterate them here. I will, however, add to them. The experiment ultimately failed. The bootloader kept getting corrupted not long after I started on my review of GNOME, and I was incapable of keeping the system updated. One day I booted into Windows to play a game, and when I was done I rebooted into macOS and couldn’t boot it anymore, even after overwriting the bootloader from a backup. I was done. There are a lot of bullshit things one must do to get everything on a hackintosh working perfectly, and I really didn’t want to spend a long time getting everything back just like I wanted it. I was also stuck on macOS 10.13 High Sierra because of Apple’s disagreements with Nvidia, as I have an Nvidia card.

It also doesn’t help that the latest released version of macOS is garbage because Apple prioritizes new features over bug fixes. Hey, don’t just take my word for it. I am also concerned with all of the features and behaviors from iOS which have been creeping into macOS, especially Catalyst applications, which allow developers to essentially click one checkbox and have their iOS applications run on macOS. None of the existing applications which use Catalyst on Apple’s released operating systems are worth using, and it looks like that is going to remain the case in the near future. The Mac is going to be filled with applications by developers who think that clicking a checkbox to build for the Mac is sufficient for a release, and the number of quality native applications for the Mac is already smaller than it was a decade ago.

What I see in macOS 11 Big Sur doesn’t give me hope; it actually fills me with dread akin to what Windows Vista did in 2006, although admittedly not quite to that degree. I can see a future where I would be completely incapable of using a Mac as my primary computer because of Apple’s propensity for removing features in favor of half-baked and often completely broken replacements. After the keynote for Big Sur people were confused and worried, thinking Apple had completely removed the terminal from the operating system because of the way they demonstrated the Linux virtual machine. That didn’t end up being the case, but it’s telling that people could believe Apple would remove access to the terminal. It’s gotten to where every year people look for things which are missing in new releases of macOS as it slowly morphs into a desktop version of iOS. I don’t want to use iOS on my desktop; in fact I’d prefer never to use iOS at all.

Since I last wrote on the subject Apple released a new Mac Pro, an upgradable tower computer, which addressed one of my grievances in the prior post. Unfortunately, it is almost entirely made for the professional film industry and is priced accordingly. The new display intended for use with it is $5000 alone and doesn’t even come with a stand; the stand is $1000. Yes, that’s the correct number of zeros. The display itself isn’t VESA-compatible and requires an adapter to even mount to a third-party stand; the adapter is $200. The display has an extremely wide color gamut that, again, is really only useful to the professional film industry for color grading. I’m sure there are some other obscure uses, but the reality is that 99.999% of the market has absolutely no use for the display. Don’t get me wrong. I’d love to use one, of course. A display that can do Adobe RGB is quite alright with me, and that in and of itself is on the higher end of the display market; the Pro Display XDR can produce colors far outside the Adobe RGB color space. Their pro computers as they are today are luxury items instead of serving the professional market as a whole. At those prices Apple probably fancies itself the tech equivalent of Tiffany & Co. My computer isn’t for show; it isn’t a luxury item for materialistic idiots; it’s for getting work done.

The entire rest of Apple’s lineup isn’t suitable for use as my primary computer, and two years ago this problem is what brought me to hackintoshing. A short while ago Apple announced they’re switching Macs to their own processors like the ones they already use in the iPhone, iPad, and Apple Watch. I really do wish them luck in this endeavor; I just cannot follow them there for my primary computer. With this move Apple will lock down their computers even more, and it will make their monopoly in ARM silicon even stronger. Apple has a performance monopoly when it comes to ARM processors for the consumer market, as no Android device can come even remotely close to Apple’s CPUs (or GPUs for that matter). Hackintoshing is likely to become a thing of the past because of both the processor architecture switch and the fact that third-party offerings don’t measure up in performance. Needless to say it’ll be a multi-year process and likely slower than their PowerPC to Intel transition. I am, however, excited by the move as I wonder what they have planned for Macs now, and I’m hopeful they’ll start producing hardware that is worth purchasing again and that doesn’t require taking out a mortgage. I just don’t see myself going back anytime soon for my primary. I don’t trust Apple to make consistently good hardware anymore. I don’t want to compromise again by buying an all-in-one computer that becomes useless when just one component of the unfixable-by-design machine breaks. The nearest Apple Store to me is 2.5 hours away.

No, It’s Not the Year of the Linux Desktop

The Year of the Linux Desktop is thrown around as a running gag. It’s even used as an insult to Linux users by Apple and Microsoft fans. There are very few computer manufacturers selling computers with Linux preinstalled, and that isn’t likely to change in the near or distant future. The days of the desktop computer for anyone but enthusiasts are waning. Most people are not buying personal computers, and that makes perfect sense because most people just need their computers to be appliances; phones and tablets are perfect for that. I am not new to Linux. I’ve been using it for years on my servers, but I am no wizard at that by any means. This website is served on a Linux server, and all of my websites for the past 20 years have been served on Linux. I have also run various distributions on my secondary computers over the years, but my actual time with them has been limited. My primary computer has been consistently macOS-based for the past 14 years. That has now changed, and I can almost feel the unshaven hairs growing longer on my neck as a result.

The burning question probably is, “Why Linux?” I don’t have any ideology about computer software. I love free software, and I’ve even contributed to some projects myself, but I will gladly buy good software if it does what I need. The honest truth is there’s nothing else to really move to. Windows 10 is bloated, slow, woefully insecure, costs ridiculous amounts of money, violates its users’ privacy, and umm… those fucking forced updates. I’ve already described macOS’ faults in detail above and in previous posts, so that doesn’t need repeating. It does seem like I am moving to Linux just because I feel like I have no other option; that’s not true. I am quite happily using it, and I am enjoying learning things about Linux that only come from using it as a primary operating system. It makes me feel like it’s 2006 all over again, abandoning Windows completely and using the first-model Mac Pro as my primary. What made me love Mac OS X was its Unix underpinnings; while it presented a working environment that didn’t require access to the terminal at all, the terminal was there for power users, and the Unix folder structure — while hidden for everyday users — was there. I could automate repetitive tasks really easily without writing code in something like C, could explore the internals of the operating system, and after Homebrew came out installing software required just a single command.
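For anyone who never used it, that single command looked something like this (the package here is just an example):

# Install a command line tool, along with everything it depends upon, via Homebrew.
brew install imagemagick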

What Now?

Linux is greatly fragmented, and that is both its greatest strength and its greatest flaw. The fragmentation provides users with a lot of choice between different approaches, but that choice also makes it ridiculously hard to decide what to use as an end user — at least it has for me. The choices are nearly endless, as one can customize literally every aspect of their operating system. Given my affinity for macOS, one might think that out of everything on offer I would have ended up using elementary OS. I didn’t.

While I don’t want to go into great detail on my assessments of different desktop environments because I might write about them later, I will say a bit about my distribution choice. I have tried most of the popular distributions. Ubuntu is the most well known, if not the most popular. I currently run Ubuntu on my server and really like it for that, but I found I did not like it as a desktop distribution because most of the software available in its repositories is outdated. In the Linux world distributions are typically either fixed release or rolling release: the former is updated in versioned releases, the latter gradually over time as individual packages themselves are updated. Ubuntu is a fixed release OS like macOS is. However, unlike on macOS, all applications, whether they’re part of the OS itself or not, are traditionally installed via Ubuntu’s repositories; between point releases of the operating system they aren’t updated as frequently as the application developers update them, if at all. Ubuntu has a solution for this shortcoming, Snaps, which are a way to bundle applications and distribute them independently of the operating system. Questions about Canonical’s absolute control over the distribution system aside, I find Snaps overengineered and quite like trying to mend one broken leg by cutting the other off. I really would like to avoid them and other things like them for the time being, so I quickly realized I would like to use a rolling release OS.

I narrowed down my list to openSUSE Tumbleweed and Manjaro because they both are rolling release, the software in their repositories isn’t bleeding edge, and packages are tested before release, which I quite like. I ended up going with Manjaro because almost all of the applications I like to use were available without adding additional repositories or resorting to Snaps, Flatpaks, or AppImages; the rest were on the AUR. It also didn’t hurt that Jeff was fed up with Windows by this point and jumped head first into installing Manjaro while I was still taking my dear sweet time trying to decide. I eventually landed on KDE Plasma as my desktop environment, and I am mostly happy with it. I can do almost everything I need to do with a computer on Linux, and with free software at that. In fact most of the programs I used on macOS are available for Linux, so I am even using the same applications. For others, such as the macOS iTunes/Music app, there are alternatives for Linux which are actually quite a bit better. Not every alternative I have found has been better, though; email applications on Linux are really lacking compared to macOS. I am using Evolution at the moment until I can find something less bloated and antiquated in user experience while still supporting CardDAV.
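As an aside, day-to-day package management on Manjaro is refreshingly simple. Here’s a rough sketch of what it looks like; the package names are examples, not a log of my actual setup:

sudo pacman -Syu              # full system upgrade on the rolling release
sudo pacman -S krita          # install an application from the official repositories
pamac build some-aur-package  # build and install a package from the AUR with pamac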

I, unfortunately, haven’t been able to find good alternatives to the Adobe Creative Suite. GIMP is the most well known Photoshop alternative, but I absolutely hate using it. Its user interface and experience are quite frankly really weird and janky. Color management also only works on a single display, and feeding it color profiles to use causes it to crash. Without color management it’s pretty much useless for me. It also cannot work in CMYK or L*a*b*, which makes it not useful at all for printing or for manipulating color in photographs. Inkscape looks and works like CorelDRAW from 1999. I’m sure it’s quite capable, and I have tried to use it quite a lot, but I get frustrated trying to do even the most basic of tasks. I just don’t want to fight its awful user interface to get work done. Krita is quite possibly the best painting application in existence, free or not, and I have already used it in my work. For painting I feel like I’m quite covered, but for everything else I’d need Adobe’s apps. I am currently running Adobe’s applications in a Windows virtual machine guest. This presents a problem, of course, because Adobe’s applications make liberal use of GPU acceleration, and there’s a huge difference when there isn’t any. The solution to that is GPU passthrough using QEMU. I can give the virtual machine a secondary GPU and a display, and the virtual machine runs almost as if it’s on bare hardware. Barrier is then used to share my mouse and keyboard with the virtual machine. Works great. Most of the time I am just doing something lightweight that doesn’t really require GPU acceleration, so I can just run the same virtual machine in a window using the default SPICE video.
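The passthrough setup amounts to reserving the secondary GPU for the virtual machine at boot so the host never touches it. A rough sketch of the idea, with placeholder PCI IDs (the exact steps vary by distribution, bootloader, and CPU vendor):

lspci -nn | grep -i nvidia    # find the card's vendor:device IDs (the GPU and its audio function)

# In /etc/default/grub: enable the IOMMU (amd_iommu=on for AMD CPUs) and
# bind the secondary GPU to the vfio-pci stub driver at boot.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio-pci.ids=10de:1b81,10de:10f0"

sudo grub-mkconfig -o /boot/grub/grub.cfg   # rebuild the GRUB config, then reboot

After that the card can be handed to the virtual machine as a PCI host device in libvirt while Barrier handles the mouse and keyboard.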

In some people’s opinion my virtual machine setup wouldn’t be preferable to just running Windows as my primary. I disagree. I can strip Windows 10 down to just what is required to run the Adobe Creative Suite, and I don’t have to worry as much about getting the OS configured just how I want it or about its myriad software failures that pop up at the most inopportune times. I keep a backup of the VM, so when something fails I am not stuck reinstalling everything or searching the internet for esoteric Windows errors. It’s just there to run Photoshop, Illustrator, and InDesign when I need them, whichever way I wish to run them.

A big disadvantage Linux has compared to macOS is color management. Everything one sees on the screen on macOS is color managed and has been since 1993. Regardless of the display’s capabilities, the colors on the screen are clamped to sRGB by default. This means that on displays which support color gamuts larger than sRGB, colors in the user interface don’t look oversaturated. Applications which require support for additional color profiles, such as Photoshop and web browsers, can access them and apply them to their open documents, but the UI remains sRGB. Linux (and Windows for that matter) allows for custom color profiles, but it only applies LUT values and doesn’t clamp the UI to sRGB, leaving any color used in user interfaces oversaturated on my display. There is no way to fix this, and explaining the problem to people who have never used high-gamut displays is like trying to explain blue to a blind man. I can live with this shortcoming mostly by using a neutrally colored theme, which I would be doing anyway to avoid having the UI affect my perception of colors when painting or designing. Most applications which really need it, like Krita and web browsers, correctly transform the colors and not just LUT values.
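To illustrate the distinction: the most the system can do globally under X11 is load a profile’s calibration curves into the video card’s LUT, for example with ArgyllCMS. The display number and profile path here are examples:

dispwin -d 1 ~/.local/share/icc/my-display.icc   # load the profile's calibration (vcgt) into display 1's LUT

That corrects the display’s response curves, but it does nothing to map sRGB user interface colors into the display’s wider native gamut; that transform has to happen inside each color-managed application.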

Wacom tablets are supported out of the box on Linux, which is quite nice, with a driver that is quite well documented and doesn’t exhibit the application conflicts the official driver has on Windows and has been having in recent years on macOS. However, depending on the desktop environment one is left with varying degrees of interfaces for configuring the tablet, stretching from no GUI at all to fully featured. Xfce has no tablet configuration tool. Cinnamon has a fully featured one. GNOME’s and KDE Plasma’s are problematic, although in different ways. After struggling to get what I wanted with Plasma’s tool I ended up writing my own. Aside from initial configuration I have no need for a GUI tool, so it works well and is easy (for me) to modify if I want to change its configuration in the future. While initially not having a GUI configuration tool for my tablet was a problem, the fact that I am able to easily write a tool myself because the driver is accessible via the command line is a huge advantage Linux has over either macOS or Windows.
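My tool is really just a wrapper around the driver’s command line interface, xsetwacom. A sketch of the kinds of commands involved; the device and output names below are from a hypothetical setup, not my actual hardware:

xsetwacom --list devices   # show the stylus, eraser, and pad devices the driver exposes

# Map the stylus to a single display instead of stretching it across the whole desktop.
xsetwacom --set "Wacom Intuos Pro M Pen stylus" MapToOutput DP-2

# Make the lower stylus button a right-click and soften the pressure response.
xsetwacom --set "Wacom Intuos Pro M Pen stylus" Button 2 3
xsetwacom --set "Wacom Intuos Pro M Pen stylus" PressureCurve 0 10 90 100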

One big advantage over macOS is that I don’t need to boot into Windows to play most PC games. There are quite a few technologies that allow Windows games to run on Linux as well as or better than they do on Windows. There is of course Wine, and Steam has a thing called Proton, based upon Wine, to run games on Linux. On top of that there are applications such as GameHub and Lutris which are designed to make it really easy to configure and manage games, especially ones which weren’t bought on Steam. I like to play the occasional PC game with my girlfriend, so it’s really nice not having to boot into Windows to do so. So far pretty much every game I’ve wanted to play runs just fine. I refuse to buy games with anticheat malware or spyware; they don’t run in Wine/Proton due to the nature of what they are, anyway.
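For games from outside Steam the underlying mechanics are simple enough to drive by hand, which is essentially what Lutris automates per game. A minimal sketch; the paths and installer name are hypothetical:

# Give each game its own Wine prefix so installs can't step on one another.
export WINEPREFIX="$HOME/Games/prefixes/example-game"

wine ~/Downloads/example-game-setup.exe                          # run the Windows installer
wine "$WINEPREFIX/drive_c/Program Files/Example Game/game.exe"   # then launch the game itself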


Whew. This, like my hackintosh before it, is an experiment, but it looks like I’ll keep it going. It’s a good start, anyway. I would like to write more about my findings on Linux in the future and maybe do a few projects of my own in the Linux world. I can say I haven’t been this happy using my computer in quite some time, and the only sacrifice is that I need to run Adobe’s applications in a virtual machine. It’s been nice finally getting this post worked out. Hopefully the next one won’t be as difficult to author.


Nonsense

Casein painting of a river otter floating in water

I had a couple of other posts planned, but they both seemed quite unimportant while people are dying in the COVID-19 pandemic. Art, however, is important. It’s definitely one of the best coping mechanisms I know of for getting myself through difficult times, anyway.

This image has a bit of backstory to it. A few years ago my girlfriend was into taking various personality tests, and there was one that associated you with a particular animal based upon your personality traits. I don’t remember what it is called or I would link it here, but there was a methodology to it, and very detailed thought processes went into the animal selections. It even had a very active forum with people discussing the topic. Upon taking the test mine came out as a river otter, my girlfriend’s a red panda. Months later I bought her a stuffed red panda, and she reciprocated with a river otter for me. She selected “Toffee” as the name for her red panda, and I named my river otter “Nonsense” after the frequently-used pun “otter nonsense”.

I have been experimenting with casein paint a lot lately. My previous post on this blog, in fact, is of a polar bear painted in casein. I have a few others that I will get around to posting here, perhaps in a single post with all of them? I am not too sure yet. I actually began this one in late December while my girlfriend spent some time with me over Christmas. We painted together a few times, and for the longest time this was artist-taped to the top of the cedar chest in my office, barely worked on, as I moved on to other projects and other paintings while I devised in the back of my mind what I wanted to do with the piece. It was initially painted in watercolor using a set of Winsor & Newton Cotman watercolors my girlfriend brought with her. My intention was to use both watercolor and casein, but it didn’t really turn out that way in the end. I initially had a more abstract water background going on, reminiscent of swimming pool caustics, that I wasn’t happy with. When I finally got the nerve to work on it again I brushed masking fluid over the otter and painted over the top of the background with casein.

The extremely vibrant blue used in the water is not a color available in Richeson’s tubes. It’s a primary blue cyan made from some Sennelier pigment I bought off of Dick Blick. To make casein paint I simply mix some pigment into casein emulsion, and I have casein paint to paint with. I might experiment with more pigments in the future, as Richeson’s color offerings are a bit slim.

As for the subject matter itself: river otters sometimes will float on their backs, but as far as I know they never hold their food while in this position. I felt like I wanted him holding something, so in his paws I put a bluegill for him to munch on later. They are a fish prevalent where I live in Louisiana.


Polar Bear

Casein painting of a polar bear sticking its head out of the water

What? Three posts in one week? What is the world coming to? Well, before you die from shock I should tell you that this is older content I’m finally posting here now that I’ve updated the website.

Something that’s been happening over the past year is that I’ve started dabbling in traditional media again after a long hiatus. Kate started this by wanting to experiment herself, and she encouraged me to as well. I first worked with gouache again, but due to James Gurney’s videos where he paints plein air in casein I decided to give casein a try, and I have been painting in it a lot. This polar bear is one of the results.

As its name suggests, casein paint has something to do with milk: the paint is bound with casein, a milk protein, dissolved in an alkali like borax. Casein paint has a particular smell to it that’s not horrible, but it is indeed peculiar. It is probably one of the oldest kinds of paint there is, and it was especially popular with illustrators because it dries to an even consistency and is perfect for photographing for print. Acrylic paint largely replaced it for that purpose in the 1960s. There is only one manufacturer of casein paint today — Richeson.

I don’t quite remember at this point how I came to paint a polar bear; I think Kate just suggested it at random when I inquired about what I should paint next. I think the piece shows off some of casein’s strengths. I painted the bear completely to finish before I ever worked on the water, taking care to paint the fur differently above the water than in it to give the appearance of being underwater. I waited until the bear was dry, then painted the water in washes of color over the top. You can’t do this in gouache because the paint below will reactivate and mix with whatever is painted on top. You can in casein because the paint seals when it dries, as acrylic does. Casein paint can still be reactivated, but only if you add a drop of ammonia. This gives it a versatility that borders on that of oil.

The painting is 9″×12″ in casein on watercolor paper.


A Fresh Coat of Paint

A couple of days ago I posted the first update to my weblog in a year and some change. The reason is that I was mostly busy with work and projects. I actually wrote several drafts for this website, but nothing ended up being published. Things kept changing, and I also wasn’t feeling the website anymore. The scripts I wrote to update the website years ago were stretching their usefulness, and I felt like I needed to write a new set of tools for updating the thing before I actually published anything. Client work also pushed things back, but after a long time of saying I’d do it and not doing it, I finally just did.

Back to Basics

I wasn’t ever truly happy with what I made last time. The overwhelming blueness of the design wasn’t a good idea, and I spent way too much time with JavaScript making unnecessary things happen on the page. I didn’t overengineer the JavaScript like we see a lot of these days on the Web, but superficial things were done simply because I could. I wanted to simplify everything and make the content stand out more, especially if it contained color. Often the best design comes out of limitations, so I set myself some limitations before designing the website’s layout:

  1. The only two colors which may be used on most things are the page’s color and a text color.
  2. A tertiary color can be used for accents but only very sparingly.
  3. No shades or tints can be used for any of the colors.

These limitations forced me to use typography to create “color”. I could, for instance, use a lighter or heavier weight to make text appear lighter or darker without modifying its color. This is how typography should be used anyway. That led me to decide not to use my own font, Ook, anymore. Ook only comes in regular, bold, and italic variants. While it contains a lot of special characters it doesn’t come in a lot of weights, and since my restrictions demanded a font with a lot of weights I had to retire it. When I first started creating it almost a decade ago there were no free fonts that had advanced typography features with ligatures, small caps, old-style, proportional, and lining numerals, subscripts and superscripts, etc. Nobody but typography nerds even considered these features important or possible, but many web browsers supported them. I had to create my own font if I wanted them, so I did. The font was experimental, and it wasn’t well designed. I even said so when writing about it initially. It was my first attempt ever at a font, and I changed it a lot while designing it. Today there are a lot of fonts that do support advanced typographic features, including the one I chose for the body copy of this website — Noto Sans.
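In practice that means reaching for font-weight where I previously might have reached for a grey. Something along these lines; the selectors are illustrative rather than lifted from my stylesheet:

/* Create lighter and darker "color" with weight alone; the color itself never changes. */
body { font-family: "Noto Sans", sans-serif; font-weight: 400; }
.muted { font-weight: 300; }   /* reads lighter without a tint */
strong { font-weight: 700; }   /* reads darker without a shade */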

Noto Sans was commissioned by Google to support most languages. Its eventual goal is to support them all, and that is where its name comes from: sometimes when there is no font installed on a system to support a particular character, a replacement symbol called tofu is drawn instead. Noto is short for “no tofu”. I have been a fan of the font since first seeing it shortly after I finished my own, and as it has matured over the years a lot of errors in the font have been fixed and missing features added. I am sad I won’t be using a font of my own creation anymore, but it’s past time to retire it. The best part about Noto is that it is truly open source, being licensed under the SIL Open Font License.

Most websites designers make for themselves these days tend to be pretentious, ostentatiously showing off at the expense of user experience. Just visit almost any website featured on Awwwards, where the name of the game is bombarding the viewer with flashy interactive special effects instead of providing them with information in an easy-to-understand manner. Forget making anything on the page accessible to those with disabilities. I won’t win any awards with my design here, of course. When I write a blog post about my newest work I want the work to be what the eye is drawn to, not the website’s user interface. If that means this website is “boring” then so be it.

Under the Hood

This website makes liberal use of CSS grid layout. My last redesign used it, but only sparingly and with extensive fallbacks because most browsers that supported it then were quite buggy with it, especially Firefox. I don’t have any sort of fallback for grid this time. Without support for CSS grid the website simply falls back to traditional flow content, which is quite acceptable in my opinion. It keeps the CSS itself cleaner and easier to manage, too. I have also tried to make my website friendly to printers with print styles and to people who prefer dark themes. This website (and its favicon in supporting browsers) will change when dark mode is turned on.
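A condensed sketch of both ideas follows; the selectors and custom property names are illustrative, not copied from my stylesheet:

/* Grid where supported; browsers without grid ignore these declarations
   and lay the page out as normal flow content. */
main {
  display: grid;
  grid-template-columns: minmax(0, 70ch) 1fr;
  gap: 2rem;
}

/* Swap the page and text colors when the reader prefers a dark theme. */
:root { --page-color: #fff; --text-color: #111; }
@media (prefers-color-scheme: dark) {
  :root { --page-color: #111; --text-color: #eee; }
}
body { background: var(--page-color); color: var(--text-color); }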

So, with that said, of course there’s no JavaScript running on the page to make crazy shit fly at you or pictures materialize and drop from your mouse pointer. There’s not much running at all, even. I don’t even have analytics. I’m pleased if people read my blog, but I’m not interested in tracking anyone; there’s far too much of that going on around the Web. That’s not to say there aren’t any interesting JavaScript things on the website. I just leave it for where it is necessary. For example, when I wrote Color Modes I experimented with very basic WebGL to show an RGB cube and HSB cone along with an interactive image diff, allowing the user to slide a bar across an image to reveal a secondary image and show differences between the two. These are still present in the original post, but they have been re-implemented using Web Components and delivered using JavaScript modules. They work like this:

<script type="module" src="/scripts/js/elements/RGBCube.js"></script>
...
<rgb-cube>
 <div class="notice">
  <p>Sorry, your browser appears to be incapable of viewing this 3D content properly. There is a <a href="https://www.youtube.com/watch?v=mI51DTNh11E" title="RGB Cube">YouTube video</a> available of the content if you are interested.</p>
 </div>
</rgb-cube>


Boom.

Only recent browsers can load JavaScript modules, so modules work well for graceful degradation: old browsers won’t load anything at all because they haven’t an idea what a module type is. Web Components are incredibly flexible. With that one module I can put many RGB cubes on this page (and possibly crash your browser in the process because of all that WebGL). All I need to do to make one appear after including the module is drop an rgb-cube element anywhere in the document; the stuff inside is just fallback content, intended to be shown if the JavaScript didn’t fire or it failed. This didn’t come out of nowhere. I’ve been experimenting with Web Components for the past couple of years in an attempt to create a Web interface for The Arsse instead of using something like React. I should probably get on that because if I don’t Jeff will likely want to throttle me; actually, now that I think about it, he probably wants to do that anyway.
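The skeleton of such a module is small. Here is a simplified sketch of the pattern, not the actual contents of RGBCube.js:

// RGBCube.js (simplified): register a custom element for every rgb-cube in the document.
class RGBCube extends HTMLElement {
  connectedCallback() {
    const canvas = document.createElement("canvas");
    const gl = canvas.getContext("webgl");
    // If WebGL isn't available, leave the fallback content in place.
    if (!gl) {
      return;
    }
    // Replace the fallback content with the live canvas.
    this.textContent = "";
    this.appendChild(canvas);
    // ... compile the shaders and draw the cube here ...
  }
}
customElements.define("rgb-cube", RGBCube);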

I expect to keep this design for a while. There are some fixes I want to make to things I’ve noticed since going live with it. I also would like to write more, especially about what I’ve been up to lately. I know I’ve promised it before and didn’t deliver. I apologize for that. I’ll definitely try especially since I’ve made it ridiculously easy for me to do so.


The Way It Was

Illustration of a barefoot boy on a 1980's era three-wheeler with a cane pole and a cricket basket

It’s been over a year since my last post, and things have changed on this website. I’ll write about it in another post, though. Instead, this one is going to be about the image that is just above.

This is the album cover for Frank Foster’s newest album The Way It Was. Frank contacted me last October wanting me to illustrate his album cover. The album itself is a detour from his usual work, being very acoustic and going back to the roots of country music, so he wanted the album cover to be a reflection of this. His direction was to have the cover be painted and depict him as a child on an old 1980’s Honda 185S 3-wheeler. He’s from Louisiana like me, so the dirt has the appearance of a lot of red clay in it and the sky is perpetually threatening rain. I treated the composition like a traditional painting, straight on and staged almost like those old paintings of generals on horseback, with the “camera” slightly low. I really enjoyed working on this. It was especially a challenge to paint his likeness from a few photos of him as a child and to paint the 3-wheeler. Frank was very descriptive in what he wanted, and he was a joy to work with.

I painted this entirely in Krita, which is an excellent (and FREE) program designed strictly for digital painting. I’ve tried Krita over the years, but I never could really work in it because of its lack of hardware acceleration; the application was sluggish for me because of the enormous sizes I typically paint at. It has had hardware acceleration for a while now, so I put it through its paces painting this one. Don’t let the free aspect of the application fool you; it is a fully professional application with color management, and its brush customization puts Photoshop to shame. I plan on using it a lot more in the future.


If you’re interested go out and buy his album. It is available on his website.


Birdshit

Here’s yet another post on this blog where I describe a computing change. The first was where I moved away from 1Password, and the second was where I built a hackintosh. Now I’ve moved away from Twitter. This has been a long time coming, and recent actions and inactions by Twitter have made me introspective about my use of the platform and about social media in general. Actually, this decision happened months ago. I am just now getting around to putting it in writing.

The Killing of the Web

I was an early adopter of Twitter; many of us webheads were. We saw the potential of the platform as kind of an interactive RSS feed, and we were entirely the reason for its success. Many of its features were invented by our small community from retweets to hashtags and even the term “tweet”. When the masses flocked to Facebook away from MySpace we flocked away from our blogs to Twitter. In doing so, those of us who championed the Web have been instruments in killing it because of our own lethargy and addiction to alarming personal conduct and scandal.

Prior to Twitter we had our own blogs where we self-published our own work. This was a realization of the original premise of the Web, the democratization of information where everyone could become their own publisher. We had a decentralized community of writers and artists who wrote, commented, and discussed ideas in the open. This is entirely how the Web standards movement happened. When we started using Twitter we stopped writing in our blogs and really stopped trying to help each other develop and learn. Because of this there is an entire generation of Web developers who know nothing of Web standards, know nothing of Web accessibility, and really know nothing of how and why the Web works the way it does. They treat the Web as if it is one of the many other proprietary app development platforms when it is in reality so much more. They yell about how CSS sucks and use classes for everything instead of using the cascading nature of the language to their advantage. As a result Web development has devolved into a JavaScript-library-of-the-week hellhole where people are hired to positions based upon their proficiency with a particular library and not the language and how it should work with and not against HTML and CSS.

I’ve never used Facebook for longer than a week or so. Its toxicity was apparent long before Twitter’s was. I was bombarded with racist garbage from family and acquaintances who wanted to “friend” me on the platform and got into an argument with someone who was bullying a friend. I closed my Facebook account and have never looked back. Social media is addictive; we are social creatures, and these services are designed specifically to prey upon — to twist around Abraham Lincoln’s eloquent words from his first inaugural address — the worst angels of our nature. I have never received peer pressure so intense as I have from people livid at me because I do not have a Facebook account. So many people willingly spew every little aspect of their private lives online for everyone to see that it is now expected, and when you do not do so you are berated for it, because others cannot spy on and comment on your private life or count their likes and retweets as validation of their points of view. It’s appalling behavior, and the Web has become a cesspool controlled by a few social media websites instead of the platform of freedom and democracy it was intended to be.

Amorality

I am not pointing my metaphorical finger at others here and proclaiming my moral superiority. I am just as guilty as everyone else. Especially over the past couple of years my content on Twitter has mostly consisted of political retweets and comments on such. We Americans are experiencing a national dumpster fire spearheaded by our — and it pains me to call him this — President and his Nazi allies in the Republican Party. Britons are watching as both of their major parties cause their country to circle the toilet and break from the European Union, without any means whatsoever to do anything about it. “Brexit”, the 2016 U.S. Presidential Election, and the Russian war on democracy that took place in both events have turned everyone’s Twitter feeds into garbage, and I apologize for contributing to it.

However, political commentary itself isn’t the worst of the disease that has spread through Twitter. The worst part of Twitter is in fact Twitter’s own refusal to take responsibility for its platform. This complete lack of professional ethics in computer science permeates every aspect of Silicon Valley and has created very public platforms where the most degenerate facets of our societies can have a voice and gain the legitimacy they so desperately seek. Nazis and other mentally unstable individuals are allowed to bully and prey upon others with impunity, while time and time again the victims and those who report such behavior are the ones punished. Donald Trump can on a daily basis violate Twitter’s terms of service by promoting violence and hate, but they don’t dare ban him from Twitter; he makes them too much money through the outrage generated by every single tweet that travels from his diseased mind to the fingers tapping on his phone. They openly promote him, even. Twitter isn’t alone in this. For far too long many technology companies have been perfectly content with making money from those who promote hate and fear; they are headed by individuals who are every bit as devoid of morality as those off of whom they make bank.

This promotion is carefully designed. Facebook pioneered the use of algorithms to determine what shows up in its users’ feeds. Originally Twitter feeds consisted of a chronological stream of content from, and only from, the people one followed. Today Twitter uses algorithms and machine learning akin to Facebook’s, where feeds consist of content the user did not ask for mixed in with followed users’ content in no logical order whatsoever. It is a frustrating user experience, but the worst part is that there’s a method to the madness. The content is scrupulously chosen to generate the maximum level of engagement so advertisers will get the most out of their money. Twitter doesn’t care what kind of engagement is generated, and outrage seems to be the most lucrative. In other words, Twitter and other social media services are specifically designed to make you miserable and unhappy in order to make these companies and their advertisers money.

These same companies also make money not only by selling ads but also by selling the very information people post to their services to advertisers, and apparently also to organizations that use the data to manipulate elections. The scariest part about all of this is that most people do not care and have no concern for their own privacy, and they do not care about the amoral behavior of the companies whose services they use. Because of this most governments aren’t concerned with holding these companies accountable for their actions — or, in cases of abuse, inactions. No one is holding these people accountable, and that is scary.

Federation

For years Twitter has allowed access to its service through its API. I and many early adopters of Twitter have used third party applications to access the service. On August 16, 2018 Twitter shut down necessary features of its API to third party applications in an attempt to force users onto their own applications and website, so we third party application users would be subjected to the algorithmic feed of enragement like the vast majority of their user base. I simply refuse to use their first party offerings for reasons that should be apparent by reading this essay to this point, so I will no longer actively use Twitter. This is all for the best because I cannot in good conscience continue to contribute to the tempest of filth that the service has become. I did not use the service at all for a month, and quite frankly it has been mentally liberating. I have since tweeted promotional stuff there, but I will not interact there anymore.

Possibly the biggest mistake we have made concerning the Web has been to allow major communications platforms to be controlled by single entities. We are entirely naive to think that if we use a service, promote a service, and contribute features and ideas to a service, it is largely ours. We even expect the data on those services to be ours and private, too. We are genuinely shocked when we learn the hard way, time and time again, that this is not the case. What idiots we are. Almost every service we use today is centrally controlled by a corporation using proprietary technologies. We have absolutely no control over these products because we neither directly pay for them nor truly own the data stored on their servers.

Free and open source federated software is our solution to this problem where users of a service are distributed over many servers and yet are still capable of communicating with one another. This used to be the typical behavior of many Internet-based communication platforms years ago. Email is federated. People with Gmail accounts can communicate with people on Yahoo! and with anyone who runs an email server. This is how Facebook and Twitter could work. Facebook and Twitter could even communicate with each other, but they don’t.

There is pushback against this from many people who believe a centralized service is the way to go, and until recently they seem to have been winning out. Moxie Marlinspike, the founder of Open Whisper Systems, is one such person. He wrote a lengthy blog post assailing federated services a while back. I won’t pick through the post debunking it because it isn’t the focus of what I’m writing here, but it is nonetheless worth a read because it is a fantastically constructed piece of spin. Of course he is against federation; his entire business model is predicated upon a centralized system.

There are actual drawbacks to federation, of course. It is more difficult to develop for because you have to account for other servers in the federation. And if one signs up for a hypothetical federated Twitter on feditweet.com and it shuts down in six months, one would need to sign up somewhere else and start over. Both of these issues are surmountable.

Mastodon

Mastodon is a free and open source federated microblogging service developed by Eugen Rochko. It implements features similar to Twitter’s, but not exactly. Aside from obvious differences such as the lack of advertising, tracking, and a business model, its features are designed to curtail some of Twitter’s most reprehensible behavior.

One of the absolute worst features of Twitter is the ability to add your own comment to a retweet. A typical use of this feature is to retweet something a Nazi has said along with a comment explaining how horrible it is. We’re all guilty of doing this. It does nothing but give them what they want, promoting the fascist’s point of view while filling your followers’ feeds with what are literally statements of pure evil. Mastodon instead has what are called “boosts”, which perform the same function as a retweet on Twitter but deliberately do not allow for commentary. The name of the feature correctly describes what it is for — boosting other people’s content; it is always a positive action rather than what retweeting has become over time, a predominantly negative one.

Mastodon also has a feature that allows people to hide content behind a content warning. This is most notable on the service today when discussing politics or instance administration. It is, however, useful for all sorts of things, from hiding spoilers to concealing joke punchlines. The feature is an example of how interface design can encourage behavior on a service: it has become ingrained in the culture of Mastodon to hide controversial posts as a common courtesy.

Because of its federated construction, Mastodon instances can range from a single user to hundreds or thousands. Instances are essentially small communities of like-minded individuals who can communicate, if they wish, with other instances. This allows for better moderation, where moderators and administrators are responsible for policing each of their own instances. If a particular instance gets out of hand and starts harassing other instances, it can be blocked.

Mastodon isn’t perfect and isn’t a utopia by any definition of the word. There have been issues, especially a quite nasty one surrounding Wil Wheaton, where a community of trans people harassed and threatened him, causing him to leave. On Twitter he had encouraged adoption of a blocklist, maintained by someone else, that blocked Nazis and other nasty people. The maintainer started adding trans people to the list, and since a lot of people — including myself — used this blocklist, many trans people on Twitter found themselves isolated. They were bitter about it and decided to attack Wil Wheaton as one of the promoters of this blocklist (before he found out it blocked trans people), thinking he was transphobic and hateful toward them. He added fuel to the fire himself by reporting anyone and everyone, including those who were supportive and sympathetic to his plight, so it wasn’t entirely one-sided.

This incident especially has caused discussion around how to improve moderation tools, and that’s where things really improve upon Twitter: the discussion is public, the development is public, and anyone can contribute. We don’t know how Mastodon will turn out, but we definitely can strive to do better.

Fediverse

Mastodon isn’t the only federated social media service. It isn’t even the only microblogging one, and therein lies the real potential of these services. All of these services exist in a network that has come to be called the Fediverse. Most Fediverse services today are free and federated reimplementations and reimaginings of existing social media platforms:

The federated service, and the original platform it reimagines:

diaspora* (Facebook)
Friendica (Facebook)
Mastodon (Twitter)
PeerTube (YouTube)
PixelFed (Instagram)
Pleroma (Twitter)

Mastodon implements a W3C standard protocol called ActivityPub. Any platform which implements this protocol can communicate with Mastodon instances. Pleroma is a service that’s almost identical in features to Mastodon and can communicate freely with Mastodon instances, but other platforms such as PixelFed and PeerTube can communicate directly as well, without having to interface with proprietary APIs unique to each platform. This provides a far superior experience to what we’re used to.
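Under the hood the servers are just exchanging JSON documents over HTTP. As a minimal illustration, here is roughly the kind of activity one server might deliver to another’s inbox when someone posts; the domains, IDs, and user are made up:

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://example.social/activities/1",
  "type": "Create",
  "actor": "https://example.social/users/alice",
  "to": ["https://www.w3.org/ns/activitystreams#Public"],
  "object": {
    "id": "https://example.social/notes/1",
    "type": "Note",
    "content": "Hello, Fediverse!"
  }
}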

The best thing is that there can be, and are, multiple implementations of different kinds of services. I am interested in Pleroma because of its greatly reduced dependencies compared to Mastodon. If I wanted to switch to Pleroma as my microblogging software I could — provided they support subdomains in the future.

These federated platforms and the W3C’s interest in publishing standards which aid in this have a chance of taking back at least some of the Web from the grips of companies intent on ruining our lives for their monetary gain. The Fediverse won’t replace them, but it doesn’t have to.


Keeping Watch

Digital painting of a wolf pup and parent; the parent is asleep or pretending to be while the pup is watching

Well, here’s yet another result of my late night painting sessions. It is a painting of a gray wolf pup and his parent. The parent is resting but obviously not asleep while the puppy is awake and looking like he’s on the lookout, hence the title Keeping Watch. I’m not sure whether the parent wolf is the mother or father. That’s for the viewer to decide, I guess. It began as a doodle I did right before bed in April. I wasn’t sure what I wanted to do with it. Kate encouraged me to paint it, so I did.

A table of color swatches showing the color palette used for “Keeping Watch”
Color palette for “Keeping Watch”

I went about this a bit differently than I have in the recent past. I am unsure whether it’s because I’ve been doing a lot of shirt designs lately, but I picked the colors that could be used in the painting before starting instead of following my usual practice of mapping out a color gamut on the wheel. I also painted it a bit differently than I normally would. All of the colors but one were used throughout the vast majority of the painting; I reserved that one color for a special part of the painting — the puppy’s eyes. They’re the only thing that’s blue in the painting, and barely blue at that, being a faint blue grey. The leaves in the background look to be reflecting a bit of sky that is out of view and also look faintly blue, but upon closer inspection they’re green. Juxtapositions of color can play tricks on the mind, and it’s been interesting exploring that aspect of color with this painting most of all.

I am generally happy with it. I could, of course, keep picking nits off of it until the end of the world. I had to stop at some point, but truth be told I stopped at Kate’s insistence. I have been painting wildlife a bit lately, and I rather enjoy it. The original image is 14,400 × 10,800 pixels at a traditional canvas ratio of 4:3; that makes it roughly 48″ × 36″ at 300 p.p.i. (121.92 cm × 91.44 cm at 118.11 p.p.cm).


Knowing When to Move On

I seem to be going through a lot of changes lately when it comes to computing. My last entry to this website detailed my move away from 1Password. This one will provide details on my move away from Apple hardware. Unlike the last one this is not necessarily a desired move but more like a necessary one. First, however, I need to explain how I got here, which means actually starting from the very beginning.

The first computer I can remember ever using was a Macintosh II. My father probably bought it around 1987 and kept upgrading it until upgrades weren’t available anymore, eventually replacing it with a Centris 650 in 1993 and later a vivid lime green clamshell iBook in 2000. I would play games, but what I really enjoyed doing on the computer was art — which is what he bought the computer for in the first place. My earliest memories involve using what was then Aldus FreeHand and also MacPaint. Later, my father bought Adobe Photoshop 1.0, but 2.5 is the one I remember using the most. Being able to use computers for art and design was what got me interested in them in the first place. Until the late 1990s you’d have been wasting your time trying to do art and design work on Windows.

Personally I have owned quite a few: a 2001 Titanium PowerBook G4 that I got for college, a first generation Mid 2006 Mac Pro, a Late 2012 27″ iMac, a few Mac Minis, and two more MacBook Pros.

I greatly prefer the workflow in macOS over Windows. I am not saying that macOS is without its annoyances (far from it), but out of what there is to offer it’s the one that best provides me with both a usable GUI and a usable CLI in one neat package. Windows does not provide that. Linux can provide that with lots of tinkering, though; more on Linux later.

I’ve been using a Mac for my main computer since 2006 when I bought the first generation Mac Pro. I was tired of constantly fixing Windows XP issues in my computer, didn’t like what I saw in Windows Vista, and Apple was transitioning to Intel processors. It still remains to this day the best computer I have ever owned. The case was built like a tank, and components inside of it were upgradable. I only ever had one issue with the computer. My video card failed; Apple just sent me a new one. There was no taking it into a store; there was no waiting for someone to fix it. I just put the card in myself. Self-upgrading and self-repair are concepts which are entirely foreign to today’s Apple.

I tend to have a major upgrade every six years or so, and in late 2012 it was time to get something new. I was a bit ticked off at that point because Apple had decided in Mountain Lion to remove support for 32-bit EFI, keeping my Mac Pro from being officially supported. I ended up having to partition an SSD to create a hacked 64-bit EFI. Apple also hadn’t updated the Mac Pro in quite some time at that point, and there was no word on anything new either, except an email from Tim Cook. I was souring on Apple at that point, but I really did not want to go back to Windows. I caved and decided to buy an iMac when Apple updated them that December, and bought a top-of-the-line 27″ iMac.

I haven’t been as enamored with this machine. My intention was to keep it for a few years at the most, but I’ve kept it almost exactly 5 1/2 years. I knew before I bought it that I wouldn’t be able to fix it myself easily, and that has always bothered me. I’ve always purchased AppleCare on my Macs, and it was useful for this machine. The first issue I ran into was about six months after receiving it, when the fan in the computer started making a tapping noise. The computer is a thin device with only a fan and a hard drive (part of the Fusion Drive in my model) as the moving parts. The hard drive wasn’t dead, and the noise only happened when the fan was going. After numerous phone calls and emails I finally got them to realize it was the fan. Because I don’t live remotely near an Apple Store and because I have a desktop, they sent someone to my house to fix it, where he had to go through a ridiculous process of taking the computer apart just to replace a fan.

“Self-upgrading and self-repair are concepts which are entirely foreign to today’s Apple.”

I bought a model with the Fusion Drive, where solid state storage is used to cache a larger traditional hard disk drive. To put it bluntly… it sucked. Internally the SSD and the HDD are separate bits of hardware, and literally everything is handled by software — Core Storage. This isn’t filesystem-level handling of SSD caching like ZFS can do. Every single time it decided to write to the SSD there was a noticeable pause of a few milliseconds, and because the SSD is used for frequently accessed files that momentary pause occurred quite often; when you’re handling a lot of files those pauses add up. Today Fusion Drives don’t have that particular problem, but they’re still slow because the caching isn’t handled by the filesystem but by higher level software. Within a week of running my computer with the Fusion Drive I’d separated the two parts and never returned to using it. This is important to what happened a year later: the hard drive failed. The drive they put in the machine was a Seagate, and because it’s a Seagate it failed within months of use; it’s what they do best. Thankfully I’d separated the drives. Again, Apple sent someone by to fix it. What did he bring? A Seagate drive. That one lasted longer; it failed almost eight months ago.
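
For the curious, splitting one is done with diskutil’s Core Storage verbs. A rough sketch (lvgUUID here is a placeholder for whatever the list command reports for your logical volume group, and fair warning: deleting the group erases both drives, so back up first):

diskutil cs list
diskutil cs delete lvgUUID
Splitting a Fusion Drive by deleting its Core Storage logical volume group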

Wait! There’s more! Within a year of using the computer the display started ghosting. The display in my iMac was manufactured by LG, who even faced a class action lawsuit over the same issue on MacBook Pros; all of Apple’s computers using LG displays during this time period exhibited the problem. Apple again sent someone by to replace the display. He replaced it with another LG display instead of one of the non-faulty Samsung displays Apple had started using, and that one started ghosting about six months later. I didn’t bother getting it replaced and have put up with it ever since. It’s never permanent, but it has gotten worse over the years.

Many Mac users are exasperated these days because of their recent MacBooks, which are — to put it bluntly again — overengineered pieces of shit. They contain keyboards which fail because of microscopic specks of dust, have so few ports that one has to carry around bags of adapters which Apple will gladly sell you for $50 a pop, and use internal components which are inferior to PC competitors’ comparable offerings in every conceivable category. Developers are upset, and even John Gruber has damning things to say about Apple hardware; you know it’s bad when he does. They act as if this is a new thing. Apple has been lethargic about updating their Macintosh hardware for at least seven years. They have made inferior hardware for quite a long time, choosing thinness over performance when it was unnecessary to do so. There is absolutely no reason why an iMac has to be as thin as it is; it sits on a desk and doesn’t need to be lugged around anywhere. My iMac is an i7. The new ones are all i5s. Why? Because new i7s run too hot for the tiny heatsink and chassis fan that fit in Apple’s thin case. Processing power takes a back seat to thinness… on a desktop computer. The new iMacs also don’t have optical audio output, I suppose to differentiate them further from the iMac Pro and to remove yet another useful function. The iMac Pro is a compromised machine as well, containing custom lower power (meaning also lower performing) Xeon processors that won’t overheat in its thin case; if I’m spending $5000 on a computer I don’t want a lower performing CPU.

Apple did the same shit with the Mac Pro a few years earlier. The original Mac Pro was a tower; its replacement is a weird cylindrical trash can-looking device with no internal expansion capabilities whatsoever. The new machines underperform off-the-shelf PC hardware because Apple’s overengineering of the internals restricts what they can do with the new Xeon processors. It hasn’t gone over too well; many in the high-end market had moved away from Apple by that point, and surely by now most of them have. We’re again at the point where we were in 2012 with Tim Cook’s promises of a new machine, and when the new Mac Pro drops it probably will be another overengineered piece of shit instead of what the high-end market needs: a simple tower computer with upgradable and replaceable parts. That’s exactly what I’ve built instead of buying Apple hardware again. It is past time to move on. I should have before I bought the last one.

“They have made inferior hardware for quite a long time, choosing thinness over performance when it was unnecessary to do so.”

My father bought an IBM-compatible PC sometime in the late ’80s, and that machine eventually became completely mine when he no longer needed it to run Automap for routing charter bus trips. The computer I left behind in 2006 when I bought my Mac Pro started out as that machine. I have years of experience building (and breaking) computers, so when I decided this time to build a PC instead of buying shitty Apple hardware I knew what I was doing. I didn’t go into it without a plan; my initial plan was as follows:

  1. Build a “Hackintosh” which would run macOS as my primary operating system.
  2. Failing that, run Ubuntu Linux and see if Adobe Photoshop and Illustrator would be okay in a virtual machine.
  3. Failing that, run Windows 10.

The first plan to be extinguished was B. I didn’t get far with Linux. I have a Thunderbolt Drobo 5D, and it’s not supported in Linux. I tried some third party software for managing a Drobo, but it never could recognize mine. Thunderbolt itself wasn’t the issue, because the drive the system booted from was Thunderbolt, connected through the Drobo. Even if I had gotten the thing to work, the Drobo only supports NTFS, HFS+, and ext3; I would have had to use ext3, which is rather slow. I’ve never gotten around to checking much on how Photoshop and Illustrator would work in a virtual machine.1 I do know that VirtualBox’s “Seamless Mode” is far from seamless; I am not sure about VMware Player. I will investigate this further because I do want to give Linux a fair shot; I enjoy using it quite a lot on my MacBook Pro. This will also be easier later because I want to explore replacing my Drobo with a custom built NAS, where I can have my own RAID array with a filesystem like ZFS.

I abhor Windows; that’s why it’s last. Its GUI is overly complicated in places it shouldn’t need to be and at times completely illogical; Microsoft treats its customers with contempt, as if they’re criminals, with restrictive and buggy activation DRM and extremely confusing licensing terms; features are more often than not only half thought out before implementation; and software lacks the polish Mac developers have historically applied to their applications. There isn’t a Windows equivalent to Panic or Rogue Amoeba. I have also been spoiled by the Unix shell. I know you can now use bash on Windows, but let’s face it: the Windows Subsystem for Linux is popsicle-sticked and duct-taped into the OS. Just accept Unix already, Microsoft; everyone else has. With all that said, Windows 10 is indeed the best version they have released, and if push came to shove I would reluctantly use it as my primary operating system.

At least thus far plan A has worked. Installing macOS was as simple as making a USB stick with the necessary bootloader; getting everything recognized afterwards was a completely different story. Everything involves loading custom kernel extensions from the EFI partition. Video was as easy as downloading Nvidia’s Mac drivers. Audio was difficult because Apple doesn’t yet ship hardware with my particular audio chipset. There were a few kernel panics to sort out which required some manual editing of CPU settings in the UEFI/BIOS/Whatever, but the issues were documented and easily rectified. I have had to learn a lot in the past couple of weeks.

Truth be told I have had more difficulty with Windows than I have had hacking an operating system onto unsupported hardware. My computer has an Asus motherboard, and it came with an expansion card for Thunderbolt that apparently ships with no firmware on it at all and has to be flashed when installing the drivers in Windows. That is sheer idiocy, but whatever. Twice after installing the drivers Windows became unresponsive, even in safe mode. Thankfully I’m using NVMe SSDs, so reinstalling Windows didn’t take long; the funny thing is it takes Windows longer to verify my serial than to install the OS itself. My intention with the Windows installation is just to play games, and getting my Switch Pro Controller to work was a pain in the ass. I had to download some hacky software that makes it emulate an Xbox 360 controller and that has to be copied into each game’s folder and configured per game. Contrast this with macOS, where I simply paired the controller with the computer and went about my business. Bluetooth in general seems shitty on Windows; my Bluetooth keyboard would be recognized but non-functional. Bluetooth really seems rather pointless for keyboards and mice on PCs anyway because they won’t work until the OS boots, so I’m having to use a spare keyboard at the moment even though mine works just fine once booted into macOS.
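
Back on the macOS side, to give a rough idea of what loading those custom kernel extensions looks like in practice (using Clover as an example bootloader here, with kext names that are merely illustrative of a typical build rather than a record of my exact one), the EFI partition ends up containing something like this:

EFI/CLOVER/config.plist
EFI/CLOVER/kexts/Other/FakeSMC.kext
EFI/CLOVER/kexts/Other/Lilu.kext
EFI/CLOVER/kexts/Other/AppleALC.kext
EFI/CLOVER/kexts/Other/WhateverGreen.kext
Example layout of custom kernel extensions on a Hackintosh EFI partition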

“It is past time to move on. I should have before I bought the last one.”

I am not going to go into specifics on the hardware in my new computer because I’ve already written enough. It was interesting buying individual parts after all these years, and I probably annoyed Jeff with questions so much he felt like punching my face in. The most difficult part was finding components that did not have LEDs all over them and a case without a window for showing off the inside of the computer. My motherboard was still covered with LEDs, which were thankfully easy to remove. I do not understand this fascination with turning your computer into a Christmas tree. It’s a waste of electricity for something that — like a Christmas tree — is gaudy and an eyesore. I don’t care to see the inside of the computer unless I’m fixing it or adding to it. Building the computer was relatively easy; only a few things were different from what I remembered, usually in ways which made the job easier.

I now have a Hackintosh, running macOS as my primary OS with a Windows installation just for playing games. This feels like a transitional period for me, and I might move away from Apple completely, especially since I am now at the point of hacking their OS onto a custom built machine just so I can have hardware that isn’t garbage and can be easily repaired when something goes wrong. Only time will tell what I will do, but I am open to all of the possibilities — even Windows.


  1. Yes, I am aware of free alternatives such as The GIMP, Krita, and Inkscape. While I would and do use them (Krita especially), none of them are complete replacements yet for Adobe Photoshop or Illustrator for me.

Passing on 1Password

For years I’ve used 1Password. I am no longer using it.

I’ll admit that’s extremely blunt and can be perceived as setting the tone for an article that outlines crummy things about 1Password. Nope. I just don’t agree with the direction they are taking the application. In this post I would like to describe my experiences and show what I am using now to replace 1Password.

Renting Software

There is a movement lately toward a subscription-based model for software; the traditional model of perpetually licensing paid software seems to be dying. Some say it is because Apple’s App Store is extremely restrictive and has destroyed the traditional software market by poo-pooing paid upgrades. Others think it’s because of piracy. There is some truth to the former, but it obviously doesn’t tell the entire story. The latter is bullshit: Adobe’s Creative Cloud software, for instance, has never been easier to pirate. Historically the biggest barrier to pirating Adobe software was obtaining the full installer; that’s extremely easy today. The best part is that pirates even get a better user experience.

When software has moved to this model, wanting me to fork over a portion of my monthly income in perpetuity (unless I unsubscribe), I haven’t obliged yet.1 Overcast switched to a model where it displays ads unless you pay for a subscription. iOS automatically updated it, and there I was, effectively being ransomed for money by an application whose last version I had paid for and could not return to. I deleted it immediately and started looking for alternatives.

1Password switched to a subscription model, but to their credit they haven’t stopped their software from working for people who paid for licenses in the past, unlike others have. However, new features are unique to the subscription model. Their customer service has been quite good, and they have helped me since with issues I’ve had, but I just can’t use software that won’t really be updated and eventually won’t be supported. Needless to say I won’t subscribe to their software either.

Subscription software is a scam, and in my opinion it should be illegal unless the customer is allowed to keep the software as-is at the point of their subscription’s termination. I subscribe to National Geographic, a monthly magazine. If I stop subscribing to the magazine, I still have the magazines I received while subscribed. With subscription software the opposite is true: upon cancelling, the software ceases to function or — as is the case with Overcast — starts displaying ads anew when you stop forking over money to the company. We should stop referring to it as a subscription and call it what it is: rentalware. It’s a racket that many crooks of the past would have loved to be able to run legally; today people accept it as the norm.

In a world where people pay a rental fee to listen to music they could otherwise own I’m not holding my breath for a change.

Looking for an Alternative

I originally wanted to ramble on a bit about free software, but I came to the conclusion that it was pretty much just that — a ramble. Free software is free: it doesn’t cost you anything, but it can be clunky, and support is usually antagonistic, with neckbeards telling you stupid shit like “RTFM” all the time. Those points were tangential to what I want to say here, so having outlined what made me want to jump ship from 1Password, let me get on with describing the journey to what I eventually decided upon.

There are of course other alternatives to 1Password, ranging from the rent-for-advanced-features LastPass to the free KeePass and its many derivatives. I really shouldn’t need to say why I didn’t want to use LastPass, but aside from the fact that the password storage is not in the user’s possession, multiple security breaches2 are a nice reason not to use it. I first tried out KeePass and found that while the vault format itself is solid, the software is not. There is not a single client that works worth a crap on macOS except KeeWeb, and it’s an Electron application3. The KeePass-compatible browser extensions are even worse and don’t provide even a modicum of the functionality 1Password’s or LastPass’ extensions do. I also looked at Bitwarden, a server-based solution where I would install it on my own server and use applications to access the storage there. I fully intended on installing it despite its heavy use of a Microsoft software stack I don’t otherwise use, but I discovered something else first.

Pass

I decided on pass. Pass — or “Password Store” — is a Unix terminal application which follows the ol’ Unix philosophy:

  1. Write programs that do one thing and do it well
  2. Write programs to work together, expecting the output of every program to become the input to another

So… I replaced 1Password with a command line program? I did, and I’ll admit right now it’s not for everyone. The downside of the Unix philosophy comes from its greatest strength: the software isn’t monolithic; it’s one tool in a chain of tools instead of a single application. Granted, with 1Password there are three parts: 1Password, its browser extension, and 1Password Mini (what actually interfaces with the extension). Those are all pieces of a whole though, installed with the first piece. Pass is really only the second link in a chain.

Pass works a lot differently than other password managers. It doesn’t create a vault-like database to store your data; in typical Unix manner it utilizes the filesystem. Rearranging and organizing your passwords and secure data is simply a matter of moving files around. Pass does not enforce any format for organizing your data: each secret is a simple text file of key/value pairs, with the first line being the password. A hypothetical website secret file would look like this:

I4m7h3l1qu0r
username: lahey@sunnyvaletrailerpark.com
Mr. Lahey’s novascotialiquor.ca login information

Each of those files is encrypted with a GPG key. Pass simply provides a way to access these files and decrypt them on the fly.
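
Retrieving a secret later is a single command. Assuming the hypothetical file above is stored at websites/novascotialiquor.ca, pass show prints the decrypted contents, while -c instead copies just the first line (the password) to the clipboard for 45 seconds:

pass show websites/novascotialiquor.ca
pass -c websites/novascotialiquor.ca
Showing and copying Mr. Lahey’s hypothetical login with pass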

Setting up Pass

I use macOS, but installation is trivial on other Unix-based systems, and pass is even available for Windows.

brew install pass pinentry-mac
gpg --full-generate-key
Installation of pass using Homebrew on macOS

During the installation process Homebrew (or whatever package manager you use) will install GnuPG, which is what handles the GPG keys and does the encryption and decryption of the files. If you’ve ever sent an encrypted email you’ve used it before. Before using pass a public/private key pair must be generated; that is what the second command does.

As a helping hand to Linux users, be advised pass expects gnupg2, so the command might be gpg2 for you. Key generation is an interactive prompt, so just follow the on-screen questions and generate a key with these properties:

Kind: RSA & RSA (default)
Key size: 4096
Key validity: 0 = key does not expire

In addition to these it will ask for your name, email address, and a comment; the comment is an identifier for the key. I used “Password Store”; use whatever you’d like. Keep your keys safe: if you lose them you will not be able to decrypt your passwords. Earlier, when showing how to install pass via Homebrew, I included pinentry-mac. This is a program which GnuPG invokes to show a GUI dialog box for entering your password — very convenient. GnuPG just needs to be configured to use it:

echo "default-cache-ttl 300
pinentry-program /usr/local/bin/pinentry-mac" > ~/.gnupg/gpg-agent.conf
chmod 700 ~/.gnupg/gpg-agent.conf
sudo killall gpg-agent
Configuration of gpg-agent to set timeout and the pinentry program
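
With GnuPG configured, the password store itself can be initialized by pointing pass at the new key:

pass init "Password Store"
Initialization of the password store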

"Password Store" is the comment for the GPG key that was generated earlier. Doing that will create a password store in ~/.password-store. Inserting passwords isn’t difficult at all but is beyond the scope of this post. Pass’ manual provides numerous examples and is really easy to follow. I should say if migrating from 1Password the script provided on pass’ website doesn’t create a password store that is organized very well.

Browser Extension

There is a browser extension for Firefox, Chrome, and Chrome derivatives called Browserpass which can access your password store. There is currently no Safari extension; I am not sure whether that’s because no one has been interested in making one or because Safari’s extension API doesn’t support what Browserpass needs. If I had to guess, it’s the latter.4

Unfortunately, there are multiple parts to this. In addition to the extension itself, a messaging host needs to be installed for the browsers to communicate with. Because of some really stupid shit browserpass isn’t in Homebrew. I can sort of understand the Homebrew maintainers’ logic on this, but all Homebrew needs to manage is a command line tool, not the extensions themselves. Something is said about a “better and safer experience” if they can install the command line tool and the extensions all in one go. They’re right in theory, but Chrome prohibits management of extensions via its extensions page if an extension is managed by something other than the browser; no thanks. Thankfully one can create their own repositories in Homebrew, so I have created my own, largely using what was worked out in the issue I linked to earlier.

brew install dustinwilson/tap/browserpass
browserpass-setup chrome && browserpass-setup firefox && browserpass-setup vivaldi
Installation of the browserpass CLI

When done it will tell you that you need to install the extensions (duh), but also that a browserpass-setup command needs to be run, which copies the appropriate messaging host file to each browser. I have Chrome, Firefox, and Vivaldi installed on my computer, so I told it to set up those; install what you need. Perhaps when I get some time in the future I’ll have the formula do this automatically to save the step.

That’s it; Browserpass should now work in your browser(s). One caveat to note with the extension is that it expects the filenames of password files for websites to be in the format domain.tld; it doesn’t matter how many folders deep the file is, though. You do not have to organize your store like this, but autodetection by the extension requires the format because decryption of the password file doesn’t happen until you click to fill in the form and provide your passphrase. Consult Browserpass’ requirements for more information. Another caveat is that it doesn’t yet help with creating passwords; that is being discussed, though.
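
To continue the earlier hypothetical, either of these paths would be autodetected when visiting novascotialiquor.ca:

websites/novascotialiquor.ca
personal/lahey/logins/novascotialiquor.ca
Hypothetical password file paths Browserpass will match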

Syncing

One thing I haven’t mentioned is syncing. The manner accepted by the community (and one pass provides a small bit of help with) is to use a private remote Git repo to sync between devices. Git isn’t friendly to even seasoned users, so I can understand averseness to using it. Thankfully you can use whatever service you wish; just, needless to say, make sure it’s a secure one.
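
Because pass wraps Git directly, a minimal sketch of that setup looks like this (the remote URL is a placeholder for your own private repository):

pass git init
pass git remote add origin git@example.com:you/password-store.git
pass git push -u origin master
Syncing the password store to a hypothetical private Git remote

Once the store is a Git repo, pass commits changes to it automatically; pushing and pulling remain up to you.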


Like I said, there was a good bit to set up, but this approach provides me with quite a bit of freedom that I rather enjoy. Everything is encrypted in a format that is openly available, independently tested, and useful for more than just password storage. The best part is that I don’t have to rent software. Pass doesn’t end here, either. There are mobile apps available for iOS and Android, and surprisingly enough the iOS app works really well; I was thoroughly shocked when I first tried it to find a working, free iOS app that didn’t contain any bullshit. I haven’t tried the Android client yet, though. There are many plugins for pass as well that do things like OTP5 and vaulting with Tomb, along with a compatible alternative called gopass which contains extra features not found in the official executable; either or both can be installed without affecting each other or Browserpass.
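
As a quick taste of those plugins, pass-otp stores a standard otpauth:// URI as a secret and generates codes from it on demand. Assuming a hypothetical entry name:

pass otp insert websites/novascotialiquor.ca-otp
pass otp websites/novascotialiquor.ca-otp
Storing an otpauth:// URI and generating a one-time code with pass-otp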


  1. Thus far employers have paid for my Creative Cloud subscription, and it hurts my soul.
  2. I ran out of words to apply links to for LastPass security breaches; there are that many.
  3. I don’t really have a problem with Electron, but it’s ridiculously wasteful in this case especially when it needs to be forever open and unlocked if communicating with a browser extension.
  4. Insert obligatory rant about the need for standardized extensions here.
  5. I provide an easy way to install it via my Homebrew tap by entering brew install dustinwilson/tap/pass-otp into your terminal.