Clark Rasmussen

Deploying to EC2 from GitHub via S3

Notes on deploying from a GitHub repository to an EC2 instance via S3 to avoid having to save GitHub credentials at AWS.

I won’t get into the reasoning behind it but I recently found myself looking to set up continuous code deployment from a private Git repository at GitHub to an EC2 instance at Amazon Web Services.

There are some documented ways to do this that combine GitHub Actions and AWS CodeDeploy but most of what I saw required logging into a GitHub account via AWS and stashing those credentials, then using them to pull from the repo during the deployment process.  For various reasons, that wasn’t going to work for me.  Deployment keys were also an option that had to be ruled out.  (I’m not saying this was a typical use case, don’t worry.)

What I ended up doing instead was adding a step in the middle using AWS S3.  Essentially, the code gets staged there and then deployed to the EC2 instance, all triggered by GitHub Actions.

It works like this…

The EC2 instance (project-instance [obviously not the actual name I used but I’m going to be obfuscating names with horribly creative replacements]) needs to have the CodeDeploy Agent installed.  This instance is Ubuntu 22.04 and the automated tooling for installing the agent on 22.04 is broken so I followed some manual steps.

sudo apt-get update
sudo apt-get install ruby-full ruby-webrick wget -y
cd /tmp
# (first download codedeploy-agent_1.3.2-1902_all.deb from the CodeDeploy S3 bucket for your region)
mkdir codedeploy-agent_1.3.2-1902_ubuntu22
dpkg-deb -R codedeploy-agent_1.3.2-1902_all.deb codedeploy-agent_1.3.2-1902_ubuntu22
sed 's/Depends:.*/Depends:ruby3.0/' -i ./codedeploy-agent_1.3.2-1902_ubuntu22/DEBIAN/control
dpkg-deb -b codedeploy-agent_1.3.2-1902_ubuntu22/
sudo dpkg -i codedeploy-agent_1.3.2-1902_ubuntu22.deb
systemctl list-units --type=service | grep codedeploy
sudo service codedeploy-agent status

I restarted the instance at this point but I’m not sure if that’s necessary.  It’s a little-used service so I could get away with that.

With that out of the way, we bring up the AWS Management Console and head over to S3, mostly because it’s the easiest part to take care of.  There, we create a bucket (referred to henceforth as deploy-bucket) with default settings.  It’s not strictly necessary, but I added a lifecycle rule to expire all objects in the bucket after two days to keep storage costs down.
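For reference, that kind of rule can also be applied from the CLI with aws s3api put-bucket-lifecycle-configuration; the configuration itself looks something like the following (the rule ID is arbitrary, and the empty filter applies the rule to every object in the bucket):

```json
{
  "Rules": [
    {
      "ID": "expire-deploy-bundles",
      "Status": "Enabled",
      "Filter": {},
      "Expiration": { "Days": 2 }
    }
  ]
}
```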

Staying in AWS, we then move over to IAM, where we’ve got a handful of things to do.

We create a new user (deploy-user).  While creating it, we add a new group (CodeDeploy) and give that group the AWSCodeDeployDeployerAccess and AmazonS3FullAccess permissions policies, then add deploy-user to the group.  It seems like AmazonS3FullAccess might be overkill, since that access could probably be limited to deploy-bucket, but I’m going off of some other docs and not really experimenting.  We also give the user programmatic access and save the generated keys for later.
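If you did want to tighten that up, a bucket-scoped policy in place of AmazonS3FullAccess might look something like this sketch (I haven’t tested this exact policy in this setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::deploy-bucket",
        "arn:aws:s3:::deploy-bucket/*"
      ]
    }
  ]
}
```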

We need the EC2 instance to have a proper role.  If it already has one, it can be edited to include the permissions that follow.  Otherwise a new role with those permissions can be created.

If we’re creating the new role, the entity is an AWS Service and the use case is EC2.  We’ll call it ec2-instance-role.  It gets the AmazonEC2RoleforAWSCodeDeploy permissions policy.

Now we need a service role.  Once again, the entity is an AWS Service and the use case is EC2.  Permissions are AWSCodeDeployRole and we’ll call it code-deploy-service.

Now we edit code-deploy-service so that the trust relationships policy is as follows:

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Principal": {
				"Service": ""
			},
			"Action": "sts:AssumeRole"
		}
	]
}

Since we created a new role for the EC2 instance, we need to edit the instance to use that role.  To do that, we head over to EC2, open the instance, click the “Actions” drop-down, select “Security” and “Modify IAM Role.”  From there, we select the newly-created ec2-instance-role and save the change.

Now we move over to AWS CodeDeploy, where we create a new application (new-application), set to EC2/On-premises.

Then we create a deployment group inside new-application.  Name it something like new-application-deploy.  For the Service Role, copy over the ARN for code-deploy-service.  The deployment type is in-place.  We set the environment configuration as Amazon EC2 instances, then select from the tag group where the Key is Name and the Value is project-instance.  If we didn’t install the CodeDeploy Agent earlier, we have the option to do it at this point, using the default settings.  Then we set the deployment settings to AllAtOnce and click Create Deployment Group.

Now we head over to GitHub and, under Settings, go to Secrets and Variables.  Here we add new secrets for the previously saved AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

Finally, we add some definitions to our repository.  In appspec.yml in the root directory, we can do something simple like the following:

version: 0.0
os: linux
files:
  - source: /
    destination: <path-to-destination>

This would deploy the entire repo into the specified destination path.  I’ve actually got some hooks defined, too, but that’s a whole other thing.
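As a sketch of where those hooks would go, appspec.yml supports lifecycle event scripts alongside the files section; the script path and event below are just placeholders, not my actual hooks:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: <path-to-destination>
hooks:
  AfterInstall:
    - location: scripts/after_install.sh
      timeout: 300
      runas: root
```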

The last piece is .github/workflows/workflow.yml.  It looks as follows:

name: CI/CD Pipeline
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: us-east-2
    strategy:
      matrix:
        appname: ['new-application']
        deploy-group: ['new-application-deploy']
        s3-bucket: ['deploy-bucket']
        s3-filename: ['${{ github.repository }}/${{ github.ref_name }}/${{ github.sha }}']
    steps:
      # Step 1
      - uses: actions/checkout@v3
      - name: Push Deployment to S3
        id: push
        run: |
          aws deploy push \
            --application-name ${{ matrix.appname }} \
            --s3-location s3://${{ matrix.s3-bucket }}/${{ matrix.s3-filename }}.zip \
            --source .
      # Step 2
      - name: Create CodeDeploy Deployment
        id: deploy
        run: |
          aws deploy create-deployment \
            --application-name ${{ matrix.appname }} \
            --deployment-group-name ${{ matrix.deploy-group }} \
            --deployment-config-name CodeDeployDefault.OneAtATime \
            --file-exists-behavior OVERWRITE \
            --s3-location bucket=${{ matrix.s3-bucket }},key=${{ matrix.s3-filename }}.zip,bundleType=zip

We define some configuration variables, then two steps to the deployment.  The first step pushes the repository code to the previously-created S3 bucket as a zip file, named after the repository, the branch, and the commit SHA.  This naming convention is to allow for future projects to also use this S3 bucket.
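That key-naming scheme can be sketched as a tiny helper (hypothetical; it just mirrors the ${{ github.repository }}/${{ github.ref_name }}/${{ github.sha }} pattern from the workflow):

```python
def s3_object_key(repository: str, ref_name: str, sha: str) -> str:
    """Build the S3 key for a deployment bundle: <repo>/<branch>/<sha>.zip.

    Keying on repository, branch, and commit keeps bundles from separate
    projects (and branches) from colliding in the shared deploy bucket.
    """
    return f"{repository}/{ref_name}/{sha}.zip"


# e.g. s3_object_key("me/project", "main", "abc123")
#   -> "me/project/main/abc123.zip"
```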

The second step then creates a deployment using that zip file as the source.

The end result is that, on every commit to the repository’s main branch, GitHub Actions will fire and the repo’s code will be zipped up and sent to S3, with a new deployment created.  Then, over on the CodeDeploy side of things, that zip file will be pulled in and deployed to the EC2 instance.

There’s probably a way to only push edited files but, in this specific case, the repo is small enough that it doesn’t hurt to deploy the whole thing over and over again.

Random Mastodon Setup/Migration Thoughts

Like seemingly many people, I recently started experimenting with hosting a Mastodon instance.

When I started, because I was considering it “just” an experiment, I decided to take the path of (nearly) least resistance and fired up a droplet over at DigitalOcean using their 1-Click app.  That gave me a good opportunity to play with the software a bit and decide that I wanted to do more than just experiment.

While I was experimenting, an upgrade to the Mastodon software was released.  I attempted to upgrade and hit some roadblocks based on me not knowing exactly what was installed where on that droplet.  I decided not to upgrade at that time, since I was still experimenting.

Eventually I determined that this wasn’t going to be a throwaway thing.  If I was going to continue maintaining the instance, I wanted to know exactly what was installed where, so I decided to rebuild the server from scratch.

And…  As long as I was doing that, I’d move it from DigitalOcean to AWS.  All my other stuff was at AWS so it just made sense that if I wasn’t going to be taking advantage of the 1-Click anymore, there was no need to be off at DigitalOcean (not that I had any problems with DigitalOcean’s service, to be clear).

One thing I would not have to migrate was uploaded media.  As I said, I took the path of (nearly) least resistance when I first set up, the “nearly” accounting for having dumped media off to an S3 bucket served via CloudFront.  As such, the following setup and migration notes wouldn’t have to account for that.

Before doing this I did see a pretty awesome “how-to” on getting Mastodon set up in the AWS ecosystem.  That one, however, assumes that you’re going all-in, with load balancing and RDS and ElastiCache.  Maybe that will be my next step.  For this, however, I decided to do a more one-to-one migration – one droplet to one EC2 instance.

I should probably note that this project led me on a whole other side-quest of reorganizing my AWS properties.  Because, as seen by the aforementioned path of (nearly) least resistance, sometimes I can’t stop myself from adding complications.

After that side quest, I made a couple attempts at this that failed miserably.  I won’t document exactly what went wrong there but the great thing about all these cloud-based resources is that when something goes wrong, it’s easy to just trash it and start over.

EC2 Instance

I started by creating the EC2 instance.  I used Ubuntu 22.04 with a 64-bit arm processor.  Since I’ve got hardly anyone using this (as of writing, there are 42 users, five of which are accounts that I own for various projects), I started small with a t4g.micro instance.

I also added a couple security groups that I already had in place; one that had permission for my home IP address to connect via SSH and another that allowed the world to connect via HTTP/S.

Elastic IP Address

I hopped over to Elastic IPs, allocated one, and assigned it to the newly-created EC2 instance.

I’m honestly not sure what would happen without a static IP address.  Restarting the instance would get me a new IP but I think I could alias the DNS records to point at the EC2 instance directly, so the change wouldn’t matter.  Maybe SSL certs would be a problem?

I’ll admit that my AWS-foo is weak.  It may no longer be strictly necessary but I expect a web server to have a dedicated IP so I gave it one.

General Server Setup

After SSHing into the server, I ran sudo apt update, just to get that out of the way.  I also added 2GB of swap using the following commands:

sudo fallocate -l 2G /swapfile

sudo chmod 600 /swapfile

sudo mkswap /swapfile

sudo swapon /swapfile

echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

sudo apt install systemd-zram-generator

I edited /etc/systemd/zram-generator.conf to set zram-fraction = 1 and compression-algorithm = zstd.
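The resulting /etc/systemd/zram-generator.conf ends up looking like this (the [zram0] section name refers to the default zram device):

```ini
[zram0]
zram-fraction = 1
compression-algorithm = zstd
```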

This was necessary to solve some issues with compiling Mastodon assets.

I’d thought about editing /etc/apt/apt.conf.d/50unattended-upgrades to enable unattended upgrades (Unattended-Upgrade::Automatic-Reboot “true”) but decided that, since this instance does have users other than me, I shouldn’t do that.

I gave it a restart, then I moved on to the actual requirements.

Install Node.js

I chose to install Node.js via a NodeSource PPA because I needed v16 and npm comes pre-packaged this way.

curl -sL -o /tmp/

sudo bash /tmp/

sudo apt install nodejs

Install Yarn

Yarn is a package manager for Node.js.  Supposedly it comes pre-packaged with modern versions of Node but I wasn’t seeing it so I just installed it myself.

curl -sL | gpg --dearmor | sudo tee /usr/share/keyrings/yarnkey.gpg >/dev/null

echo "deb [signed-by=/usr/share/keyrings/yarnkey.gpg] stable main" | sudo tee /etc/apt/sources.list.d/yarn.list

sudo apt-get update && sudo apt-get install yarn

Install PostgreSQL

Postgres is the database behind Mastodon.  There’s some more setup that happens later but getting the initial install done is pretty simple.

sudo apt install postgresql postgresql-contrib

Install Nginx

Nginx is the web server used by Mastodon.  There’s a ton of setup after getting Mastodon itself installed but getting Nginx installed is another single-line command.

sudo apt install nginx

Add Mastodon User

The Mastodon software expects to run under a “mastodon” user.

sudo adduser mastodon

sudo usermod -aG sudo mastodon

The first command prompts for a password and additional user details.

Add PostgreSQL User

Mastodon also needs a “mastodon” user in Postgres.

sudo -u postgres createuser --interactive

It should have the name “mastodon” and be assigned superuser access when prompted.
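If you’d rather skip the prompts, the same thing can be done non-interactively (assuming, as above, that the user gets superuser access):

```sql
-- Run as the postgres superuser, e.g. via: sudo -u postgres psql
CREATE USER mastodon WITH SUPERUSER;
```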

Install Mastodon Dependencies

There are a handful of packages that Mastodon depends on that need to be installed before we get into working with Mastodon itself.

sudo apt install imagemagick ffmpeg libpq-dev libxml2-dev libxslt1-dev libprotobuf-dev protobuf-compiler pkg-config redis-server redis-tools certbot python3-certbot-nginx libidn11-dev libicu-dev libjemalloc-dev

Switch to Mastodon User

From here on out, we want to be operating as the newly-created “mastodon” user.

sudo su - mastodon

This puts us in the /home/mastodon/ directory.

Clone Mastodon Code from Git

Mastodon’s code lives in a Git repository.  This pulls that code down and gets us working with the correct version.

git clone live

cd live

git checkout v4.0.2

Install Ruby

Mastodon currently requires Ruby v3.0.4, so we explicitly install that version.

sudo apt install git curl libssl-dev libreadline-dev zlib1g-dev autoconf bison build-essential libyaml-dev libreadline-dev libncurses5-dev libffi-dev libgdbm-dev

curl -fsSL | bash

echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc

echo 'eval "$(rbenv init -)"' >> ~/.bashrc

echo 'export NODE_OPTIONS="--max-old-space-size=1024"' >> ~/.bashrc

source ~/.bashrc

rbenv install 3.0.4

rbenv global 3.0.4

Installing Ruby takes a minute.

We also need the “bundler” Ruby gem.

echo "gem: --no-document" > ~/.gemrc

gem install bundler

bundle config deployment 'true'

bundle config without 'development test'

bundle install

Install Javascript Dependencies

Mastodon requires that Yarn be set in “classic” mode.

sudo corepack enable

yarn set version classic

yarn install --pure-lockfile

Mastodon Setup

At long last, I was finally ready to actually set up the Mastodon instance.  I started by allowing SMTP connections, so that I could send a test message during the setup process.

sudo ufw allow 587

After that, I ran the Mastodon setup process.

RAILS_ENV=production bundle exec rake mastodon:setup

Since I was migrating from an existing Mastodon instance, I partially used dummy data here.  Specifically, I used a different S3 bucket than I was using in production so that it wouldn’t overwrite any live data.

I said “yes” to preparing the database and to compiling assets.

Then I created an admin user.  The only real reason to do that was to give something to test with after finishing the setup and before transferring the existing instance data over.

Configure Nginx

With Mastodon configured, it was time to configure Nginx to actually serve up those files via the web.  I started by opening the server up to HTTP and HTTPS traffic.

sudo ufw allow 'Nginx Full'

I’d been thinking that I could get away with the more-restrictive “Nginx HTTPS” profile for that setting but wasn’t accounting for how Certbot requires HTTP access, which I would have run into a few steps later had I not caught it here.  I started having a bad feeling about my plan at this point, one which came to a head a little bit later.

Next up was adding Mastodon to the Nginx configuration, which was easy thanks to a file that just needed to be copied over from the Mastodon install and a symbolic link that needed to be set up.

sudo cp /home/mastodon/live/dist/nginx.conf /etc/nginx/sites-available/mastodon

sudo ln -s /etc/nginx/sites-available/mastodon /etc/nginx/sites-enabled/mastodon

That file doesn’t have all of the correct configuration, though.  I opened it up in a text editor and updated the server_name values to “” (I would have also needed the “www” version but I don’t have DNS set up for that [“www” on a gTLD just looks weird to me]).  I also needed to comment out the “listen 443” lines because I didn’t have an SSL cert yet.

At this point I realized what that bad feeling was about.  There was no way I could get the SSL cert without moving the domain over to this box.  Expecting it to fail, I ran the cert request command anyway.

sudo certbot certonly --nginx -d

Had I been using the “www” domain, I would have also needed to request that here.  It wouldn’t have mattered, though, because after entering my email address and agreeing to terms and conditions, the request failed, as expected.

If I were using Elastic Load Balancer, the certificate would be handled on the AWS side of things and I wouldn’t need Certbot.  I could have updated my DNS to answer the challenge regardless of where the domain was actually pointed.  But I wasn’t going that route.  I decided to move forward and add a step for requesting the certificate later.

I restarted Nginx to at least put the changes I had made into use.

sudo systemctl reload nginx

Set Up Mastodon Services

There are three services to enable to get Mastodon working.

sudo cp /home/mastodon/live/dist/mastodon-*.service /etc/systemd/system/

sudo systemctl daemon-reload

sudo systemctl enable --now mastodon-web mastodon-sidekiq mastodon-streaming

I also added a couple of weekly cleanup tasks to the mastodon user’s crontab.

47 8 * * 2 RAILS_ENV=production /home/mastodon/live/bin/tootctl media remove
37 8 * * 3 RAILS_ENV=production /home/mastodon/live/bin/tootctl preview_cards remove

These could run at any time.  They don’t even have to be weekly.

Migrate Existing Data

If this had just been an initial setup, I’d be about ready to go.  Because I was migrating, though, I needed to take care of a handful of other things.

First, I updated my Mastodon config so that the secrets matched those on my old instance.  This made it so that admin account I set up earlier wouldn’t work but I wasn’t worried about that anymore.

On my original instance, I stopped the Mastodon services and ran a database backup.

sudo systemctl stop mastodon-{web,sidekiq,streaming}

pg_dump -Fc mastodon_production -f backup.dump

I copied that backup file over to the new instance, stopped the services there, deleted and recreated the database (since it was junk at this point anyway), restored the backup, and (because the backup was from an older version of Mastodon) ran the database migration script.

sudo systemctl stop mastodon-{web,sidekiq,streaming}

dropdb mastodon_production

createdb -T template0 mastodon_production

pg_restore -Fc -U mastodon -n public --no-owner --role=mastodon -d mastodon_production /home/mastodon/backup.dump

RAILS_ENV=production bundle exec rails db:migrate

I updated the DNS to point to the new instance, then ran the Certbot commands from above to get an SSL certificate.

sudo certbot certonly --nginx -d

Then I went back into the Nginx config to un-comment the “listen 443” lines and update the location of the certificate files.  I also went back to the Mastodon config to update it to point at the correct S3 bucket for file storage.

With that done, I confirmed that the Nginx config was valid, restarted Nginx, brought the Mastodon services back online, rebuilt the user home feeds, and deleted the outdated database backup.

sudo nginx -t

sudo systemctl reload nginx

sudo systemctl start mastodon-{web,sidekiq,streaming}

RAILS_ENV=production ./bin/tootctl feeds build

rm /home/mastodon/backup.dump

Then I noticed that the site still wasn’t being served up properly due to permissions issues, which was an easy fix.

chmod o+x /home/mastodon

Wrap Up

At this point, the Mastodon instance was up and running on EC2 but I still had some cleanup to do.

I left the Digital Ocean droplet up and running for a bit longer, just in case I needed something from it.

I also had some custom tools I’d built that needed to migrate over to the EC2 instance.  I manually copied them over and left myself with a future project of updating the deployment process for them to account for the move.

I’m not certain I did all of this the “right” or “best” way.  I was learning as I went, though, and learning is important.

Building a Custom Hockey Stick Holder

Instructions on how to build my custom hockey stick display brackets. Mostly so that I have it documented next time I need to do it.

Several years ago I wrote up my designs for a custom hockey puck display case.  When I wrote that, it was mostly as reference for myself, so that it would be documented the next time I went to build one.

I’ve designed custom brackets for mounting hockey sticks, as well, and it’s time for me to make another set of those.  Of course, that’s not documented, either, so here we are.

I start with 1/2″ x 1-1/2″ pine or poplar craft board, depending on what I happen to have on hand or what I find first when I run out for materials.  Really, the quality of the wood mostly matters for how you want to finish it, and since I just spray paint mine black (getting ahead of myself here), it doesn’t matter much in my case.

Materials in hand, it’s just a couple 45-degree cuts and one straight chop to round it off, with the created pieces looking like this:

Could the bottom piece be rounded up to 2 inches?  Certainly, but the first one I made was a little short, so everything I’ve done since has followed suit.

Depending on how you’re going to mount it to the wall, you probably want to put a hole through the larger piece, in the middle near the flat end, so you can run a screw through it later.

Where to place a hole for mounting my custom hockey stick bracket.

Glue the pieces together as they appear in the diagram and you’ve got yourself a bracket.  You can stain it or paint it to make it prettier, of course.  As I mentioned above, I paint mine black because the sticks I’ve got are black so it matches.

You need to make two brackets for each stick, unless you’re really good at balancing and don’t expect the stick to be bumped at all.

This is the end result:

My custom hockey stick brackets in use.

My Trello Weekly Summary

I’m trying to get myself to better recognize work I’ve completed, so I wrote a process to look at my Trello board and send a Slack message to myself. This is why and how.

I used to write a lot about Trello.  The API, how I use it, how my day-job teams were using it.  It’s been five years since I’ve said anything new.

A big part of that is that I haven’t been doing anything new.  I did a ton of Trello-related work in about a one-year period and then was done.  But I still use it every day and recently found a gap that I decided to close.

My personal Trello board has three columns.  “Not Started,” “In Progress,” and “Done.”  Pretty simple.

The “Done” column is the one I decided needed some help.

The “Done” column is important to me.  It’d be easy to not have one, simply archiving cards as they were completed.  I think it’s easy to get lost in a never-ending “Not Started” list without seeing the things that have been completed.  Instead, I have a process that runs nightly to archive cards over two months old, after they’ve had time to pass out of my mind.

I’ve been finding of late, though, that that’s not working for me.  I see the “Done” column, I know it’s there and that there’s a lot of stuff in it, but that’s what I’ve boiled it down to.  “A lot of stuff” that’s done and easy to ignore.

So I decided to add another automated process.  This one runs weekly and sends me a message via Slack, detailing how many cards I’ve completed in the last week.

I’m not doing this for analytics or anything.  I don’t have any goals to accomplish a certain number of tasks (they’re not estimated or anything so they’re all different sizes, anyway).  I just want to get another reminder in front of my face.  This one is a little disruptive, so maybe it’ll stick differently.

For the record, the following is the code for this new process (with a little obfuscation):

This makes use of my TrelloApi and SlackApi helper classes.
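Since the code itself didn’t survive here, the following is a rough sketch of the counting half of the logic, assuming cards have already been fetched from the “Done” list as Trello-style card JSON.  The function name and shape are mine, not the actual TrelloApi/SlackApi code; the real process fetches the list via the Trello API and posts the count to Slack.

```python
from datetime import datetime, timedelta, timezone


def count_completed_since(done_cards, days=7, now=None):
    """Count cards in the 'Done' list whose last activity falls within
    the past `days` days.  `done_cards` is a list of dicts shaped like
    Trello's card JSON; only the 'dateLastActivity' field is used."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    count = 0
    for card in done_cards:
        # Trello timestamps look like '2023-01-08T12:00:00.000Z';
        # normalize the trailing 'Z' so fromisoformat() can parse them.
        moved = datetime.fromisoformat(card["dateLastActivity"].replace("Z", "+00:00"))
        if moved >= cutoff:
            count += 1
    return count
```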

DetroitHockey.Net 25th Season Logo Design

A look at DetroitHockey.Net’s 25th Season logo on the site’s 24th birthday.

Today marks the 24th birthday of DetroitHockey.Net, my Detroit Red Wings-centric hockey site and the first site I ever created.

Normally the 24th birthday would be the start of the 25th season but in the world of COVID, timelines aren’t what they once were.  Nonetheless, earlier today I unveiled a 25th Season logo at DH.N.

The commemorative 25th Season logo for DetroitHockey.Net

I’d been doodling quite a bit in the lead-up to the site’s 24th birthday.  I knew I wanted to differentiate the 25th Season logo from the 20th Season logo by putting the “25” front and center, rather than the site logo, but I had a hard time with anything beyond that.

I was trying to avoid using a bounding shape and was focusing on just using “25” and the site logo when I realized I couldn’t make those work the way I wanted to.  I didn’t want to do a shield (the DH.N logo) inside of another shield, so I went with a circle as the bounding shape.  I felt like the circle needed to be broken up to help distinguish the anniversary logo from the site’s “promotional” logo, so the ribbon came in and the site logo was lowered to break the bounding circle at the bottom.

The ribbon gave a good place for most of the anniversary-related text.  Like it is in the site’s “promotional” logo, “DetroitHockey.Net” arches across the top part of the bounding circle.

The four stars across the bottom part of the logo represent the four Stanley Cup Championships the Red Wings have won since the site was founded in 1996.

My one complaint about this is that it feels like there is empty space before and after the “DetroitHockey.Net” text.  It’s necessary to have some space to let the individual elements breathe but that feels like just a little too much and I couldn’t figure out a better way to handle it.

I’m a Bad Citizen of the Open Source Community

On how I learn, how I code, and not sharing well with others.

I’ve always said that I love the open source community because of the opportunity it provides for riffing off of each other.  I’ve also acknowledged that I have the tendency to re-invent the wheel.  If those seem at odds with each other, it’s because they kind of are.

How that usually works for me is that I’ll Google to see if someone has solved the problem I’m working on and, if so, I’ll learn what I can from what they did and re-work it to fit my specific situation.  Then maybe I’ll write about it and embed my code in the blog post.

I do this because I’ve always wanted ideas, not code, from other people.  I don’t want to composer install their stuff, I want to see how they did it and then do it myself.  So when it comes time for me to share, I write about it but don’t make my code particularly accessible.

There are two problems here.  One is that my way isn’t particularly efficient.  I need to be better about using other people’s code and not reinventing the wheel.  The other is that I’m not sharing well because my code stays locked up in my own repos.

Not sharing.  Mrs. Biondolillo would be disappointed in me.

I realized this as I was trying to consolidate a bunch of my code into a framework package.  In doing so, I found that I have a bunch of one-off tools and classes.  Things that I don’t use much and didn’t bother to build tests for or anything but that might benefit others a bit.  And I never put them out there.

So I’m going to start dumping some of these out to GitHub, starting with the current version of my Trello API helper class.  If no one uses them, that’s fine.  At least they’ll be out there to find out.  And I’ll start being a better citizen of the open source community.

Integrating Invision Power Board (back) into FHS

Notes on how the project integrating with Invision Power Board went and how it was done.

As I’ve mentioned in the past, FHS has something of an odd history.

It was originally a part of DetroitHockey.Net, which at that time used Invision Power Board as a forum system.  When I spun FHS off into its own site, I migrated user management and messaging to a system built around a Slack team.  That served us well for several years but it became apparent that it had flaws, one of which being difficulty scaling up.  As such, I made the decision to bring Invision Power Board back in, running it alongside the Slack team.

IPB gave me user management out of the box, so I removed much of my custom-built functionality around that.

I was able to add our Slack team as an OAuth2-based login method for IPB, so existing members could continue logging in with their Slack credentials and all of the other Slack integrations in the site could key off of that information.

To allow users to log in and out using the forum functionality but return to a non-forum page from which they came, two changes to IPB code were required in their login class, /community/applications/core/modules/front/system/login.php.

On lines 59-61, we force their referrer check to accept external URLs.

Then at line 259, we check whether or not a referrer was defined and, if so, use it.

To determine whether or not a member is logged in in non-forum areas of FHS, we tap into IPB’s session management and use it to populate our own.  It’s a little messy, especially as IPB’s code is not meant to be included into a non-IPB codebase.  We end up having to do some cleanup but it looks something like this:

First we pull in the session details.

If we need to, we can get the member ID.

Then we revert as many of the changes IPB made as possible.

That gives us an IPB installation acting as single-sign on for the site.

Other points of integration are driven primarily by the IPB API.  On league creation, a forum and calendar are created for the league, with the IDs of each stored in the league config so that messages can automatically be posted there later.

I also added tooling that allows for cross-posting between the forums and Slack.  As the expectation is that some leagues will continue to use Slack while most use the forums, the idea is for no messages to get lost between the two.  This might lead to over-communication but I would rather see that than the opposite.

On COVID Homeschooling and Technology

My latest fight with technology in the face of COVID: Setting up accounts for my daughter that don’t allow for parental controls that respect her blended family.

The thing that’s driving me surprisingly nuts about trying to homeschool my daughter during “these trying times”? Not the time management; I expected that to be bonkers. Instead it’s the technology.

I’m a software engineer. How can the technology be a problem?

As a developer, I know the limitations of our industry and I can see when my use case isn’t impossible, just something the product wasn’t designed for. I’m also well aware of our industry’s history of designing for the situations that developers themselves are most familiar with rather than those of their potential customers.

As such, today’s frustration comes due to Google and 1Password and their definition of “family.”

The kid has an Android tablet, which she uses with a locked-down guest account, with my Google account as the primary user. Part of that lock down is that she doesn’t get access to Chrome. This has never been a problem when the tablet was “just a toy” but now that she’s using it for school, she needs Chrome.

The way to manage what sites she can go to is with a child account administered by me. Okay, fine, I didn’t want to give her a Google account at this age but Google has the option for accounts to be managed by a parent so I’ll give it a shot. But I can’t create her account – even though I can manage it after it gets created – because my wife and I have a family group for sharing media and my wife is the manager, so she has to do the initial setup. A small hurdle but an annoying one nonetheless.

But wait! This means that my kid’s account will be tied to our family group! The problem there is that she has two other parents – her mom and step-dad – who should also have the ability to manage what she can do with this new Google account of hers. Shockingly, my ex-wife and her husband are not a part of our Google family group. The end result is that my daughter’s account can either be managed by her dad and step-mom or her mom and step-dad (assuming they have a Google family group over there, that’s none of my business) but not any other combination of the four of us.

And it’s not like Google doesn’t know about this problem; a support thread about it was created over a year ago.

So I set up this new Google account and drop the password for it into 1Password, where I have a vault of the kid’s passwords. That vault is shared with her mom, of course, right? No, because, like with Google, she would have to be in my 1Password “Family” for me to be able to share that. I can share my daughter’s vault with her step-mom but not her birth mother.

To be fair to 1Password, there is a workaround in which I could give my ex-wife a guest account on my family that only has access to our daughter’s vault. However, if the whole point of shared vaults is to give each user a single place to access everything, that point is lost by forcing her to have two separate 1Password logins.

This is two different tech companies both independently deciding that “family” means a bunch of people living in the same house, when real-world families look way different than that.

As I said, I know this industry has a problem with developers building solutions for people who look like them. It’d be easy to say the stereotypical development team of young, single guys didn’t consider the possibility of a blended family using the product.

If you do that, though, you have to also assume that the team of young, single guys included no children of divorce, which just isn’t statistically likely. So, instead, I can only assume that both Google and 1Password made the conscious choice that blended families don’t count.

Fantasy Hockey Sim: The Big Damn Database Refactor

Thoughts on my recently-completed refactor of Fantasy Hockey Sim, which started out with a small feature request and spiraled out of control.

I recently completed a year’s worth of upgrades to Fantasy Hockey Sim.  They were all important changes and the site is better with them completed, but it was never supposed to be that way.  I want to try to make some sense of it here.

It started with a suggestion.  “Can we track TOI and ATOI for players in their season stats?”  That would be time on ice and average time on ice.  And, given that we have the number of minutes each player skates in every game, yeah, that should have been easy enough to add.

So I started to add the two new fields to the stat_player table, where each player’s season stats are stored.  Looking at that, though, I was struck by how wrong that table was.  And that’s where the snowball started.

A brief history lesson…

Long before Fantasy Hockey Sim was a thought, I was building tools for managing the Fantasy Hockey League (now the DetroitHockey.Net Fantasy Hockey League).  In 2006, I greatly expanded on those tools and developed a system for not just managing the league in its current state but also displaying historical data, such as career stats and past games played.

To do that, I wrote code and designed a database centered around importing archived historical data.  In that context, I always knew that stats belonged to Team X and/or Player Y, as played for a given game type of Season Z.

In 2013, I updated this system to handle multiple leagues, so then the stats belonged to Team A and/or Player B in League C for a given game type of Season D.

The problem was that this was exactly how the stat_player table was keyed.  From the original 2006 design, it had never been updated to have an auto-incrementing key.  Instead it had a composite key consisting of player_id, team_id, league_id, game_type, and season_id.  Additionally, it wasn’t even accurate, as farm league stats were simply another set of fields tacked onto the end of the same table.

This might have made sense in the context of the format from which the stats were imported in 2006 but it was wrong for how things were used in 2019.

So I started to fix it.

Leagues have seasons.  Seasons have schedules.  Schedules are made up of games and have a game type.  Leagues have franchises.  Each season, a franchise fields a team.  A farm team has a parent team.  Now stat_player has a player_id, a team_id, and a schedule_id.  From that, you can find which league the stat record is for, which franchise it’s for, and whether it’s from the farm league.
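In MySQL terms, the new keying looks something like this — a minimal sketch in which the column names beyond those mentioned above, and all of the types, are my own assumptions:

```sql
-- Each schedule row resolves to a season (and, through it, a league)
-- and a game type, so stat_player no longer carries those keys itself.
CREATE TABLE schedule (
    schedule_id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    season_id    INT UNSIGNED NOT NULL,
    game_type_id INT UNSIGNED NOT NULL
);

CREATE TABLE stat_player (
    stat_player_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    player_id      INT UNSIGNED NOT NULL,
    team_id        INT UNSIGNED NOT NULL,
    schedule_id    INT UNSIGNED NOT NULL,
    toi            INT UNSIGNED NOT NULL, -- time on ice, e.g. in seconds
    UNIQUE KEY (player_id, team_id, schedule_id)
);
```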

But then I didn’t stop there.

Games had a home team and an away team, with scores for each.  They also had a set of power play stats for each team.  All of that could be combined into a game_stat_team table, with a flag denoting one team as the home team.  Goalie stats and skater stats for each game had their own tables but they were merged to become game_stat_player, with position_id denoting whether they’re a goalie or not.
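Sketched out the same way (again, the column names beyond those in the text and all of the types are illustrative assumptions), the merged tables might look like this:

```sql
-- One row per team per game; the home side is just a flag.
CREATE TABLE game_stat_team (
    game_stat_team_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    game_id           INT UNSIGNED NOT NULL,
    team_id           INT UNSIGNED NOT NULL,
    is_home           TINYINT(1) NOT NULL,
    score             INT UNSIGNED NOT NULL,
    pp_goals          INT UNSIGNED NOT NULL,
    pp_opportunities  INT UNSIGNED NOT NULL
);

-- Skaters and goalies share one table; position_id tells them apart.
CREATE TABLE game_stat_player (
    game_stat_player_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    game_id             INT UNSIGNED NOT NULL,
    player_id           INT UNSIGNED NOT NULL,
    position_id         INT UNSIGNED NOT NULL
    -- plus goal/assist columns for skaters, save/GA columns for goalies
);
```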

This continued until I had a database structure that I was happy with.

Now I just had to update all of my code to match the new database layout.

That would be no small thing regardless of the state of the codebase coming into this project.  I’d just redefined or added the concepts of seasons, schedules, franchises, and teams.  I had to build new tooling around that.

On top of that, like the database structure, the site code was based around importing historical data.  Much of the code was historical itself, having been originally written in 2006, with new features bolted on as necessary since then.  As such, the database redesign led to a major code refactor project.

All started by a request for two new stats fields, for data the system already had available.

So what have I learned from this?

I think it shows that I’d gotten complacent.  Both the database and the code connecting to it should have been refactored ages ago but, because they worked just fine, I didn’t touch them.  FHS is a personal project, so I can justify this by saying that I didn’t have the time available to go back and do it right, but I think that would just be an excuse.

That said, this project shouldn’t have snowballed in the way that it did.  I should have broken it down into smaller chunks.  I never would have allowed this in a project at the office, but because it was for one of my own sites, I let it get out of control.

Finally, in an effort to force myself to get the project done, I rolled in several unrelated features and launched them, so that I wouldn’t have the option of continuing on with normal operations of FHS using the old codebase and schema.

By not breaking it down into smaller pieces, I ended up burning out a bit on the project.  It took so long to get done and involved so much downtime that I was desperate to work on anything else.

Now that it’s done, I’m relieved to have the site back up and running and that off my plate.  I’m also glad to have the opportunity to look back, see what I did wrong, and work to do better next time around.

Griffins Jersey Contest – 90s Edition

The number of times I have “quit” the Grand Rapids Griffins’ annual jersey design contest is comical at this point.  Last year I even publicly announced that I was done.  Then they went and made this year’s edition 90s-themed and I was drawn right back in.

Two years ago the Griffins requested an 80s-themed “fauxback” jersey as part of their contest.  I loved it because it raised the question of what makes an 80s-themed jersey.

As I wrote at the time, there were many jersey design elements that became more prevalent in the 1980s, but it was really color that defined the decade.  The Griffins were dictating the colors for their contest, though, so I was really curious what they would deem “80s enough” to win.  In the end, and perhaps unsurprisingly, I strongly disagreed with their choice, which left me feeling like the question of what makes an 80s jersey was unanswered.

This year, with the challenge being to create a 90s-themed jersey, I believe there are much stronger trends to work with.

While the 1980s saw only two NHL teams relocate and only one go through a major rebranding, the 1990s saw seven expansion teams, three relocations resulting in new team identities, and a whole slew of redesigns.  Many of them used the same design elements and most of those changes have since been reverted, resulting in a set of looks that are uniquely 90s.

As such, when the Griffins announced the theme of this year’s contest, for me it wasn’t so much a matter of figuring out what they were asking for as it was figuring out how to make all of those design options work together.  An idea popped into my head almost fully-formed.  As such, despite my “retirement,” I was drawn back in.

As an aside…  I am writing this on August 2 for publication after the design contest is over.  I usually spend the entire design period tweaking, then write up my design thoughts and publish them along with my submission right at the deadline.  This year, as part of an attempt to put less effort in, I’m submitting my design on the second day of the contest and writing this up for publication after voting is over.

My completed submission for the Grand Rapids Griffins 90s Jersey design contest.

There are six distinctly 90s elements to this jersey.  I’ll note who used them in the NHL to show just how prevalent they were (I’m choosing the NHL because their identities are generally more stable and documented than minor leagues).

“Winged” Shoulders

From their inaugural season to the AHL’s league-wide redesign in 2009, the Griffins’ jerseys featured a shoulder design that was meant to represent the wings of their mascot.  I’ve brought them back here.  It’s something that’s unique to the team from the era in question.

By using this shoulder pattern, I was unable to take advantage of another trend of the 1990s: alternate logo shoulder patches.  They just don’t work on that background.

Diagonal Sleeve Stripes

Diagonal sleeve stripes were used in the NHL prior to the 1990s (specifically, by the Pittsburgh Penguins, Hartford Whalers, and Vancouver Canucks) but in the 90s there were nine teams that introduced them or brought them back, bucking the trend of standard straight stripes.  The aforementioned Penguins, Whalers, and Canucks all used them.  The expansion Mighty Ducks of Anaheim and Florida Panthers and the relocated Phoenix Coyotes did as well.  Redesigns or third jerseys for the Calgary Flames, New York Rangers, St. Louis Blues, and Washington Capitals all featured diagonal sleeve stripes.

Angled Hem Stripes

Most – but not all – teams who introduced diagonal sleeve stripes also paired them with a nonstandard angled set of stripes at the hem.  Anaheim, St. Louis, and Washington all went with an asymmetrical version of this element while Calgary and Pittsburgh chose to make a “V” shape of their stripes.  Additionally, the Colorado Avalanche featured a mountain-like design along their hem.

I really dislike the asymmetrical look – even if it’s iconic – and the gap in Pittsburgh’s design, so my concept uses something similar to what Calgary’s 1998 third jersey had.

Arched Nameplate

The vertically-arched nameplate was introduced to the NHL by the Detroit Red Wings in 1982.  In 1990 it was copied by the Rangers.  The Avalanche used it when they relocated in 1995 and the Panthers switched to it in 1998.

While this is hardly a widespread design element from the 90s, the fact that it quadrupled in use over the decade and that the Griffins are the farm team of the originators makes me comfortable including it in the design.

Rounded/Custom Numbers

In 1967, the Penguins made their debut wearing rounded numbers, dropping them after a single season.  The Rangers broke from tradition in 1976, switching to a completely different jersey that included rounded numbers, which only lasted until 1978.  The Red Wings switched to “fancy” numbers for the 1982-83 campaign before immediately switching back.  For the first 75 years of the NHL, those were the only times a team didn’t wear some form of block number on their jerseys.

Then the Tampa Bay Lightning came along.  After spending their inaugural campaign in a standard – though drop-shadowed – block font, in 1993 they italicized their numbers.  That same year, the NHL’s All-Star Game featured jerseys with rounded numbers rather than block.  One season later, the Flames had italicized numbers while the Lightning had moved on to a custom “paintbrush”-like font.  One more season later, seven teams had at least one jersey that didn’t use a block font.

By the summer of 1999, 14 teams were using a non-block font on at least one of their sweaters.

The Griffins debuted with a custom number (and name) font and I was highly tempted to go back to it.  In the end I chose to stick with a slightly-more-generic rounded font, similar to those used by Calgary, Nashville, Phoenix, the Carolina Hurricanes, and the San Jose Sharks.

Tampa Bay’s “paintbrush” numbers might be more memorable than any of them but I just didn’t think they fit the design.  My goal (whether it’s the request of the Griffins or not) is to make the most-90s jersey that still looks good, not just cram as much 90s stuff into a jersey as possible.

Angry Mascot Logo

My 90s-style Grand Rapids Griffins logo.

The last element of the jersey is the angry mascot logo.  The San Jose Sharks debuted represented by a shark biting a hockey stick in half.  Florida’s first logo featured a panther pouncing forward.  The New York Islanders rebranded around a fisherman holding a hockey stick, staring angrily.

In the minor leagues the trend of “fierce” logos was even more visible.  Between the AHL and the IHL, no fewer than 13 teams introduced branding – of varying quality and rendering – featuring some combination of a snarling animal or fearsome creature between 1990 and 1999, including the Albany River Rats (1993), Carolina Monarchs (1995), Chicago Wolves (1994), Cincinnati Cyclones (1992 and 1993), Denver Grizzlies (1994), Hamilton Bulldogs (1996), Hartford Wolf-Pack (1997), Indianapolis Ice (1996), Kentucky Thoroughblades (1996), Lowell Lock Monsters (1998), Beast of New Haven (1996), Saint John Flames (1998), and Syracuse Crunch (1994).

I have to admit, I don’t like this trend.  The Griffins’ original logo does not meet my criteria for “fierce” and I find it to be a better logo than either their current mark or the one I’ve created here.  But to stick with the trend, I’ve swapped majesty for musculature.  That said, I do feel like, in doing so, it’s moved out of 90s and become a little more modern.

I submitted this jersey in red because I think it looks best.  That said, another perceived trend of the 1990s was “black for black’s sake,” seeing teams add black as a team color and introducing black jerseys simply because it was a color that sold well at the time.  With that in mind, I created a full jersey set that includes a black alternate.

My full jersey set for the Grand Rapids Griffins 90s Jersey contest

The funny thing (though I suppose how funny it is depends on how far this design goes in the contest) is that I’m not sure I actually even like this design.  It’s easily my least favorite submission to the Griffins’ contest over the years.  Given what they’re asking for, though, I think it nails it.

I called “black for black’s sake” a perceived trend because it simply didn’t happen in the NHL the way that it’s portrayed at times.  There are other elements that I ignored because, while everyone “knows” 90s design was all about them, it turns out the facts don’t match up with that.

The Calgary Flames added black as a trim color in 1995 with a black alternate jersey in 1998.  The Washington Capitals included black as a trim color in their 1995 redesign and added a black alternate in 1997.  The Philadelphia Flyers already had black trim and their 1997 alternate was black.  That’s three of 28 teams.

Teal?  That was just the San Jose Sharks, with the short-lived New York Islanders “fisherman” jerseys using it as an accent.

Similarly, there’s Tampa Bay’s “paintbrush” crazy numbers.  Yeah, the Lightning used them for six seasons.  And the Mighty Ducks of Anaheim had a one-season alternate with a crazy number font.  But that’s it.  It didn’t take the league by storm.  It was really just one team and a quickly-abandoned alternate for another.

Finally, there’s asymmetrical striping.  I called it “iconic” above but it was just Anaheim and St. Louis who used it.  Two teams.  More teams used symmetrical angled stripes, but those don’t stick in our collective memories.  The Islanders’ fisherman jerseys were technically asymmetrical but not in the way we typically think of.

We’ll see how many of these perceived trends end up being applied to designs in the contest.
