A while back I wrote about the problem with code samples. The basic idea is that code samples are nothing without context. I’d been throwing zip files full of code onto this site but not explaining why I wrote what I did. Later, I started blogging about code and realized those entries were my true code samples. As such, they’re all collected in this category.
A big part of that is that I haven’t been doing anything new. I did a ton of Trello-related work in about a one-year period and then was done. But I still use it every day and recently found a gap that I decided to close.
My personal Trello board has three columns: “Not Started,” “In Progress,” and “Done.” Pretty simple.
The “Done” column is the one I decided needed some help.
The “Done” column is important to me. It’d be easy to not have one, simply archiving cards as they were completed. I think it’s easy to get lost in a never-ending “Not Started” list without seeing the things that have been completed. Instead, I have a process that runs nightly to archive cards over two months old, after they’ve had time to pass out of my mind.
I’ve been finding of late, though, that it’s not working for me. I see the “Done” column and I know there’s a lot of stuff in it, but that’s all it’s become to me: “a lot of stuff” that’s done and easy to ignore.
So I decided to add another automated process. This one runs weekly and sends me a message via Slack, detailing how many cards I’ve completed in the last week.
I’m not doing this for analytics or anything. I don’t have any goals to accomplish a certain number of tasks (they’re not estimated or anything so they’re all different sizes, anyway). I just want to get another reminder in front of my face. This one is a little disruptive, so maybe it’ll stick differently.
For the record, the following is the code for this new process (with a little obfuscation):
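The real thing is tied to my setup, so what follows is a sketch of its shape. The list ID, key/token, and webhook path are placeholders, and I’m using a card’s last-activity date as a rough stand-in for when it was completed:

```php
<?php
// Count the cards whose last activity falls within the past week.
function count_recent_cards($cards, $now) {
    $count = 0;
    foreach ($cards as $card) {
        if (strtotime($card['dateLastActivity']) >= $now - (7 * 24 * 60 * 60)) {
            $count++;
        }
    }
    return $count;
}

// Send the digest to Slack via an incoming webhook (URL obscured).
function post_digest($count) {
    $ch = curl_init('https://hooks.slack.com/services/<webhook_path>');
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode(array(
        'text' => 'You completed ' . $count . ' cards in the last week.',
    )));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    curl_close($ch);
}

// Weekly cron entry point: pull the cards on the "Done" list, count the
// recent ones, and send the message.
// $cards = json_decode(file_get_contents('https://api.trello.com/1/lists/<list_id>/cards?key=<key>&token=<token>'), true);
// post_digest(count_recent_cards($cards, time()));
```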
FHS was originally a part of DetroitHockey.Net, which at that time used Invision Power Board as a forum system. When I spun FHS off into its own site, I migrated user management and messaging to a system built around a Slack team. That served us well for several years but it became apparent that it had flaws, one of which being difficulty scaling up. As such, I made the decision to bring Invision Power Board back in, running it alongside the Slack team.
IPB gave me user management out of the box, so I removed much of my custom-built functionality around that.
I was able to add our Slack team as an OAuth2-based login method for IPB, so existing members could continue logging in with their Slack credentials and all of the other Slack integrations in the site could key off of that information.
To allow users to log in and out using the forum functionality but return to a non-forum page from which they came, two changes to IPB code were required in their login class, /community/applications/core/modules/front/system/login.php.
On lines 59-61, we force their referrer check to accept external URLs.
Then at line 259, we check whether or not a referrer was defined and, if so, use it.
To determine whether or not a member is logged in on non-forum areas of FHS, we tap into IPB’s session management and use it to populate our own. It’s a little messy, especially as IPB’s code is not meant to be included in a non-IPB codebase. We end up having to do some cleanup but it looks something like this:
First we pull in the session details.
If we need to, we can get the member ID.
Then we revert as many of the changes IPB made as possible.
That gives us an IPB installation acting as single-sign on for the site.
Other points of integration are driven primarily by the IPB API. On league creation, a forum and calendar are created for the league, with the IDs of each stored in the league config so that messages can automatically be posted there later.
I also added tooling that allows for cross-posting between the forums and Slack. As the expectation is that some leagues will continue to use Slack while most use the forums, the idea is for no messages to get lost between the two. This might lead to over-communication but I would rather see that than the opposite.
I’ve been toying around with Slackbots a bit lately. Back in January I wrote about publishing a Git log to Slack via one. They’re dirt simple to implement and can be really useful.
They can also be fun. I’ve added a couple bots in our office Slack just for posting randomness, running as cron jobs. I figured I’d take a look at them here.
A couple weeks ago one of my coworkers posted a link to the documentation for a seemingly-random PHP function in our dev Slack channel. It wasn’t random, it just pertained to an offline conversation, but we still made some cracks about how he was doing it to spread awareness of the function.
At nearly the same time, he and I both said, “That sounds like a great idea for a Slackbot.” So I made it, and it looks like this:
As you can see, it’s a little more advanced than just posting a link to a PHP.net URL. If PHP.net had metadata on its pages that allowed for a nice-looking Slack unfurl, I might have just done that. Instead I decided to pull content from the page we’re linking to and make the Slack post a little prettier with it.
So how does that all work?
Simple thing here. Get the defined PHP functions and then start a do…while loop. By using get_defined_functions() I’m limiting what our PHPBot can link to but I figured there are enough functions available there that it doesn’t really matter.
We jump into our do…while loop, where we grab a random function from our array of functions and build what should be a PHP.net documentation URL from the name. For some reason, not every function has documentation (or at least not that matches this template), which is why we’re in a do…while loop.
We get the headers for that URL and if no shorturl is defined there, we know we don’t have a valid function documentation page. In that case, we sleep (I like to sleep in situations like this) and unset the URL, then we’ll take another run at the do…while loop.
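In sketch form, the loop described above looks something like this. The documentation URL template and the shorturl header check are my reading of how PHP.net lays things out, so treat those as assumptions:

```php
<?php
// PHP.net doc URLs use hyphens where function names use underscores.
function doc_url($function) {
    return 'https://www.php.net/manual/en/function.' . str_replace('_', '-', $function) . '.php';
}

// Keep picking random internal functions until one has a real doc page,
// which we detect by the shorturl header PHP.net sends on valid pages.
function pick_documented_function() {
    $functions = get_defined_functions()['internal'];

    do {
        $function = $functions[array_rand($functions)];
        $url = doc_url($function);

        $headers = get_headers($url, 1);
        if (empty($headers['shorturl'])) {
            sleep(2);    // be polite before taking another run at the loop
            unset($url);
        }
    } while (empty($url));

    return array($function, $url);
}
```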
If we do have a shorturl, we get into the heavy lifting.
We initialize a couple arrays that we’ll use for building our Slack message, then we pull in the HTML from the documentation page and strip out line breaks and any spaces that might start a line. We do that stripping so that the text we use in our message later on looks better.
Then we load that HTML into a DOMDocument and fire up XPath for it so we can target the elements we want a little easier.
We grab the element of the page that contains all of the information about the function by targeting the refentry class.
Inside that element is a div with class refnamediv, which contains an H1 with the function name, a paragraph classed as verinfo with information about what versions of PHP support the function, and a paragraph defining the function classed as refpurpose. We grab each of these as we’ll use them in our message.
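To make that targeting concrete, here’s a cut-down version run against a skeleton of a PHP.net page rather than the live site; the markup is a simplified stand-in but the class names match what I described:

```php
<?php
// A skeleton of a PHP.net function page stands in for the real thing here
// so that this is self-contained.
$html = '<div class="refentry">'
      . '<div class="refnamediv"><h1 class="refname">str_replace</h1>'
      . '<p class="verinfo">(PHP 4, PHP 5, PHP 7)</p>'
      . '<p class="refpurpose">Replace all occurrences of the search string</p></div>'
      . '</div>';

// Load the HTML and fire up XPath so we can target elements easily.
$dom = new DOMDocument();
@$dom->loadHTML($html);
$xpath = new DOMXPath($dom);

// Grab the container that holds everything about the function.
$refentry = $xpath->query('//div[contains(@class, "refentry")]')->item(0);

// Pull the name, version info, and purpose out of the refnamediv.
$name    = $xpath->query('.//div[contains(@class, "refnamediv")]//h1', $refentry)->item(0)->textContent;
$verinfo = $xpath->query('.//p[contains(@class, "verinfo")]', $refentry)->item(0)->textContent;
$purpose = $xpath->query('.//p[contains(@class, "refpurpose")]', $refentry)->item(0)->textContent;
```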
If – for some reason – we didn’t get a function name, we sleep (like I said, I like to sleep in these situations) and then head back to the start of our do…while loop.
Having advanced this far, we target the “Description” section of the function documentation page, which has a class of refsect1 description.
Inside that div is an H3 that serves as the section title, a div that contains an example use of the function, a paragraph with a description, and an optional blockquote with notes about the function. We target each of these and pull that content in, then we use them to build the text of our message.
It should be noted that we use markup to make sure that the example is displayed as a code block while the note(s) (if present) are displayed as a quote, mimicking their appearance on PHP.net.
With all of the information we need acquired, we build our message.
The description section title and our formatted text from it are added as a Slack attachment field. That attachment gets the function name as a title (linked to the documentation URL) and its text is the version and purpose text we grabbed earlier.
We post this all as user “PHPBot” with a hosted avatar. These two steps aren’t necessary as you can define your incoming webhook’s name and avatar, but we’re reusing a webhook for multiple purposes and, as such, define these for each.
Then we hit the end of our do…while loop. We’ve assembled our message so we can move on.
Lastly, we actually send that message. It’s just a cURL post to the Slack webhook URL (obscured here) and ends up posting a message that looks like this:
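Roughly, the assembly and send look like this. The attachment shape follows Slack’s legacy attachments format; the avatar URL and webhook path are placeholders:

```php
<?php
// Assemble the payload: function name as a linked title, version and
// purpose as the attachment body, the description as a field.
function build_phpbot_payload($name, $url, $verinfo, $purpose, $description) {
    return array(
        'username' => 'PHPBot',
        'icon_url' => 'https://example.com/phpbot.png', // hypothetical hosted avatar
        'attachments' => array(
            array(
                'title'      => $name,
                'title_link' => $url,
                'text'       => $verinfo . "\n" . $purpose,
                'fields'     => array(
                    array('title' => 'Description', 'value' => $description),
                ),
                'mrkdwn_in'  => array('text', 'fields'),
            ),
        ),
    );
}

// Fire-and-forget POST to the (obscured) incoming webhook URL.
function post_to_slack($payload) {
    $ch = curl_init('https://hooks.slack.com/services/<webhook_path>');
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    curl_close($ch);
}
```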
As I said, there’s nothing too complex here. If anything, this is probably over-complicated. But it gives us something to talk about at 2:05 every day.
Last week another coworker posted to our developer channel asking that we all send a random photo of a Yak to one of our non-dev coworkers. My immediate thought was that this was begging to be made into a Slackbot, and thus the YakBot was created.
We bring in the Google search API for this one in addition to the Slack API but overall it’s a bit simpler because there’s less of a message to build.
The first thing we do is define our Google search API key and the ID of the custom search engine that we’re using. Then we build an array of possible recipients of our yak. The recipients include our developer channel as well as individual user IDs, to whom the message would be sent in the form of a direct message from Slackbot.
Next we select a random number between 1 and 91. This is because the Google search API won’t let you request a start point higher than 91 for some reason. The search results return ten items, which means the most results you can get is 100. We only need one, and 1/91 is close enough to 1/100, so I do the randomizing up front and then do the lookup and take the first response.
Here we actually do that lookup. We get JSON back and decode it to pull out the first item listed.
Once we have our image URL, we build our message, which is relatively simple compared to the PHPBot message because all we’re doing is posting an attachment with an image. As I mentioned above, we also define a custom username and avatar, though we don’t need to.
Finally, we wrap it up by determining who to send our yak to. We loop through each of our channels and generate a random number between zero and nine. If the number is zero, we add that channel to the list of recipients.
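That selection logic is simple enough to sketch directly; the channel and user IDs here are made up:

```php
<?php
// Each potential recipient gets a one-in-ten shot at receiving a yak.
function pick_recipients($channels) {
    $recipients = array();
    foreach ($channels as $channel) {
        if (mt_rand(0, 9) === 0) {
            $recipients[] = $channel;
        }
    }
    return $recipients;
}

// Hypothetical channel and user IDs.
$recipients = pick_recipients(array('#developers', '@U024ABCDE', '@U024FGHIJ'));
```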
We do that because it allows for some randomness. It’d get noisy if we were posting yaks too often or too regularly.
Then we loop through the list of recipients and send the message, defining the channel along the way. The lucky winners end up getting something like this:
At my day job our codebase is kept in a handful of self-hosted Git repositories. We have a tool that runs nightly, emailing out a digest of all of the previous day’s commits.
It’s kinda cool but I have the tendency to ignore it as a wall of text. I prefer more granular messaging and since we’re also using Slack, I saw an opportunity to do something with a post-receive hook to get a message as changes came in.
Posting from Git to Slack isn’t revolutionary. There are tons of solutions for this out there. In fact, my original attempt just used a modified version of Chris Eldredge’s shell script, which I grabbed off of GitHub. However, my Bash-fu is weak and we’re a PHP shop so I decided to write a solution based in PHP (though heavily based on Eldredge’s as I had that code in front of me).
To fire off the PHP script, the post-receive hook looks like this:
That’s simplified a bit as the actual hooks use an absolute path to the script but you see that the script accepts the oldrev, newrev, and refname arguments.
As for the PHP script itself, it looks a bit like this (I’ve sanitized some things to remove references to our internal services).
We get details about what’s being pushed and build a message out of all of that. Simple enough. So let’s break that down a little bit.
If the old revision is empty, it means we’re creating something. If the new revision is empty, it means we’re deleting something. Otherwise it’s an update of something that already existed and continues to do so.
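As a sketch of that check (git hands a post-receive hook an all-zeros revision for the side of the push that doesn’t exist, which is what “empty” means here):

```php
<?php
// An all-zeros (or empty) revision marks one side of the push as nonexistent.
function change_type($oldrev, $newrev) {
    if (preg_match('/^0*$/', $oldrev)) {
        return 'create';   // nothing existed before this push
    }
    if (preg_match('/^0*$/', $newrev)) {
        return 'delete';   // nothing exists after this push
    }
    return 'update';       // it existed and continues to do so
}
```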
Whatever the change type, we get more information about the old and new revisions by using the backtick operator to run the git cat-file -t command for each revision number.
If the change type is a create or an update, we’ll use the new revision data to reference things going forward. If it’s a delete we’ll use the old revision data.
This is just a bunch of logic that looks at the refname and the revision type and determines exactly what you’ve pushed. If we can’t figure out what it is, we exit with an error.
We determine the repo name based on the path the hook is running from and we get the user it’s running as so we know who did the push we’re about to notify people of.
Now we start building the message that will be posted to Slack. The message begins in the form of “[reponame/branchname]”. If it’s a create or delete, we then note what was created or deleted. If it’s a commit, we note the number of commits and who they were pushed by.
We’re going to start building a series of messages (what Slack calls “attachments”) detailing items from the Git log pertinent to this push. We use the git log command and define our format. We get the author name with %an, the hash with %h, the commit message with %s and the commit body with %b. Those fields are all separated by five ampersands, with each commit separated by five at signs. We use those goofy separators so we can split on them later, as it’s unlikely anyone enters those in text.
Here we actually build our message. The text property of a Slack attachment can be markdown, so we pretty it up a little bit. The fallback property is plaintext so it doesn’t get that formatting. The result is the hash, then the commit message, then the author. If there is a longer commit message, it gets added after that.
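A sketch of the split-and-build, with the exact message formatting being a guess at what I described above:

```php
<?php
// Split the git log output on the goofy separators: five at signs between
// commits, five ampersands between the fields within a commit.
function parse_git_log($log) {
    $commits = array();
    foreach (explode('@@@@@', trim($log)) as $entry) {
        $entry = trim($entry);
        if ($entry === '') {
            continue;
        }
        list($author, $hash, $subject, $body) = explode('&&&&&', $entry);
        $commits[] = array(
            'author'  => $author,
            'hash'    => $hash,
            'subject' => $subject,
            'body'    => trim($body),
        );
    }
    return $commits;
}

// Build the attachment text for one commit: hash, message, author, with
// the longer body (if any) added after.
function commit_text($commit) {
    $text = '`' . $commit['hash'] . '`: ' . $commit['subject'] . ' - ' . $commit['author'];
    if ($commit['body'] !== '') {
        $text .= "\n" . $commit['body'];
    }
    return $text;
}
```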
We’re not done with that text yet, though. We have a loosely-followed naming convention for our commits and we can use that to link back to other systems that might have more information about the commit. Anything that was a Jira task should start with the task number in the format “[ABC-1234]” but sometimes it’s “(ABC-1234)” or “[ABC] (1234)” or “[ABC](1234)” so we account for all of those. Similarly, references to our ticket system sometimes use “TICKET” or “BUG” or “SUPPORT” and sometimes have a space or a dash and sometimes use “HOTFIX” and… You get the idea. There are probably better regular expressions to use here but these work. So we find references to our Jira cards and link back to them, then find references to our support system and link back to it, where there’s a script that will do some additional parsing to figure out where to go.
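As a hedged example, one forgiving pattern covering the Jira variants might look like this, where “ABC” stands in for the project key, the Jira URL is a placeholder, and Slack links use its <url|text> syntax; the real script uses separate (and probably uglier) expressions:

```php
<?php
// One pattern covering "[ABC-1234]", "(ABC-1234)", "[ABC] (1234)", and
// "[ABC](1234)", turning each into a Slack-formatted link to Jira.
function link_jira_references($text) {
    return preg_replace(
        '/[\[\(]ABC[-\]\s]*[\(\s]*(\d+)[\)\]]+/',
        '<https://jira.example.com/browse/ABC-$1|[ABC-$1]>',
        $text
    );
}
```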
With all of that done, we build an array for this attachment, defining our text, our fallback text, a color to display alongside the attachment, and confirming that there is markdown in our text field.
Once we’re done looping through our data from the git log and building our attachments, we put together the message and send it off via cURL. The message is just an array, which then gets JSON-encoded and posted to our webhook URL.
It’s fire-and-forget, so we don’t make any note if the webhook doesn’t respond or anything like that.
For a short time we had an additional message attachment that included diff data but we decided we didn’t want our code getting posted to Slack so we removed it.
As I said, there are tons of solutions for this out there, this is just one more.
So I’m a bit particular about how I keep my finances in order. To the point that I wrote my own web-based ledger software to help myself keep it all straight. Yeah, there are third-party solutions out there, but I wanted something that worked exactly the way I wanted.
Every couple days I pull receipts out of my pocket and go through my email inbox and drop new entries in my ledger. Every couple weeks I manually reconcile the ledger with my banking statements. Manual processes – ew – but it’s important enough to me that I do it.
The issue I ran into was transactions that I didn’t have a receipt for. Tim Horton’s drive-thru or gas station pumps with a broken receipt printer or the Flint Firebirds’ souvenir shop using SquareCash. No receipt means no ledger entry means confusion when I go to reconcile.
I could have made a mobile version of the ledger entry form but I really didn’t want to. As such I decided I could fix the issue by keeping faux-receipts electronically in Trello. A single list. Card title is the place I spent the money, description is the amount, a label to represent the account. Once there’s a ledger entry, archive the card.
And that would have been enough. I decided to take it a step further and automate things.
I use a webhook subscribed to that single list to look for new cards, get the data from them, automatically add the record to my ledger, then archive the card. I’m essentially using Trello’s app as an interface for my own so that I don’t have to make a mobile interface.
It’s a bit hacky but I figured I’d throw some of the code out here since I feel like there’s not a lot of Trello webhook documentation out there. Unlike my usual, I’m going to redact some of the code as it deals with my financial system and I’d prefer not to put that out there.
As I said, I’ve replaced some of the actual code with obscured-away pseudo-functions. I’ll point out those spots as necessary while going through this piece-by-piece.
Right off the bat we pull in my Trello API wrapper class (which really could use some love, maybe I’ll get to that sooner or later) and instantiate an object that we’ll use later. Then we pull in the data Trello posted to us via the input stream.
Webhooks subscribed to a list don’t give us all the details of cards on the list, so if we detect a card creation and we’re certain we’re getting data from the webhook attached to the list, we add a webhook to the new card. This is done with a POST to /1/webhook passing in a description (which is optional and doesn’t really matter), callbackURL (same as the URL of this script, though obscured here), and idModel (the ID of the card). For future reference, we then post the ID of the newly-created webhook to a comment on the card via POST to /1/card/<card_id>/actions/comments with text set as needed.
The rest of the code only fires if we’re receiving from the new webhook attached to the card, and if it’s triggered by either a card creation (which should never happen since the webhook won’t have been created yet), card update (which happens when I set the card’s description), or label addition (which is how I define what account the transaction is for). We only care about those actions because they match the ones I take on the front-end.
From there, we use attributes of the card to build the transaction. The transaction date is the date the action takes place. The label is the name of the card. The amount comes from the card description. The category is determined based on the label. The account_id is determined based on the first label selected. A single-item, multidimensional array (the actual interface handles more complex data) is assembled from this data.
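A sketch of that assembly, with a hypothetical lookup_category() standing in for the redacted ledger calls and an assumed $account_map from Trello label name to ledger account ID:

```php
<?php
// Hypothetical stand-in for the redacted category lookup in my ledger.
function lookup_category($label) {
    return 'Uncategorized';
}

// Map the card's attributes to a single-item, multidimensional
// transaction array.
function build_transaction($card, $action_date, $account_map) {
    $first_label = isset($card['labels'][0]['name']) ? $card['labels'][0]['name'] : '';

    return array(
        array(
            'date'       => $action_date,                   // when the action took place
            'label'      => $card['name'],                  // card title is the payee
            'amount'     => $card['desc'],                  // card description is the amount
            'category'   => lookup_category($card['name']),
            'account_id' => isset($account_map[$first_label]) ? $account_map[$first_label] : null,
        ),
    );
}
```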
If we got an account ID and a category and the amount is numeric, we know we got good data and we’re ready to try to add the record. First, though, we make a GET call to /1/cards/<card_id>/actions with filter set to commentCard and fields set to data, so that we can get all of the comments posted to the card. We loop through them until we find the one where we stored the card webhook ID and then we save that ID off. There’s probably a better way to do this.
If we found a webhook ID (and we always should), we delete the webhook, archive the card, and create the ledger entry. The first is accomplished with a DELETE request to /1/webhook/<webhook_id>. The second is a PUT to /1/card/<card_id>/closed with value set to true.
If we didn’t get a webhook ID, something is seriously broken so we log that finding to the card as a comment via a POST to /1/card/<card_id>/actions/comments with text set to “No webhook found.”
Lastly, if we don’t have all of the things we need to make a ledger entry we log that to the card as a comment. Again, that’s done with a POST to /1/card/<card_id>/actions/comments.
As I said, it’s hacky. It gave me a chance to play with Trello webhooks a bit, though, and was a lot of fun. As I use it more, we’ll see what I did wrong.
One of the first things I wrote about when I started this blog was my workaround solution for exporting from TechSmith Snagit to Amazon S3. That worked okay for Windows but I’ve started working on Mac significantly more of late and I missed that functionality. As such, I took another look at options for this since TechSmith itself still hasn’t developed a Snagit to S3 output for either Windows or Mac.
I feel like exporting on Mac shouldn’t be a problem. There’s no S3 Browser available but you could replace it with s3cmd and do the same thing. The catch: There’s no Program Output option in Snagit Mac. That’s right, on Windows you can essentially make your own outputs but on Mac you’re out of luck.
I came up with a workaround, though. It’s not pretty but it works. It also works on Windows, but with better options available I’m not sure there’s a reason to use it.
I use ExpanDrive to map my S3 buckets as a local drive. Then I can save from Snagit straight to the location I want in S3. That part’s great. It’s pretty much seamless. ExpanDrive is a really awesome tool. Probably too expensive if all you’re using it for is Snagit exporting, but worth taking a look at if you’re working with S3 in other ways.
The problem is you don’t get the uploaded URL out of this. That’s where it gets hacky.
I wrote a Chrome extension that gets me a list of the last five files uploaded to this particular S3 bucket. So after saving my file, I have to go to my browser to get its URL. Extra steps. The bonus is that I can get the URL any time later.
Since the ExpanDrive part of it works out of the box, here’s the breakdown of my Chrome extension.
I start with a script on the server side that uses the AWSSDKforPHP2 to read in the files from my filebox, sort by date, and grab the five most recent. Those five are then spit out as JSON.
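The actual listing goes through the AWS SDK, but the sort-and-slice part can be sketched on its own; the record shape here mirrors the Key/LastModified fields the SDK hands back, and the bucket URL is a placeholder:

```php
<?php
// Sort S3 object records newest-first and keep the five most recent.
function latest_files($objects, $base_url) {
    usort($objects, function ($a, $b) {
        return strtotime($b['LastModified']) - strtotime($a['LastModified']);
    });

    $latest = array();
    foreach (array_slice($objects, 0, 5) as $object) {
        $latest[] = array(
            'name' => $object['Key'],
            'url'  => $base_url . $object['Key'],
        );
    }

    return $latest;
}

// The extension consumes this as JSON.
// echo json_encode(latest_files($objects, 'https://s3.amazonaws.com/<bucket>/'));
```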
The important things here are the UL element with the ID of content and the inclusion of main.js. The UL will be targeted by our JS for dynamically adding elements.
The request_data function wraps a call to the PHP script noted above. When that data loads, we call populate_list.
The first thing we do in populate_list is parse the text we got from the PHP script into an actual JSON object. Then we remove any list items we may have in our previously-mentioned UL. We loop through each of the items in our JSON object and create new elements for them. Each item gets an LI with an A inside it. The A has an HREF of the item’s URL and the TARGET is set to _blank so it opens in a new window. Additionally, we use the copy_to_clipboard method that I grabbed from someone’s GitHub to save that URL to the clipboard, setting it as an onclick event for the A tag.
I’m certain that this could be cleaned up and made more configurable and turned into a publicly-available extension but I’m not going to bother with it. I figured I’d put this out and hope that it helps someone.
I will say that one idea I’m intrigued by is replacing the PHP script with an AWS Lambda function that triggers any time the S3 bucket is updated. I’m not entirely certain how that would work but it seems possible.
I Tweeted on Friday about futzing with Google Hangouts, something I hadn’t had to deal with in years. The link I sent out went to a blog post about lessons learned from that experience but I realized that I never bothered to actually write up the code I wrote in that project and figured it might help some people.
The issue I was trying to solve was that I was working on a team that was trying to incorporate a remote employee and a handful of people that sometimes worked from home. In the office, the team was split across two spaces that were next to each other. Each space was given a TV, webcam, and microphone. A problem immediately became apparent when the two local stations started causing feedback between each other. Someone solved this by turning off the speakers and microphone on one of the stations, which left that room unable to easily communicate with people in the hangout.
To solve this, I took advantage of the fact that the two stations were logged in as the same user and wrote a Hangouts app to run on those machines. The app looks for another person logged in as the same user and mutes them, eliminating feedback. It also blocks that station from appearing on video if another user is available to be seen.
As per my usual, I’ll start with the big block of code, then break it down in small chunks.
We start by pulling in the Hangouts API JS from Google, then we open our own script block. The first thing we do is define my_id as 0 and my_hangout_id as 0. We’ll store the user’s ID and the Hangout ID (which is the user/machine’s unique connection to a Hangout) in these spots later.
We define an init() function that we’ll fire off later. In it, we use gapi.hangout.onApiReady to attach this code to the event of the API being loaded and ready for us to use. If eventObj.isApiReady is true (and it should be, because this should only be fired if the API is ready, but I went off some sample code that included this), we can do some stuff.
“Some stuff” is setting my_id to gapi.hangout.getLocalParticipant().person.id and my_hangout_id to gapi.hangout.getLocalParticipant().id. Then we fire off the update_video_options() function and set a listener on gapi.hangout.onParticipantsChanged to run that function again each time someone enters or leaves the Hangout.
Here’s the bread and butter, the update_video_options() function. We start by using gapi.hangout.getParticipants() to get the list of Hangout participants, then we loop through it to find users with the same ID as the machine this is running on.
If the user is the same, we use gapi.hangout.av.setParticipantAudible() to mute them by passing in the user ID and false. If there are not only two users in the Hangout (which means someone other than the two local machines is logged in) and the user we’re looping through is not the machine this is running on, we also hide the user and reset their avatar. Hiding the user, much like muting them, is done by calling gapi.hangout.av.setParticipantVisible() and passing in the user ID and false. To change the avatar we call gapi.hangout.av.setAvatar() and pass in the user ID and a URL to the new avatar.
If there are only two participants in the Hangout (meaning the two local stations are the only thing logged in), we may as well show the other participant. We do the opposite of what we did to hide them, calling gapi.hangout.av.setParticipantVisible() and passing in the user ID and true.
We wrap up the update_video_options() function by making sure we don’t somehow get stuck on the video feed for the other local station if someone else is available to see. We use gapi.hangout.layout.getDefaultVideoFeed() to get some data about the displayed video feed. If the displayed participant (getDisplayedParticipant()) is the same user as the machine this is running on and there are three participants to choose from (the two local stations plus someone remote), we want to make a switch. We loop through the participants to find one that isn’t the same user, then we use setDisplayedParticipant() to set the video feed to that user. We re-check the number of participants before looping again because of some API weirdness I can’t explain.
Lastly, if we’re not displaying the other local station or there are multiple remote users to choose from, we clear the displayed participant with clearDisplayedParticipant() and let the Hangout decide who should be shown. Because the other local machine won’t be shown at this point and is muted, it allows whichever remote user is talking to appear.
The last thing we do is use gadgets.util.registerOnLoadHandler() to set the init() function to run when the utility is fully loaded.
One thing I think I’d do differently if this were still in use is update it to handle a variable number of local stations. There’s no reason not to account for three or four or whatever, aside from the fact that there were only two when I wrote it.
My latest experiment was a look into using the Trello API’s batch call, which I don’t see a lot of documentation about so I figured it was worth writing up.
Batch functionality lets you fire off a series of GET requests (and they have to be GET requests) as one request. Depending on your code and what requests you’re making (and what data you’re getting back), this should speed things up a bit. My test script went from running in 37 seconds to 20, for example.
In an extremely simple (and pretty much useless) case, you could replace GET calls to /1/members/me and /1/boards/511e8c0101d3982d05000d5b with a single batch call, /1/batch?urls=/members/me,/boards/511e8c0101d3982d05000d5b.
As shown, /1/batch takes the urls parameter, a comma-separated list of the calls you want to make, minus their version number prefix.
Of course, this means you get only a single response back, and it looks a little different from a normal response. The response is an array of objects – but not the normal response objects you might expect. Instead, each is an object with a single property, whose name is set to the HTTP response code of the corresponding request.
So if your first request was to /1/boards/511e8c0101d3982d05000d5b, a normal response would start as follows:
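Simplified heavily, and with a hypothetical board name, something like:

```json
{
    "id": "511e8c0101d3982d05000d5b",
    "name": "Example Board",
    ...
}
```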
The batch version of that response would look like this:
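Again simplified, with the same hypothetical board name, the batched version wraps that board object in a property named for the status code:

```json
[
    {
        "200": {
            "id": "511e8c0101d3982d05000d5b",
            "name": "Example Board",
            ...
        }
    }
]
```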
Obviously that’s simplified, I just don’t think it’s necessary to show the whole response.
One nice little gotcha with that response and working in PHP is handling a numeric property name, which is done by putting curly braces around the number, as seen in the code to follow.
Let’s say you want to get the names and IDs of all the boards you’re assigned to and the names and IDs of all of the lists on each of those boards. Without batching, you could do the following:
With batching, that becomes this:
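A sketch of the batched half, with the second board ID invented and a canned response standing in for the live API so that the numeric-property handling is visible:

```php
<?php
// Build the urls parameter for a single /batch call out of a set of board IDs.
function batch_urls($board_ids) {
    $urls = array();
    foreach ($board_ids as $id) {
        $urls[] = '/boards/' . $id . '/lists';
    }
    return implode(',', $urls);
}

// One request replaces a request per board:
//   https://api.trello.com/1/batch?urls=<result below>&key=<key>&token=<token>
$urls = batch_urls(array('511e8c0101d3982d05000d5b', '4eea4ffc91e31d1746000046'));

// A canned batch response stands in for the live API here. Each entry is
// wrapped in an object keyed by the HTTP status code, so PHP needs curly
// braces around the numeric property name.
$batch = json_decode('[{"200": [{"name": "To Do", "id": "abc123"}]}]');
foreach ($batch as $response) {
    foreach ($response->{'200'} as $list) {
        echo '- ' . $list->name . ' (' . $list->id . ')' . PHP_EOL;
    }
}
```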
This assumes that the API will respond with a 200, of course.
As I said, I didn’t see a ton of documentation about batch calls in the Trello API. This is a stupid simple example but I thought it was worth putting out there.
Update, 8/6/2019: It’s been nearly five years since I wrote this post but a question came up yesterday in the Trello Community Slack and I wanted to add to this to specifically call out something I glossed over.
As noted above, the /batch call expects a set of API endpoint URLs to be provided as values of its urls parameter. When those URLs are things like /members/me and /boards/511e8c0101d3982d05000d5b, as in my example, simply comma-separating them works just fine.
But what if your API endpoint URL has parameters of its own, such as /cards/560bf4dd7139286471dc009c?fields=badges,closed,desc?
Adding that as is to your urls parameter won’t work because both urls and fields are comma-separated and it’s not smart enough to deal with that.
In that case, we need to switch to providing the values of fields individually and also URL encode the resulting value for use in the urls parameter.
The URL /cards/560bf4dd7139286471dc009c?fields=badges,closed,desc would become /cards/560bf4dd7139286471dc009c?fields=badges&fields=closed&fields=desc, which URL encodes to %2Fcards%2F560bf4dd7139286471dc009c%3Ffields%3Dbadges%26fields%3Dclosed%26fields%3Ddesc.
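That encoding is exactly what PHP’s rawurlencode() produces:

```php
<?php
// Expand the comma-separated fields into individual parameters, then
// URL encode the whole thing for use in the urls parameter.
$url = '/cards/560bf4dd7139286471dc009c?fields=badges&fields=closed&fields=desc';
$encoded = rawurlencode($url);
echo $encoded . PHP_EOL;
// %2Fcards%2F560bf4dd7139286471dc009c%3Ffields%3Dbadges%26fields%3Dclosed%26fields%3Ddesc
```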
It’s worth noting that not every URL being provided to batch needs to be URL encoded, just the ones that cause an issue like this.
If we added this request to the batch call from above, it would look as follows: /1/batch?urls=/members/me,/boards/511e8c0101d3982d05000d5b,%2Fcards%2F560bf4dd7139286471dc009c%3Ffields%3Dbadges%26fields%3Dclosed%26fields%3Ddesc.
There appear to be cases where even this won’t return exactly what you might expect. For example, a batched request to /boards/511e8c0101d3982d05000d5b/actions?fields=date&fields=type&limit=1 will return the id and the memberCreator in addition to the requested date and type. I don’t know why this is.
Update, 8/13/2019: After I worked out the above solution, the Trello API docs were updated with an official solution: Manually URL encode the commas in each call and then URL encode the entire urls parameter.
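A sketch of that documented approach in JavaScript (because the comma escaping happens before the whole value is encoded, intra-call commas end up double-encoded as %252C):

```javascript
// Escape commas inside each call, then encode the whole urls value.
function buildBatchQuery(calls) {
  const joined = calls
    .map((c) => c.replace(/,/g, "%2C")) // commas within a call
    .join(",");                         // commas between calls
  return "urls=" + encodeURIComponent(joined);
}

const query = buildBatchQuery([
  "/members/me",
  "/cards/560bf4dd7139286471dc009c?fields=badges,closed,desc",
]);
```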
I was going through my portfolio recently and realized that I have an entry for my Press Your Luck game but I’ve only described how it works, never taken a deep dive into the code.
The current version (if you can call something no longer in use “current”) runs entirely on the client side. There is one HTML file (with inline jQuery), one CSS file, an XML file with configuration values, and a handful of images and sounds.
Some parts of these files have been modified for display purposes. None of the changes impact functionality.
We’ll start with the config file…
We’re defining a set of images, the tiles that make up the game board. Each has a thumbnail (the image displayed on the standard game board) and a full-size image (the one displayed in the center when that tile is selected by the player) and we define their URLs here. We also define whether this is a prize image or a whammy, which determines what sound plays when that tile is selected.
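Since the file itself isn't reproduced here, a hypothetical reconstruction based on that description (the element names match the "image", "thumb", "large", and "type" elements discussed below; the paths are made up):

```xml
<!-- Hypothetical reconstruction; paths and ordering are illustrative. -->
<config>
	<image>
		<thumb>images/prize1_thumb.png</thumb>
		<large>images/prize1_large.png</large>
		<type>prize</type>
	</image>
	<image>
		<thumb>images/whammy_thumb.png</thumb>
		<large>images/whammy_large.png</large>
		<type>whammy</type>
	</image>
</config>
```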
Fairly simple. Now we move on to the CSS…
More pretty simple stuff. The page has a background. There’s a div that contains all the game elements. Those are positioned as needed. The tiles have a background image for their active and inactive states. The sound controls are hidden.
Now we get to the fun, the HTML and jQuery. Here's the full page; we'll break down the important parts afterwards…
Get the basic stuff out of the way… We import our CSS. We import jQuery UI. We lay out the game board and we set up some audio elements for the game sounds (which I pulled from some site that had all sorts of game show sounds archived, I can’t remember where it was).
The first thing we do is initialize some stuff. Define our board images, build our possible game boards, throw a board onto the screen. Now let’s see how we do that.
We’re loading that config file, then looping through each “image” element to find the “thumb”, “large”, and “type” definitions we discussed earlier. Then we’re dropping those into an array.
When I wrote this I was shocked that there wasn’t an easier way to do this using XML. If it were similarly-structured JSON, it’d just parse automatically. Instead I have to do it manually. Considering what the X in AJAX stands for, I expected more out-of-the-box support for XML. Maybe I’m just missing something.
With our available images defined, we cache a set of fifty possible game boards. We do this by shuffling the array of images (using a function I just grabbed from somewhere else) and adding them in order to a new set until there are 18 in that set. If we run out before we get to 18, we shuffle again and keep going. This means we can have as many or as few (as long as there’s at least one) images configured.
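A sketch of that caching step in plain JavaScript (Fisher-Yates standing in for the shuffle function the post borrowed; the function names are mine):

```javascript
// Stand-in for the borrowed shuffle: a standard Fisher-Yates that
// returns a shuffled copy without touching the original array.
function shuffle(arr) {
  const a = arr.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Build one 18-tile board, reshuffling whenever the image pool
// runs dry -- so any number of configured images (at least one) works.
function buildBoard(images) {
  const board = [];
  let pool = shuffle(images);
  while (board.length < 18) {
    if (pool.length === 0) pool = shuffle(images);
    board.push(pool.pop());
  }
  return board;
}

// Cache fifty possible boards up front, as described above.
function cacheBoards(images, count = 50) {
  const boards = [];
  for (let i = 0; i < count; i++) boards.push(buildBoard(images));
  return boards;
}
```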
Finally we load the game board. We make sure no tiles are active, we set the middle image back to our placeholder, we get a randomly-selected one of our cached tile sets and display it on the board. Then we define some key events that allow the game to be controlled from the keyboard or from a presentation mouse, so that any event will trigger the start of the game. We bind the same action on touchend so that the person who commissioned this can play on her phone.
Our function for getting a random set is simple enough. Get a random number from 0 to the size of the set (should always be 50). If we don’t want to allow the same set to be picked twice in a row, compare that number to the current one and do it again until we get something different. Return the set of images with that number as the key.
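The selection logic, sketched (setCount would be 50 here; currentIndex is the set used last time):

```javascript
// Pick a random cached-board index, optionally refusing to repeat
// the index that was used last time.
function getRandomSetIndex(setCount, currentIndex, allowRepeat) {
  let i = Math.floor(Math.random() * setCount);
  if (!allowRepeat && setCount > 1) {
    while (i === currentIndex) {
      i = Math.floor(Math.random() * setCount);
    }
  }
  return i;
}
```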
To print out the board, we loop through each image on the board that isn’t the one in the middle. We use the index of the image and pull from the array we set in get_random_set() to reset said image’s attributes.
Ahh, yes, now we start the actual gameplay. We wipe out all of the events we set earlier and set new ones on the same triggers, this time for stopping the game. We start playing our in-game music. Then we set an interval to reload the game board every 850 milliseconds (allowing for the same board to be played twice in a row this time) and for the active tile to shift every half-second. I got those numbers from watching way too much Press Your Luck.
How do we switch the active tile? Well we know there are 18 tiles so we randomly select a number 0 to 17 until that number is not the same as the one we’ve already got. Then we remove the active class from whatever tile is active and add it to the one that corresponds to our randomly-selected number.
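That step as a sketch; the class juggling is jQuery in the original, so the index logic is the part shown runnable here (the selector in the comment is a guess):

```javascript
// Choose a new active tile, guaranteed different from the current one.
function nextActiveTile(current) {
  let next = Math.floor(Math.random() * 18);
  while (next === current) {
    next = Math.floor(Math.random() * 18);
  }
  // The original then moves the class, roughly:
  //   $(".tile.active").removeClass("active");
  //   $("#tile" + next).addClass("active");
  return next;
}
```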
Our last step is to stop the game and it’s made up of a bunch of little things.
First we clear our intervals so the game won’t continue, then we wipe out our event bindings and set up new ones for the same triggers. These new ones will reset the game board and get us in a position to start a new game.
We get the winning tile and stop the in-game music. Based on what type of image that winning tile is, we play either the “buzz-in” sound or the “whammy” sound.
This is how we make the lights around the winning tile flash and it’s ugly. We add and remove the “active” class from that tile in 100 millisecond intervals. Partway through that, we change the center image on the game board to match that of the winning tile. Again, those times were selected from watching way too much Press Your Luck.
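A testable sketch of that loop's bookkeeping, with the jQuery class-toggling and the timer abstracted behind callbacks (the tick counts approximate the 100 millisecond intervals described; all of these names are mine):

```javascript
// Drives the flash: toggles the winning tile's "active" state each
// tick, swaps the center image partway through, and reports when
// the caller should stop the interval.
function makeFlasher(toggleActive, setCenterImage, totalTicks, centerTick) {
  let tick = 0;
  return function onTick() {
    toggleActive(tick % 2 === 0); // on for even ticks, off for odd
    if (tick === centerTick) setCenterImage();
    tick += 1;
    return tick < totalTicks; // false -> caller clears the interval
  };
}
```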
And that’s really all there is to it. There may be a better way by now (I hope there is for that flashing bit) but this is what I knew at the time. It was a lot of fun to write and it was a lot of fun to see people play.
Last week I published a bit of code that uses the Trello API to keep parent and child cards synced across a set of boards. It was a little piece of research that has absolutely taken off around the office so I’ve been expanding on it and demoing it and talking about it and generally losing my mind.
The thing I expanded on most is a flaw that appeared in my original script whereby a user could create a child card outside of the normal workflow and it would never be linked to the parent card. Obviously “outside of the normal workflow” means it’s already an edge case but that doesn’t mean it’s as uncommon as we’d like, so I came up with a way to handle it. It does rely on the child card being tagged with the same card tag as the parent but it’s better than nothing.
As with my previous post, this uses my Trello API wrapper class and pulls in the $GLOBALS['config'] array of configuration values from another file. Also as with my previous post I think it's commented pretty well but we're going through the code piece-by-piece anyway.
We loop through all of the cards on our Work in Progress (“WIP”) board and use a regular expression to see if they have a card tag as a prefix (appearing in the pattern of “[TEST4] Test Project 4”). If the card does, we save an array of data about the parent to an array of parent cards for reference by card tag later.
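The tag-matching idea, sketched (the exact pattern in the original isn't shown, so this regex is a guess that handles the "[TEST4] Test Project 4" example):

```javascript
// Capture the card tag and the remaining title from a card name.
const TAG_PATTERN = /^\[([A-Z0-9]+)\]\s*(.*)$/;

function parseCardTag(cardName) {
  const m = cardName.match(TAG_PATTERN);
  return m ? { tag: m[1], title: m[2] } : null;
}
```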
Then we loop through our list of child board names and get every card on that board using a GET request to /1/board/xxxxxx/cards (where xxxxxx is the board ID). If the card’s description doesn’t match our convention for linking back to a parent, we know we’ve found a rogue card.
We check to see if the card has a tag in its name, using the same regular expression as we did earlier. If it does, we can use it to move forward. If we know the parent that tag belongs to, we can do even more.
The first thing we do is fire off a PUT request to /1/cards/yyyyyy (where yyyyyy is the ID of the rogue card) with desc set to the current description with our parent link prepended to it. This gives our child card the necessary link to the parent.
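Sketched as request-building, in JavaScript rather than the original PHP (the "Parent:" link format is a placeholder; whatever convention is actually used has to match the regex that spots rogue cards in the first place):

```javascript
// Build the PUT that prepends the parent link to a rogue card's
// description. cardId, parentUrl, KEY, and TOKEN are placeholders.
const BASE = "https://api.trello.com/1";

function buildAdoptionRequest(cardId, currentDesc, parentUrl) {
  return {
    method: "PUT",
    url: `${BASE}/cards/${cardId}?key=KEY&token=TOKEN`,
    // Hypothetical link convention, prepended to the existing text.
    body: { desc: `Parent: ${parentUrl}\n\n${currentDesc}` },
  };
}
```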
On the off chance that the parent card doesn’t have a label, we use the fact that we already know what board the child card is on to set one. That involves a PUT request to /1/cards/zzzzzz/labels (where zzzzzz is the ID of the parent card) with value set to the color name of the label that corresponds to the board.
Then we get the ID of the parent card's "Slices" checklist, as that's where the parent card links to each of its children. We make a GET request to /1/cards/zzzzzz/checklists and loop through each one until we find the one with the right name, then save that ID off for later.
What if we didn’t get a checklist ID? Then we make one. We fire off a POST request to /1/cards/zzzzzz/checklists with name set to “Slices” and that gives us back a bunch of data about a newly-created checklist. We save off the new checklist ID so we can move forward.
And our last step of the loop is to link the parent card to the rogue child. We fire off a POST request to /1/cards/zzzzzz/checklist/cccccc/checkItem (where cccccc is the “Slices” checklist ID) with name set to the URL of the rogue child card. Trello’s interface will convert that to the name of the child card when the parent card is viewed.
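The find-or-create dance from those last three steps, reduced to a sketch (the checklist objects mirror the shape of the checklists response; the IDs are placeholders):

```javascript
// Return the "Slices" checklist ID if it exists; otherwise the
// caller POSTs to create the checklist and uses the ID from that
// response before adding the child-card URL as a check item.
function findChecklistId(checklists, name) {
  for (const cl of checklists) {
    if (cl.name === name) {
      return cl.id;
    }
  }
  return null;
}
```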
As I mentioned in my previous post, this is my first pass and I’m sure there’s a better way to do this. This fixes a gap in that earlier implementation, though, so obviously iterating on it is working.