Saturday, December 2, 2023

How to disable AWS Lambda recursion detection through code instead of support

AWS Lambda recently introduced recursive loop detection, which will shut down certain kinds of recursion (for more info, see this article). The problem with this is that certain kinds of chained batch processing in Lambda will trigger it incorrectly. For instance, if you have a Lambda that consumes a paginated API and invokes itself with the next page's information through an SQS queue event, it will now only process the first 16 pages of data before it gets shut down.

To make this worse, even though you can turn off this behavior by contacting AWS support, you can only do so if you are paying for an AWS support subscription (at the time of writing, $29/month minimum). Also, this disables it for all Lambdas in your account. I've come up with a very simple snippet that solves this in code for Node.js, on a per-Lambda basis, without any AWS support interaction.

    export function disableLoopDetection() {
        // This little piece of magic disables the loop detection in AWS Lambda
        if (process.env._X_AMZN_TRACE_ID) {
            process.env._X_AMZN_TRACE_ID = process.env._X_AMZN_TRACE_ID.replace(/:\d+$/, ":1");
        }
    }

The loop detection uses the AWS X-Ray trace header. The very last number in that header is the invocation count. This snippet modifies the environment variable that contains the X-Ray trace header and resets the invocation count back to 1 on every invocation. You need to make sure you call this method on every call to the handler, since each call sets this environment variable to a new value.

Tuesday, November 28, 2023

How to automate cross posting between Mastodon and Twitter using IFTTT

I recently moved from Twitter (X) to Mastodon, and as part of this I wanted to automate cross-posting between the two, even though I mostly interact with Mastodon. I have an account with IFTTT, but unfortunately it does not support Mastodon natively. Still, with some scripting magic you can achieve this relatively easily.

The first thing you need to deal with is that depending on whether your post has an image or not, you want to use two different actions in IFTTT. The way I solved this was to set up my chain of triggers and actions by adding both of the Twitter actions in the same recipe.

The RSS feed you want to use as the trigger is the URL of your Mastodon account with the string .rss appended at the end. Finally, you need a little bit of filter script magic, listed below.

const message = Feed.newFeedItem.EntryContent.replace(/(<([^>]+)>)/ig, '');

if (Feed.newFeedItem.EntryImageUrl &&
    Feed.newFeedItem.EntryImageUrl !== "") {
  // Has an image: post with the image action and skip the text-only one
  Twitter.postNewTweetWithImage.setTweet(message);
  Twitter.postNewTweet.skip("Had photo: \"" + Feed.newFeedItem.EntryImageUrl + "\"");
} else {
  // No image: post with the text-only action and skip the image one
  Twitter.postNewTweet.setTweet(message);
  Twitter.postNewTweetWithImage.skip("No photo");
}

There are two things the script needs to handle. First, Mastodon adds a whole bunch of HTML tags to its RSS feed that Twitter will not like, so we filter those out. Secondly, we determine whether there is a picture associated with the post, which is done by checking whether the image URL is empty or pointing to the IFTTT "no picture" placeholder. Then we post the tweaked entry with the matching action and skip the one that doesn't match.
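The tag-stripping regex from the filter script can be exercised on its own; here it is isolated into a small helper for illustration (the sample HTML is arbitrary):

```javascript
// The same tag-stripping regex used in the IFTTT filter script
const stripTags = (html) => html.replace(/(<([^>]+)>)/ig, '');

// Mastodon RSS entries wrap the post content in HTML; Twitter wants plain text
const example = stripTags('<p>Hello <a href="https://example.com">world</a></p>');
// example === "Hello world"
```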

Tuesday, October 24, 2023

How I worked around Alexa deprecating IFTTT support

Alexa just announced that they are deprecating IFTTT support, so I figured I would explain how I worked around that problem with some ingenuity and existing gadgets in my house.

First, I changed my IFTTT applets to use a Web Request trigger instead of Alexa and made note of the URL required to invoke them.

Using my Hubitat hub, which still does have Alexa support, I then created virtual devices for all the IFTTT integrations I want (I'm pretty sure SmartThings also has virtual devices). The easiest is to use a virtual switch. Then you create a rule that, when the switch turns on, makes an HTTP request to the IFTTT web request trigger from the previous step. The rule also needs to turn the switch back off immediately so that it can be turned on again without having to turn it off manually first.
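The URL the rule requests follows the standard IFTTT Webhooks trigger format. A small sketch of building it (the event name and key are placeholders for your own applet's values):

```javascript
// Build the IFTTT Webhooks trigger URL a Hubitat rule should request
// (eventName and key are placeholders for your own applet's values)
function buildTriggerUrl(eventName, key) {
    // Standard IFTTT Webhooks trigger URL format
    return `https://maker.ifttt.com/trigger/${encodeURIComponent(eventName)}/with/key/${key}`;
}

// The rule performs a simple HTTP GET to this URL when the virtual switch turns on
```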

Finally, you allow Alexa to control your virtual switches, which lets you turn them on in Hubitat using Alexa, which in turn causes the IFTTT applet to run. Once you have the switch in Alexa, you can also tie it to whatever routines you might want in Alexa as well.

After I did this myself, I discovered the Virtual Smart Home service, which lets you do the exact same thing as above but without requiring your own smart home hub.

Sunday, May 28, 2023

How I got my web UI tests to run under Amazon Linux 2023 in CodeBuild

This distribution doesn't have either Chromium or Chrome easily available, so I solved it with the following snippet in my buildspec.yml file.

      - wget -q
      - sudo yum install -y ./google-chrome-stable_current_x86_64.rpm
      - sudo ln -s /usr/bin/google-chrome-stable /usr/bin/chromium

Basically, it downloads Chrome from Google and then symlinks chromium to it. After this, I can run both Puppeteer and Cypress tests.
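For context, here is a minimal sketch of how the snippet fits into buildspec.yml, assuming the standard CodeBuild phase layout; the wget URL shown is Google's published direct-download link for the stable Chrome RPM:

```yaml
version: 0.2
phases:
  install:
    commands:
      # Google's published direct-download URL for the stable Chrome RPM
      - wget -q https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
      - sudo yum install -y ./google-chrome-stable_current_x86_64.rpm
      # Some test runners look for a "chromium" binary, so symlink it
      - sudo ln -s /usr/bin/google-chrome-stable /usr/bin/chromium
```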

Thursday, March 30, 2023

The importance of a service dashboard

When running an online service, it is vitally important that you have a dashboard that can give you a concise overview of how the service is doing. The general purpose of this dashboard is twofold. First, you want to be able to easily see whether your users are generally having a good experience using your service. Second, you want to be able to spot anything weird or unexpected that is happening.

For Underscore Backup I have created the following dashboard (There is also a second dashboard that tracks usage instead of the correctness of the service).

The other day I discovered something weird happening in one of our least-used regions, where the storage usage suddenly doubled overnight with no new sources or accounts registering. When you see something weird, it is a good principle to look closer at what could have caused it.

Sure enough, when I looked closer I discovered a bug where, in some cases, the old log was not properly deleted from storage after a log optimization completed. Fortunately, once found, the bug was trivial to fix before the new stable release was finished.

Sunday, February 26, 2023

Automating testing of signup

The main trick to testing the sign-up process of most sites is handling the email verification, since that requires your testing framework to parse an incoming email and follow a link to complete it. I solved this by using the SES incoming email functionality, which through SNS passes the email to a Lambda. This Lambda then stores the parsed verification link in a DynamoDB table indexed by the recipient.

I then use the same Lambda, set up with a simple function URL, to do an HTTP redirect when you do a GET to the verification link. That way you can automate your verification easily by just loading the Lambda URL. Below is the entire Lambda function code that receives the email, stores the link in a DynamoDB table called EmailLinks, and also implements an HTTP responder that you call with {functionUrl}?email={email address the confirmation was sent to}.

const AWS = require('aws-sdk');
const findLinkRE = /(whatever you need to find the confirmation link from your email in the first match group)/;
const dbClient = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
    if (event.Records) {
        // Invoked through SNS with an incoming SES email
        for (let i = 0; i < event.Records.length; i++) {
            const message = JSON.parse(event.Records[i].Sns.Message);
            const content = message.content;
            const match = findLinkRE.exec(content);
            if (match) {
                for (let j = 0; j < message.mail.destination.length; j++) {
                    console.log(`Set ${message.mail.destination[j]}: ${match[1]}`);
                    await new Promise(function(resolve, reject) {
                        dbClient.put({
                            TableName: "EmailLinks",
                            Item: {
                                "Email": message.mail.destination[j],
                                "Link": match[1]
                            }
                        }, function(err, data) {
                            if (err) reject(err); else resolve(data);
                        });
                    });
                }
            }
        }
    } else if (event.rawQueryString && event.rawQueryString.startsWith("email=")) {
        // Invoked through the function URL: redirect to the stored link
        const item = await new Promise(function(resolve, reject) {
            dbClient.get({
                TableName: "EmailLinks",
                Key: { "Email": decodeURIComponent(event.rawQueryString.substring(6)) }
            }, function(err, data) {
                if (err) reject(err); else resolve(data);
            });
        });
        if (item && item.Item) {
            const response = {
                statusCode: 301,
                headers: { "location": item.Item.Link, "Content-Type": "text/html" },
                body: JSON.stringify(item.Item)
            };
            return response;
        } else {
            return {
                statusCode: 404,
                headers: { "Content-Type": "text/html" },
                body: "Not found"
            };
        }
    } else {
        return {
            statusCode: 400,
            headers: { "Content-Type": "text/html" },
            body: "Bad request"
        };
    }
};
Friday, February 24, 2023

Optimized database layout for Underscore Backup

Tonight I spent a few hours optimizing the storage of objects in the back end of Underscore Backup. I expect more than 99% of the DynamoDB storage for this application to be a single table that contains every object that any source has stored with the service. Each source is identified by a unique UUID and each object contains a field depicting which source it belongs to. The optimization I realized is that I was storing the UUID as a string, including the '-' characters of the UUID which is 36 bytes long instead of the 16 bytes it would take to store the UUID in binary form. Based on my estimates, this will likely reduce the overall storage requirements in DynamoDB for the service by over 20%. Not to mention the smaller size will also mean it consumes fewer read units when querying and scanning the table.
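The conversion can be sketched in Node (the example UUID is arbitrary):

```javascript
// Illustrates the storage saving: a UUID as a dashed string is 36 bytes,
// but the same value parsed into raw bytes is only 16
const uuidString = "123e4567-e89b-12d3-a456-426614174000";

// Strip the dashes, then parse the 32 remaining hex characters into bytes
const uuidBinary = Buffer.from(uuidString.replace(/-/g, ""), "hex");

console.log(uuidString.length); // 36
console.log(uuidBinary.length); // 16
```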

This was a breaking change that required migration of old data so I am glad I did this now instead of later when the data volume would have made this a much harder problem to solve.