How we saved $15,000/month and increased conversion 300% with Lifecycle automation.

Summary

On nights and weekends I worked with a startup to automate their Lifecycle email program. Within 3 months of working together, the company saved $15k/month and increased new trial subscription starts from 50 per week to 200 per week.

Weekly sales revenue growth before and after Lifecycle. The different colored bars represent the coupon discounts that I tested to optimize pricing.

How did they achieve this?

  1. Prioritized revenue automation as our Q3 goal
  2. Eliminated any initiatives not directly related to the Q3 goal (such as brand marketing)
  3. Identified events critical during the customer lifecycle that needed to be sent to the marketing platform.
    1. new user sign up
    2. paywall abandon cart
    3. trial and paid subscription churn (user cancellations or time-based expirations)
  4. Partnered with the engineering teams to get the lifecycle events flowing into the marketing platform of choice. In this case, Intercom.
  5. Monitored and tested the data coming from engineering.
  6. Launched Lifecycle programs

Company Profile

This startup is a subscription-based text-to-speech productivity tool available on the web, iOS, Android, and as a Chrome Extension. To keep things simple, I’ll refer to this startup as TTS Inc.

TTS Inc. is a small-but-mature company that the CEO started in his dorm room a number of years ago. The company had mild growth that accelerated when they got into aggressive paid advertising. The team is currently ~100 people, including contractors like myself.

TTS Inc. offers a freemium product. The free tier is the core text-to-speech product with a limited feature set. For example, on the free tier the reading speed is capped at 150 words per minute, while the Premium tier allows listening at up to 900 words per minute. 900 words per minute sounds crazy, but it’s possible to comprehend.

Most users upgrade to the Premium tier subscription by joining a 3-day trial. The trial requires a credit card and converts to a 12-month subscription, renewable annually.

TTS Inc. reached out to me because they wanted to get their martech stack aligned to allow them to scale and automate their Lifecycle marketing campaigns through email, push and in-app messaging.

Execution

Step 1: Prioritize revenue automation

Lifecycle marketing started as a component within the TTS Inc. Brand marketing team. Brand marketing had two goals.

(1) Increase revenue with Lifecycle
(2) Establish a consistent brand identity.

Having these two goals caused TTS Inc. to make some costly missteps. Part of the Brand team’s plans for the future was to send a highly stylized, beautiful weekly email newsletter to the active user base. Keeping that future brand-focused state in mind distracted me personally, because I was trying to push their marketing infrastructure to support a future that had not been realized yet. For instance, I was advocating for a switch to an email service provider that would make sending high volumes of email easier and cheaper than their current system.

By the end of my first two months we had missed our revenue goals. We were getting single-digit trial starts per day. Not good. Our lifecycle program was an abject failure.

Since we missed our goal, leadership chose to eliminate the Branding team except for Lifecycle. As a regular listener of Alex Hormozi’s podcast, I now understand his view in episode 382: branding is a game for big companies. At this stage, TTS Inc. could not afford to spend a lot of money on activities that did not have an immediate, measurable impact on sales, and branding is one of those. Brand-related activities such as surveys, newsletters, and highly stylized one-off email sales announcements were a distraction and were difficult to attribute to sales.

Now my priority was clear: improve Lifecycle revenue.

Step 2: Eliminate all non-core activities

To become hyper-focused on our one goal we had to stop doing non-sales activity. Dropping branding, in turn, allowed us to eliminate other software and activities.

We eliminated unnecessary software

TTS Inc. had two marketing-related systems: Drip and Intercom.

TTS Inc. was spending $12k-$15k per month on Drip, an email marketing platform designed for small-to-medium sized businesses. TTS was getting very little out of Drip because they had not dedicated serious engineering resources to keeping its data synchronized with their own user database.

Having two systems made things worse because engineering didn’t know which system needed which event. If both systems needed the same events or data, there would be some minimal amount of maintenance work involved in keeping things accurate on both platforms. It was an all-around mess.

With branding removed from the priorities, leadership made the decision to eliminate Drip from the tech stack. Canceling the Drip account saved the company $10,000-$15,000 per month.

We eliminated resource-heavy activities

Understanding that Intercom is not well-suited to sending a lot of email (too expensive, bad interface), we pared email production down to just one person doing all the work with minimal concern for visual appeal.

Brand-focused email:

4 people, 1 week

  1. Mockup email (Lifecycle)
  2. Write email copy in the brand’s voice (copywriter)
  3. Design a pretty email (email designer)
  4. Build the HTML email (HTML developer)
  5. Set up the automation (Lifecycle)
  6. Launch the email (Lifecycle)

Ugly email:

1 person, 2 hours

  1. Cobble together some copy text (Lifecycle)
  2. Build HTML email – basic styling (Lifecycle)
  3. Set up the automation (Lifecycle)
  4. Launch the email (Lifecycle)

In the weeks after eliminating the “pretty” emails I have found that sales have not been noticeably harmed. In fact, email-to-trial conversion rates may have actually gone up.

Step 3: Identify critical events and data to use in the marketing platform

Automation was difficult regardless of marketing platform. Neither of TTS Inc.’s marketing platforms was integrated with the business’s data pipelines. Intercom, in particular, was missing a lot of critical account and subscription-related information. To get Intercom operational quickly, we stripped our requirements down to a set of eight subscription-oriented Lifecycle events that could serve as the bare-minimum backbone for direct sales messaging.

  • New user created an account
  • User saw paywall but did not buy (abandoned cart)
  • Trial started
  • Trial expired
  • Trial canceled
  • Subscription started
  • Subscription expired
  • Subscription canceled

In my experience subscription events are difficult to get because subscription data comes from many sources: PayPal, Stripe, Apple, Google, etc. The team responsible for managing subscriptions did a fantastic job of delivering subscription events to Intercom on an hourly basis.

Step 4: Partner with engineering

TTS Inc.’s leadership did an outstanding job of aligning engineering resources to prioritize Lifecycle. This was easy because the company’s engineers are naturally business-minded. Leadership communicated clearly and directly to the engineering teams that Lifecycle’s goal for the quarter was to start making money and to prioritize Lifecycle requests over other initiatives. Being business-minded folks, the engineers were also innately eager to help us get these projects off the ground.

We met with the engineers a few times to discuss our minimum needs and the engineers added in some things that would make reporting easier for other departments (such as ad-buying group, or finance). Since we agreed on the set of events ahead of time I was able to start building the user messaging journeys in Intercom in parallel with the engineering data team’s work.

There were many teams involved in this work:

  • Platform engineering did the work to sync subscription data to Intercom
  • Web engineering had to start sending new user signup events to Intercom
  • Product management found flows that had broken and needed to be migrated from Drip to Intercom

After a rapid 3-4 week development cycle engineering completed their work and we were ready to turn things on.

Step 5: Monitor and test the data

Once engineering launched their respective work we had to double check their data.

For the first few days after launch we observed that many events were under-firing. Engineering found that in many cases the user’s identity was not being correctly established before sending the data to Intercom. This is not necessarily name-and-phone-number identity; I mean the database’s unique ID, which is usually a jumble of letters and numbers.

This is an important bug. Marketing systems don’t magically tie data to a user; engineering has to put operations in the correct order. If the order is wrong, events without a proper user ID go into a black hole.

Correct order

  1. user sign up
  2. wait until user id assigned by backend system
  3. establish connection to marketing platform
  4. send events

Incorrect order

  1. user sign up
  2. establish connection to marketing platform
  3. send events
  4. user id assigned by backend system

This is an engineering detail, but it’s important to know. In one bug we found, the web app was sending a “new user signed up” event before the user’s ID was established in TTS Inc.’s database. So we were getting a lot of events but no user data. The web team made a small code change to wait until the new user’s ID was established on the backend before sending the event data to Intercom. After that we were able to send messaging to new web users.

This is the case for many marketing platforms: if you are losing a lot of event data or seeing a lot of “anonymous” or “unidentified” users in your SaaS platform, this might be the issue.

In summary: order-of-operations is important. Your engineers need to know that the user’s ID needs to be established before logging any events to your marketing or analytics platforms. (Amplitude, Segment, Iterable, Braze, Intercom, Drip, etc.!)
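The order-of-operations fix can be made concrete with a minimal Python sketch. The client object and the polling helper here are hypothetical stand-ins, not a real Intercom SDK:

```python
# Sketch of the order-of-operations fix: wait for the backend-assigned user ID
# before identifying the user and sending events. All names are illustrative.
import time

def wait_for_user_id(get_user_id, timeout_s=10.0, poll_s=0.5):
    """Poll the backend until the new user's database ID exists."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        user_id = get_user_id()
        if user_id is not None:
            return user_id
        time.sleep(poll_s)
    raise TimeoutError("backend never assigned a user id")

def track_signup(client, get_user_id):
    # 1. user signs up (already happened)
    # 2. WAIT until the backend assigns the user ID
    user_id = wait_for_user_id(get_user_id)
    # 3. establish the connection to the marketing platform with that identity
    client.identify(user_id)
    # 4. only then send events, so they attach to a known user
    client.track(user_id, "new user signed up")
    return user_id
```

The incorrect order is the same code with steps 3-4 running before step 2; those events arrive with no identity attached.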

Step 6: Launch the Lifecycle programs

This is the easy part! Once the data is live and flowing we are free to experiment using the marketing platform.

There is no secret sauce here. Other subscription apps and e-commerce websites follow a similar pattern of direct-to-consumer sales. We mapped the events directly to their own set of emails.

  • New user: send a welcome series of emails
  • Paywall: send abandon cart email
  • Trial expired/canceled: send a win-back email
  • Paid subscription expired/canceled: send win-back email
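The mapping above boils down to a simple lookup table. This Python sketch uses made-up event and program names for illustration; it is not how Intercom represents journeys internally:

```python
# Illustrative mapping of lifecycle events to message programs.
# Event and program names are hypothetical.
EVENT_TO_PROGRAM = {
    "user.created": "welcome_series",
    "paywall.abandoned": "abandon_cart",
    "trial.expired": "trial_win_back",
    "trial.canceled": "trial_win_back",
    "subscription.expired": "paid_win_back",
    "subscription.canceled": "paid_win_back",
}

def route_event(event_name):
    """Return the program to trigger, or None if the event sends no email."""
    return EVENT_TO_PROGRAM.get(event_name)
```

Note that "trial started" and "subscription started" intentionally map to nothing here: those users are converting on their own and need no sales email.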

We launched the lifecycle automations, and by the first full day of emails, upsells were 2x the previous day’s. Over a 3-week period, the average weekly web sales volume was three to four times that of a typical week.

Further refinements that we made:

  • built a customized and editable web landing page to test and measure conversions
  • offered discounted plans
  • sent in-app messages in each workflow

Conclusion

Although it was a bumpy ride at first, TTS Inc.’s marketing automation stack is well-situated for growth. Within the first week, their lifecycle sales automation performed far beyond my expectations. Now that they have a consistent sales baseline, they can begin to experiment with branding and other marketing tricks to boost conversion performance.

The MVP for a multi-million dollar product built on Google Sheets and Iterable

In April 2020, I helped launch Calm Sleep School, which was an early prototype of the coaching product that later became Calm Sleep Coaching (Sept 2020-2021) and continued to grow into what we now know as Calm Health.

In late 2019 Calm hired a Chief Medical Officer to spearhead the company’s initiative to move into the mental health realm. The Chief Medical Officer-slash-head-of-sales came to us from a mental health platform that sold as a business-to-business product for large companies to offer therapy as an employee benefit. His Big Idea was for us to create a similar product that wasn’t quite therapy-level help but would address broad problems that affect mental health (i.e. sleep, diet, physical health, stress, anxiety, etc.). Leadership settled on sleep coaching in part because our CEO admitted that he was having trouble sleeping at night so he could be the Founder AND a customer.

The idea was simple: professional sleep therapists would conduct a series of 6 one-on-one video coaching calls with individual students to identify issues that were causing poor sleep and to provide a personalized sleep plan for the clients to follow. An example sleep plan might be to stop drinking caffeine after 3pm, turn off the TV at 9pm, and go to bed consistently each night.

The design concept was beautiful (see the Launch Email). Our product designers contracted a professional illustrator to create one-of-a-kind art with everything in a deep sleepy purple. Design was top-notch.

Calm Sleep School had some serious challenges: engineering would commit near-zero resources to building the system. Since this was only going to be a prototype of unknown value, leadership would not commit significant development time, because the engineering team was already stretched thinner than a spider’s web.

Engineering worked on these components:

  • One backend engineer to create the SKU and hook up the Stripe billing mechanism. I believe she accomplished this within one sprint.
  • Frontend web team built a landing page
  • Frontend web team built the checkout page for customers (students) to pay via Stripe (via the billing backend)

Lifecycle (my team) was responsible for all of the onboarding, and this is where I am quite proud of our achievement, because I was able to build the onboarding AND a coaching management system using Google Sheets, Google Apps Script, and Iterable. (My equivalent of duct tape and paper clips.)

Launch Email

Sleep School Onboarding

Before onboarding could begin people would arrive on the landing pages and checkout built by engineering.

Upon successful purchase, we sent a Welcome email explaining the product and the details of Sleep School.

The Welcome email pointed to a user intake form (a Google Form), which asked a series of questions to understand why the new student needed help sleeping.

Using Google Apps Script, I scripted the intake form’s submission to take several actions:

(1) It emitted an event to Iterable so that we would have a record of the user’s progress.

(2) It sent an email and logged a row to a Google Sheet so our internal teams would know a new student had signed up.

The coaching team would manually match the new student to one of six Sleep Coaches, and the match was logged on the sheet. With the match made, we would send an email introducing the coach, with name, picture, and bio.

(3) The script had to copy a Google Drive folder filled with a handful of assets that coaches used for taking notes and sharing reports with the students.

Since each student needed a unique Sleep Journal, I logged the folder, the Sleep Journal Google Form, and other details into the Google Sheet as well as Iterable. By saving the Sleep Journal Google Form ID, we could send each student an email with a link to their personal Sleep Journal.

After the students’ 7 days of journaling they would have a one-on-one Zoom call with their Sleep Coach, and the coaches would then carry out the remainder of the Sleep School experience.

Sleep coach match

We launched Sleep School slowly, sending batches of 10,000 emails to existing Calm subscribers so that we could pace the flow of student enrollment and not overwhelm the coaching staff. Our goal was to sign up roughly 100 students to complete the 6-week course, which we accomplished. Students were satisfied with the program! I don’t think we had to issue any refunds related to the quality of the course. Some folks canceled because they didn’t realize the program cost around $600 USD. In further sales development, our Head of Sales inked a deal with some corporate clients.

As I said before, this was an early prototype. Leadership was encouraged by the product, so they decided to build the Calm Sleep Coaching product, which was staffed by 1-2 full-time engineers. As of 2022 that product has further evolved into Calm Health, which has a full staff of 12+ full-time engineers dedicated to bringing mental wellness as an employee benefit to thousands of companies and tens of thousands of employees!

How I manage my Iterable Catalogs using Google Sheets

Calm’s dynamic emails leverage the Iterable Catalog feature extensively to keep campaigns evergreen and easy to maintain. I use it for a variety of use cases:

  • Email localization strings. Calm serves content in English, French, Spanish, Japanese, Korean, Portuguese, and German.
  • Content calendars. Calm has a variety of content that releases on a regular cadence including the Daily Calm and the Daily Trip.
  • Follow-up lessons. For special programs we send messages that remind listeners of key takeaways, lessons, or review material.
  • Top N lists. The most recent Catalog I added allows others to curate lists of top/new/hot/trending content.

I have been most tickled with my most recent effort to store Top-N lists (Top 10 newest music, Top 10 most popular, Top 5 fill-in-the-blank) in our Catalog because it streamlines our process for creating ad-hoc special announcement emails. Creating this did not require a lot of code and makes me wish I had thought of doing this earlier.

Google Sheet as Content Management System (CMS)

Using a Google Sheet shared with my colleagues allows us to have a mix of curated and automated feeds controlled by different owners:

Automated feeds:

  • the newest Sleep Stories for each language
  • the newest Music releases for each language
  • the newest Meditations for each language
  • 3 content types * 7 languages = 21 individual feeds

Curated feeds

  • featured English new releases
  • English-language blog content
  • special blocks for other departments to provide content (b2b, marketing, product, engineering)

The Sheet Tab name “konewSleep” becomes the name of the data feed in the Catalog. The sheet’s column headers become the key names of each object.

The Catalog

See how the Google Sheet columns map to JSON key names.
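A minimal sketch of that sheet-to-catalog mapping, assuming the first row of the tab holds the headers. The `programs` wrapper key matches the Handlebars lookup used in my templates; the real sync code differs:

```python
# Turn a sheet tab into catalog items: the tab name becomes the feed key and
# the header row becomes the JSON key names. Illustrative only.
def sheet_to_catalog_items(tab_name, rows):
    """rows[0] is the header row; each later row becomes one JSON object."""
    headers = rows[0]
    items = [dict(zip(headers, row)) for row in rows[1:]]
    return {tab_name: {"programs": items}}
```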

Retrieve and Render Iterable Catalog Data

Using a known key, such as ennewSleep, I can retrieve new Sleep Stories in English; the konewSleep key would return the Korean equivalents.

{{!-- look up new Sleep Stories in English (en) --}}
{{#catalog "Datafeeds" "ennewSleep" as |datafeed|}}
{{#lookup datafeed "programs" as |prog|}}
<h1>{{{ prog.title }}}</h1>
<p>{{{ prog.description }}}</p>
<hr />
{{/lookup}}
{{/catalog}}

Basic HTML result would be

<h1>Sleep story title 1</h1>
<p>Description of story 1</p>
<hr />
<h1>Sleep story title 2</h1>
<p>Description of story 2</p>
<hr />

Addendum: Roadblocks along the way

My initial intent for this project was to lean on Iterable Datafeeds and Google Apps Script to provide my dynamic data. I created a Google Apps Script web app that would grab some API data, package it into a more email-friendly rendering payload, and return JSON. I ran into 2 major problems while trying to set this up, which eventually led to my Catalog solution.

First, I found out that I cannot change the User-Agent HTTP header when using the UrlFetchApp library. UrlFetchApp sends something like Google/app-script-client when doing its HTTP fetch. I needed control of the User-Agent header to pull the different language content from the API. My solution was to set up a free Netlify account to proxy the request: Google -> Netlify -> Calm. I created the proxy as a serverless function. This only took a couple of hours thanks to Netlify’s very developer-friendly deployment tools.

My second problem was that Google Apps Script only serves a few requests per second (expected). I was planning to lean on the caching option in the Iterable datafeed fetcher. My theory was that Iterable would hit my endpoint once and cache the response for 1 hour, as described in their documentation. Despite several attempts at configuring the templates and datafeeds, I could not get Iterable to cache my feeds properly. At campaign send time, Iterable would hit my Google Apps Script hundreds of times per second and fail out due to rate limiting.

Figuring that the caching roadblock would be unsolvable on my own, I decided to use the Iterable Catalog solution instead. Catalogs are useful because I don’t have to worry about maintaining uptime on my own services, but they are not as nimble as an API solution because I have to constantly sync my Google Sheet to the Catalog(s).

Do Gmail promotion cards improve email performance?

Long story short: Gmail Promotion Cards did not have a statistically significant positive impact on our Black Friday email campaign.

My first year on the retailer’s side of Black Friday was 2019. It was a harrowing experience due to my lack of experience and being new to the job. We hectically threw together our Black Friday campaigns around 3 weeks before Thanksgiving: copywriting, design, QA, pre-testing, and identifying segments for the biggest revenue week of the year, all done as quickly as possible. This is not an experience I recommend. In 2019 I knew about Promotion Cards, but with such a short timeline I didn’t have time to test them. In 2020, we started planning in early October. Unfortunately, I forgot about Promotion Cards until Black Friday week. Remembering that I had wanted to test the cards in 2019, I scrambled to incorporate them into a handful of our early campaigns sent on the Tuesday of Thanksgiving week, to see if the cards could measurably increase sales conversion rates for our biggest email sends running from Black Friday to Cyber Monday.

I tested the Promotion Cards on 4 different emails with a combined volume of around 4 million people. I looked primarily at 2 metrics: open rate and send-to-purchase conversion. I know open rate is a vanity metric (call me vain!). The Promotion Cards’ best performance garnered +5% better open rates (stat sig), but send-to-purchase conversions were -1.2% worse than control (not stat sig). The Promotion Cards lagged in both metrics in the other campaigns as well and could not achieve statistical significance for purchase conversion in any test. Since I couldn’t get stat sig on the sales, I took the easy road: no Promotion Cards for the remainder of the Black Friday 2020 campaigns.


What are Promotion Cards?

Several years ago Gmail came up with the “Promotions” tab to automatically separate marketing emails out of people’s inboxes and into a separate folder. Marketers freaked out because the Promotions tab is, essentially, a Spam-lite inbox. All the marketing riffraff ends up lost in the email equivalent of the creepy cluttered basement or the spider-filled attic. The Promotions tab on mobile devices is especially challenging because the tab lies within a menu drawer.

I find out about new Gmail features from the Product Managers who work at Google. Both at Flipboard and Calm, the Gmail account management team reached out to test new features: I was able to use Promotion Cards and AMP for Email as part of the Gmail beta programs. Is it an exclusive club? I don’t know. But it does feel kind of cool to test new features in the world’s most popular email service before most people.

In this case I think the kindly folks at Gmail came up with Promotion Cards annotation to give emails extra visibility on mobile clients to make up for banishing marketing emails into the Promotions tab. You can see a Promotion Card in action below.

As you can see above, Gmail added a bold “Promotions” section at the top of the Primary Inbox. That space contains a couple of teasers for the Promotions Tab (buybuy Baby, Carter’s, etc.).

Observation: open rates increased in 3 tests, decreased in 1. The losing case may have been due to poor subject line choice.

Observation: fewer clicks.

Observation: sales inconclusive.

Result: slight edge to Promotion Cards, but they will not change your business. Take them or leave them.

How to set up your email domains and SPF records

What is SPF (Sender Policy Framework)?

SPF is a DNS TXT entry for your domains and subdomains that lists the services and individual IP addresses that you use to send email.

v=spf1 include:_spf.google.com include:_something.amazonaws.com ~all

A hypothetical SPF record. This says that I use Google and AWS to send emails.

SPF helps prevent email address spoofing because ISPs can look up the TXT record and compare it with the email headers to verify that I want Google or AWS to send emails for me.

Here is a larger SPF record. Note that in the block below I have 4 include: lines. That means evaluating my domain’s record triggers 4 additional DNS lookups.

# infoentropy.com
v=spf1
include:_spf.google.com         # 1
include:u123456.wl.sendgrid.net # 2
include:other.service.com       # 3
include:other.service2.com      # 4
-all

Now, a problem: according to the SPF specification, evaluating a record may trigger at most 10 DNS lookups (include:, a, mx, and similar mechanisms). If your record exceeds the limit, the receiving server could ignore the entire record or evaluate only the first 10 lookups while ignoring the rest.

If I used my root domain for all email services, I would reach the 10-record limit quickly. Look at the table below — a medium-sized company could end up using many different SaaS platforms, each of which vies for space on the root domain.

Software service | Department | What do they send?
Gmail | Everyone | All emails
Mailchimp | Marketing | Email blasts
Blog platform (WordPress, Squarespace, etc.) | Marketing | Blog announcements
CRM (Salesforce, Hubspot, etc.) | Sales | Correspondence with customers
e-commerce (Squarespace, Magento, etc.) | Sales | Purchase receipts
Customer Support (Zendesk, etc.) | Support | Help tickets
Surveys (SurveyMonkey, etc.) | Miscellaneous | Product surveys, customer satisfaction surveys, etc.
Transactional emails (Mailgun, Sendgrid, Sparkpost, AWS SES) | Engineering | Things related to your apps
Engineering internal services (Jenkins, Datadog, GitHub) | Engineering | Stuff engineers look at
Legal (DocuSign, etc.) | Legal |
Finance (ADP, Carta, etc.) | Finance |
Recruiting emails | HR |
More stuff! (JIRA, Asana, Slack, Box, Dropbox) | Everyone | Various outbound emails from vendors that might use your domain name

That is more than a dozen services that would need SPF entries. However, not all of these services need to be on the root infoentropy.com domain. I can stay under the limit by using multiple subdomains, since each subdomain gets its own 10-lookup budget.
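For a rough sanity check, here is a small Python sketch that counts the DNS-querying mechanisms in an SPF string. Real SPF evaluation also counts lookups made inside included records, so treat this as a first pass only:

```python
# Count DNS-querying mechanisms (include:, a, mx, ptr, exists:, redirect=)
# in a single SPF record string. Nested includes are NOT followed.
def count_spf_lookups(record):
    count = 0
    for term in record.split():
        term = term.lstrip("+-~?")  # strip SPF qualifiers
        if term.startswith(("include:", "exists:", "redirect=")):
            count += 1
        elif term in ("a", "mx", "ptr") or term.startswith(("a:", "mx:", "ptr:")):
            count += 1
    return count
```

Anything over 10 is a signal to move senders onto a subdomain.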

Use multiple email subdomains to stay under the SPF 10-lookup limit.

My proposed solution is to, loosely, divide my email domains according to business function: sales, customer support, marketing/bulk email, product, and internal.

Doing so will (1) give recipients some clue about who is sending the email and (2) allow you to identify vendor deliverability issues. For shared SaaS services you might include that sender in more than one subdomain’s SPF record. SurveyMonkey, for example, might be used for more than one business function.

  • Sales: biz.infoentropy.com
    • reports@biz.infoentropy.com
    • Sales
      • hubspot
      • salesforce
      • squarespace
      • SurveyMonkey*
  • Help: help.infoentropy.com
    • support@help.infoentropy.com
    • Support
      • zendesk
    • Transactional (password reset, billing?)
      • AWS SES*
  • Marketing: email.infoentropy.com
    • deals@email.infoentropy.com
    • High volume bulk email
      • Iterable
      • Mailchimp
      • Pardot
      • Marketo
    • Surveys
      • SurveyMonkey*
  • Product: app.infoentropy.com
    • notifications@app.infoentropy.com
    • Transactional (you did something in the app!)
      • Sendgrid/Mailgun/AWS SES*
    • Notifications (X sent you a message)
      • Sendgrid/Mailgun/AWS SES*
  • Internal/corporate: corp.infoentropy.com
    • jeff@corp.infoentropy.com
    • Gmail/Outlook 365?

*Single service used on multiple subdomains.

In this scheme I need 5 separate SPF TXT records, 1 for each subdomain. Each record lists the specific services that I use to send email from that subdomain.

# help.infoentropy.com
v=spf1
include:_spf.zendesk.com        # 1 Zendesk
include:_spf.amazonaws.com      # 2 AWS
-all

In this case I use only 2 services to send something@help.infoentropy.com emails. If I tried to send as jeff@help.infoentropy.com via Mailchimp, SPF verification would fail, DMARC would fail, and the email would end up in the spam folder.

How to make your logo visible in the inbox with a BIMI record

I heard of BIMI for the first time a few weeks ago. Most of the resources I can find are trying to sell DMARC management services. While these services are useful and valuable, I was trying to figure out some rules of thumb to self-manage my own DNS with regard to DMARC/DKIM/SPF/BIMI compliance.

First thing to note: in order to get your logo in the inbox, you need to comply with the BIMI standard. That means you need to:

  • Create a BIMI record
  • Configure domain names
  • Configure DKIM & SPF
  • Create your DMARC rules
  • Monitor your DMARC
  • Lock down your DMARC

Set up your BIMI record

I put this step early in the post because it’s the simplest step. You won’t get the snazzy BIMI logo until you complete all of the steps.

Create an SVG version of your logo and upload it somewhere on your CDN or wherever you serve your images. You can probably put it in the same directory as the ubiquitous favicon.ico file.

Add a TXT record to your DNS that points to the newly uploaded SVG file.

default._bimi.acme.com

v=BIMI1; l=https://cdn.acme.com/_email/logos/acme-icon.svg
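As an illustration, here is a toy parser for a record like the one above. It only checks the two tags shown (v and l) and is nowhere near a full BIMI validator:

```python
# Toy BIMI TXT record parser: splits "tag=value" pairs on semicolons and
# checks that the version is BIMI1 and the logo URL points to an SVG.
def parse_bimi(record):
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    if tags.get("v") != "BIMI1":
        raise ValueError("not a BIMI1 record")
    if not tags.get("l", "").endswith(".svg"):
        raise ValueError("logo URL must point to an SVG")
    return tags
```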

Configure your domain names

If you’re putting together your email system from scratch (i.e., you work for a 1-10 person startup and you happen to be the devops guy), see my other post, How to set up your email domains and SPF records, for guidance. This is a hairy process that IT people need to be involved with because it requires mucking with DNS and planning out how you will send emails in the future. Do it right early!

Configure DKIM & SPF for your domains.

In order to pass DMARC both your SPF and DKIM need to validate.

The SPF record means that you added the 3rd party services to your DNS, as described above.

The DKIM is a signature key that you share with your email sending service(s). These services will add the DKIM signature to the email headers of every message they send for you so that recipient ISPs can verify that emails that have your domain on it are coming from you.

In order to qualify for BIMI, you need to make sure the SPF and DKIM are “aligned”. For example, if you use Sendgrid, you should have Sendgrid in your SPF record and Sendgrid in the DKIM signature.
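Here is a toy illustration of relaxed alignment. Real implementations use the public-suffix list to find the organizational domain; this sketch just compares the last two labels, which breaks for TLDs like .co.uk:

```python
# Relaxed DMARC alignment sketch: the domain that passed SPF and the
# DKIM-signing domain must share the From address's organizational domain.
def org_domain(domain):
    """Naive organizational domain: the last two dot-separated labels."""
    return ".".join(domain.lower().rsplit(".", 2)[-2:])

def dmarc_aligned(from_domain, spf_domain, dkim_domain):
    """True if either the SPF or the DKIM domain aligns with the From domain."""
    base = org_domain(from_domain)
    return org_domain(spf_domain) == base or org_domain(dkim_domain) == base
```

This assumes both SPF and DKIM already passed; alignment is the extra check DMARC adds on top.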

Create your DMARC rules

In order for your logo to show up on emails, your DMARC must be set to “quarantine” or “reject”.

Now, monitor the results of your DMARC (SPF+DKIM) configuration

Using a service such as Valimail, you can get reports of which domains and IP addresses are attempting to send mail using your domain names. If you recognize IP addresses or domains that are “failing” but should not be, then you need to check the configuration of those services. Keeping with the Sendgrid example, if you are using them as your provider and you see sendgrid.net in your “Mostly failing IPs” box, then something is wrong with your SPF/DKIM DNS configuration.

Definitely a phish/spam/spoofer

Once your DMARC policy is set to quarantine (or reject) and your SPF and DKIM are “aligned”, your BIMI logo might start showing up!

I say “might” because not all email providers support the BIMI standard. Anecdotally, as of 2020, I believe Hotmail and Yahoo Mail show logos in emails. Gmail has its own brand of JSON-LD syntax to jam logos into the inbox.

Iterable recipe: inject data from a custom event into userProfile for email template

An email I’ve been working on needs to refer back to information from a previous event.

... TEMPLATE ...

Hi {{firstName}}, 

Thank you for watching {{title of movie I watched}}!

... RENDERED ...

Hi Jeff, 

Thank you for watching STRANGER THINGS!

The problem here is that I wanted to reference the title of a video that was watched in the past within a blast email. This is not possible without saving the information somewhere in Iterable. In my case I decided to store the data on the user’s profile.

To save data from an event onto the user profile I use a workflow and the Change Contact Field node.

The event data might look like this:

{
    "eventName": "Video : Watched",
    "eventType": "customEvent",
    "email": "<EMAIL_ADDRESS>",
    "dataFields": {
        "video_title": "Stranger Things"
    }
}
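For reference, an event like this can be fired at Iterable’s track endpoint (POST /api/events/track). A minimal sketch in Python using only the standard library; the API key and email address are placeholders, and the build_event_payload helper is just my own illustration:

```python
import json
import urllib.request

ITERABLE_API_KEY = "YOUR_API_KEY"  # placeholder: your real Iterable API key

def build_event_payload(email, event_name, data_fields):
    """Build the JSON body for Iterable's POST /api/events/track endpoint."""
    return {
        "email": email,
        "eventName": event_name,
        "dataFields": data_fields,
    }

def track_event(payload):
    """Send the event to Iterable. Only succeeds with a real API key."""
    req = urllib.request.Request(
        "https://api.iterable.com/api/events/track",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Api-Key": ITERABLE_API_KEY, "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

payload = build_event_payload(
    "user@example.com", "Video : Watched", {"video_title": "Stranger Things"}
)
```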
  1. Create workflow
  2. Add node “change contact field”
  3. In the CHANGE CONTACT FIELD node define the data I want to save in JSON format.
{"video_title":"{{video_title}}"}
## (You should use better field names than this) ##

Now when the event comes through, Iterable replaces the template variable with the event’s value and saves it to the user profile, making it accessible in any email template.

Here I can put the saved video_title into the subject line or email template body using Handlebars.
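For example, with the field saved on the profile as video_title, a template could look like this (subject line and copy are illustrative):

```text
Subject: Still thinking about {{video_title}}?

Hi {{firstName}},

Thank you for watching {{video_title}}!
```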

Limit number of php-cgi processes spawned by nginx

Here’s how to do it. The spawn-fcgi wrapper script on my box was here:

sudo nano /usr/bin/php-fastcgi

The original file looked something like this:

#!/bin/bash
/usr/bin/spawn-fcgi -a 127.0.0.1 -p 10005 -u www-data -g www-data -f /usr/bin/php-cgi

I added the -C argument to enforce a limit:

#!/bin/bash
/usr/bin/spawn-fcgi -C 2 -a 127.0.0.1 -p 10005 -u www-data -g www-data -f /usr/bin/php-cgi

http://forum.slicehost.com/index.php?p=/discussion/3671/limit-number-of-php-cgi-processes-nginx/p1

I just got a notice from my VPS host that my box was using too much swap and therefore impacting other users on the machine. As such, my provider did a hard reset on my instance which summarily stopped my web app (I don’t start the app on startup. I should.)

I had just upgraded my server a few versions up the Ubuntu chain because of the Heartbleed SSL bug, so the problem was presumably related to the upgrade. I’m not super savvy with figuring out which components changed, so I figured if I could just cut back on memory usage, things wouldn’t go to swap. My server is running an app and a couple of old Drupal blogs that don’t get much traffic but that I like to keep around for nostalgia. So, I figured I could sacrifice substantial performance on the blogs.

Things I changed to hopefully save memory:
/etc/mysql/my.cnf
* reduce max_connections from unspecified (default) to 50

/etc/php/cgi/php.ini
* reduce memory_limit from 128M to 64M

/usr/bin/php5-cgi
* add “-C” flag to constrain the number of processes to 3 (default was 5)

The php5-cgi change by far had the largest effect, because each php5-cgi process was using 25MB per thread (~125MB total, or 50% of my 256MB VPS).
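If you want to see how much a set of processes is actually costing you before and after a change like this, here’s a quick sketch that sums resident memory by process name via Linux’s /proc (the function name is my own; swap in whatever process you’re hunting):

```python
import os
import re

def total_rss_kb(name):
    """Sum resident memory (VmRSS, in kB) of all processes whose command name contains `name`."""
    total = 0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                comm = f.read().strip()
            if name not in comm:
                continue
            with open(f"/proc/{pid}/status") as f:
                m = re.search(r"VmRSS:\s+(\d+) kB", f.read())
            if m:  # kernel threads have no VmRSS line; skip them
                total += int(m.group(1))
        except OSError:
            continue  # process exited while we were reading it
    return total

# e.g. total_rss_kb("php5-cgi") on my box would have reported ~125000 kB before the fix
```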

HTML Email: Image height is 1px high for Gmail in IE10

Lately I have had to redo some of the HTML emails. We had an outside contractor do most of the work and he did a fantastic job. However, I was noticing that under a very specific condition, the email images were not rendering properly in desktop Gmail on Internet Explorer 10. I have no idea how many people are using Gmail+IE10, but since this is our first real contact with the user, I thought it would be important to ensure the best user experience possible. Broken images are not a good experience.

Here is the problem:
Email image poorly rendered

Old HTML

<img class="imageScale" 
style="display: block; width: 550px; height:auto;" 
width="550" height="auto" src="{{ img_url }}" 
border="0" />

New HTML

<img class="imageScale" 
style="display: block; width: 550px;" src="{{ img_url }}" 
border="0" />

Final outcome:
email-good

Why does it fix it?

I came across a couple of quirks at play in this problem.

First, I learned that Gmail automatically converts the CSS height attribute to min-height with reckless abandon.

Second, the original HTML img tags have height:auto. With Gmail’s reckless height conversion it becomes min-height:auto, which essentially means 0px or 1px.

To solve it, I removed the height attributes on the img tags. It turns out that all of the browsers will automatically render the image at the full size of the parent container. In this case we have a series of nested tables that set the maximum width of the parent to about 550px and the minimum width to 100% of the screen width.
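A stripped-down sketch of that nested-table pattern (the widths are illustrative, and the real emails have more wrapper tables than this):

```html
<table width="100%" cellpadding="0" cellspacing="0" border="0">
  <tr>
    <td align="center">
      <table width="550" cellpadding="0" cellspacing="0" border="0">
        <tr>
          <td>
            <!-- no height attribute or height style: let the image size itself -->
            <img class="imageScale"
                 style="display: block; width: 550px;"
                 src="{{ img_url }}" border="0" />
          </td>
        </tr>
      </table>
    </td>
  </tr>
</table>
```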

Working on a real iOS app.

As a personal endeavor I’m trying to make an iOS app. Making a native app for iOS has been on my to-do list for at least 2-3 years but I could never figure out the Objective C language and I haven’t done a lot of object-oriented programming. With newfound determination I have been trying to learn this stuff when I have downtime at work. I’ve had a lot of downtime lately so my learning progress has been good!

Things I’ve been skimming through:

Stanford undergraduate course on iOS through iTunes U (Not Recommended)

I started off trying to learn iOS by following Stanford University courseware via iTunes U. I thought “hey, Stanford is a great school. This should inspire me.” Big mistake — I despised computer science lectures when I was in college 10 years ago and evidently I still hate them. The first few lectures were long-winded and impractical for the purposes of building a simple app.

Big Nerd Ranch Guide (2012)

Big Nerd Ranch guide proved useful for understanding some fundamentals of Objective C. It’s densely written and some of the chapters are difficult to parse in my head. This is slightly easier to deal with than a boring iTunes U lecture but it has heavy reliance on extending code in preceding chapters so you cannot jump around from chapter-to-chapter to pick what you need. I would prefer a just-in-time piecemeal approach.

Beginning iOS 6 Development (Apress, Jan 2013)

This book is written more simply than Big Nerd Ranch. I ran through the first few chapters and finally understood the workflow of using XCode to create the UI by linking buttons to actions and code. Understanding XCode was a major breakthrough for me. The first handful of chapters are useful because the examples are step-by-step and do not depend heavily on using code from previous examples.

Apple’s documentation (horrible)

Large swaths of Apple’s iOS documentation and example code is outdated. However, I slogged through some of the examples to help understand some of the components that are not discussed in the other 2 books.

Various web tutorials

As you would expect in this day and age, there are tons of video tutorials on the web and YouTube, plus content authors who blog about learning iOS on their own. These are often helpful. The problem with these tutorials is that many of them are for older versions of XCode and iOS. Working with iOS has apparently changed quite substantially over the years because the code samples can often look completely different from modern stuff. I’m sure the recent release of iOS 7 makes this truer than ever.

Stack Overflow

Whenever I run into issues, Stack Overflow has the answer 97% of the time. Long live SO!

Make a media player – http://www.codigator.com/tutorials/how-to-make-a-custom-ios-music-player/