Black Friday 2021 notes

I identified a profitable segment combining app usage and sales data from the previous year

I found a particularly profitable segment of users by cross-referencing app usage and purchase behavior. During the Black Friday 2020 campaign we sold far more high-ticket items to people who had used the app for 12 consecutive months (the size of the circles represents the number of plans purchased). That insight let us send our most active users our best deal (high price, but high discount).

Bigger circle = more sales. X-axis is consecutive months of activity.

Spreadsheet command center

To keep track of the 60 campaigns, I built a spreadsheet to track the different offers and email template modules to include in each campaign. Different segments received personalized offers (pictured) based on analysis from our Data Science team that found the optimal offer according to revenue per email sent.

Each cohort, the day of the campaign, and the sales modules to be offered to the recipients

Landing page switch led to a 52% conversion improvement

Toward the end of the campaign I created an A-B test for the landing page that email clicks led to.

Notice that the original page is a very simple credit card page. Because we were advertising a time-limited Black Friday sale, our logic was to collect payment as quickly as possible.

You will notice that the Treatment version has a lot more information: a big hero image, some science facts, a product comparison checklist in the middle, and testimonials at the bottom. Our product & engineering team had refined and tested this landing page for peak performance earlier in the year, but we had never tested these two variants against each other within the context of a holiday sale.

The A-B test showed a 50%+ sales conversion improvement over the 3-day observation period for the campaign. I wish I had tested these two landing pages earlier in the campaign!

I believe part of the reason the page performed so much better is that we only sent this page to users who had never subscribed before. These people barely know what Calm is, so the extra information on the page properly educated these returning inactive users and clearly did a better job of explaining the value of the Black Friday deal we were offering them.

Next year I’ll be sure to compare the two variants across broader audiences (churned, current paid, etc.).

Original
Treatment

Treatment landing page (green line) outperformed for duration of the sale
52% improvement & statistically significant
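For readers wanting to reproduce the significance check: a two-proportion z-test is one standard way to compare conversion rates. The counts below are hypothetical, chosen only to mimic a ~52% lift; they are not our actual campaign numbers.

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates
    (pooled standard error, as in a standard A/B significance test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: 2.0% control vs ~3.0% treatment conversion (~52% lift)
z = two_proportion_z(200, 10_000, 304, 10_000)
print(z > 1.96)  # True means significant at the 95% confidence level
```

With a lift that large on 10,000 sends per arm, the z-statistic clears the 1.96 threshold easily.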

Patterns in engineering job hunting

Newer job seekers may not be familiar with the common job hunting/candidate recruitment process. I’ll try to explain the process that I’ve noticed many companies in the Bay Area use.

Phase 1: Finding potential jobs

Roughly in order of effectiveness:

Internal Employee Referral – warm

If you happen to know somebody who works at a nice company and you’d like to join them there, reach out! Take them out to coffee, get lunch, or just IM with them; it really depends on how well you know the individual. It’s probably useful to be upfront early that you’re open to new opportunities. If your friend or co-worker is on a team that is actively hiring, you’ve got a warm employee referral on your hands! Interviewing on a warm employee referral is so much better: I’ve gone to an onsite interview, completely blown half of the questions, and still gotten the job because people at the company were able to vouch for my real work abilities, which aren’t always apparent in a marathon of 1-hour interviews (plus jangled nerves).

Internal Employee Referral – cold

You contact your friend at Company X but her group isn’t hiring right now. It’s still OK to go into the pool of general applicants. This isn’t as effective because it could be a while before you hear back from anyone. Effectively you become the same as any other non-referral applicant but at least you’re in the system and maybe down the road your friend will get a referral bonus because you got hired through her efforts to get you into the system.

Internal recruiter cold-email reach out

A recruiter sends you an email or a LinkedIn InMail. Respond to it and you’ll at least get to an informational phone call. That is useful because at least you can reach out to the recruiter later to confirm that you were seen and acknowledged as a human being. You do exist and your resume had something worthy of attention!

LinkedIn network status update

People say they are hiring all the time these days. DM your LinkedIn connection and ask them to refer you into their recruiting system. They can leave notes about you, and it will put your resume toward the top of the list. It doesn’t mean you will get called back, but it increases the likelihood that your resume will be seen and reviewed for at least 4 or 5 seconds, which is better than the methods below.

Direct application on Careers section of website

Surprisingly, I find myself sending people to the careers page of places I’ve worked many times. At startups, the CVs do get reviewed by people internally. Larger companies are less effective at this.

3rd party recruiter cold-email reach out or cold call

Very often these recruiters have very poor information about the job (think GuruJobs or Robert Hat). It does not appear that they have a direct connection with the hiring manager. It’s unclear to me how effective a job hunt through these sources will be.

Big Job Board job application (LinkedIn, Indeed, Monster, HotJobs, Dice, Craigslist)

Job boards are by far the worst way to find a job; I don’t care what all the commercials say. Don’t waste your time crafting a personal statement or cover letter on these job boards; it’s basically pointless. Go directly to a company’s website and submit your resume there instead.

Phase 2: figure out which company is the most interesting

If you can get past the resume screen, companies will start with an informational 30-minute call with a sourcer/recruiter responsible for filling the position. Recruiters are friendly professionals that specialize in talking to other people. They are usually not subject matter experts in engineering, marketing, or whatever your field of work may be.

They are experts at sifting legitimate experience from bullshit. In the informational call the recruiter’s job is to describe the responsibilities of the role and gauge your interest in the position. Then they sniff out whether you correctly speak the jargon and match the profile of the type of candidate that the hiring manager wants (years of experience, projects worked, interests).

Recruiters will tell you about the company’s mission and culture. They won’t know the company’s five-year product roadmap, and they won’t typically know the specifics of the day-to-day of your role. They will be able to tell you whether the manager is looking for a specialist or a jack-of-all-trades, how big the team is, and whether they prefer Ivy League graduates. You still need to do homework about the company so that you don’t ask blatantly un-researched questions.

Phase 3: Interviewing

After you have talked with the recruiter and sufficiently proven that you are interested in the role, the company, and the department within the company you are ready for some interviews. This is my knowledge of Silicon Valley-style tech interview process.

First, you will have the phone screen interview. Recently these interviews have moved off of the phone and into Zoom with screen shares and social coding platforms. The phone screen should last 30-60 minutes, and you’ll talk to a fellow engineer who may be a member of the team that wants to hire you. Usually the interviewer has a pre-determined problem pulled from a bank of questions that the team has agreed to use, and you will be asked to solve it within that 30-60 minute period. You do need to finish within the allotted time because the interviewer probably has another meeting right after your call. Phone interviews are supposed to be “easier.” After many years of dealing with this interview pattern, I know that if I find myself barely getting through the phone interview, the on-site is definitely going to be a waste of time for everyone involved.

I think I’ve been asked these “basic” items more than a few times on a first-round phone interview:

  • What is a closure?
  • How do you reverse a list?
  • Code a basic HTML/CSS page that can do X,Y,Z
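If it helps, the first two can each be sketched in a few lines of Python. These are generic warm-up answers, not questions from any particular company:

```python
def make_counter():
    """A classic closure: inner() captures `count` from the enclosing scope,
    so the state survives between calls without a class or a global."""
    count = 0
    def inner():
        nonlocal count
        count += 1
        return count
    return inner

def reverse_list(xs):
    """Reverse a list without using reversed() or slicing."""
    out = []
    for x in xs:
        out.insert(0, x)  # prepend each element
    return out

counter = make_counter()
counter()
counter()
print(counter())                # 3
print(reverse_list([1, 2, 3]))  # [3, 2, 1]
```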

The “on-site” interview (the name should change now that Zoom has become prevalent) is where you line up a day’s worth of interviews and teams grill you with 4+ different exercises in varying subjects. For example you might get the abstract puzzle interview, followed by a SQL exercise, followed by an algorithm test, followed by a system design discussion. Having failed at literally dozens of these interviews, it’s hard to believe I was ever able to survive in Silicon Valley for 15 years as an engineer. I do know why I never made it to FAANG: I am pretty horrible at all of these types of interviews and can really only pass the hands-on “let’s build something practical together” type of interview. I don’t have any tips on how to pass these interviews. However, expect to see something like:

  • Traverse a graph to do something
  • System design something for millions of people. Web servers and load balancers and calculations of throughput will likely be involved. (I wouldn’t know because I don’t pass these tests)
  • Do something recursive, probably with sorting. It definitely needs to run better than O(n^2) or O(n!)
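As a rough illustration of the first and third bullets, here is what minimal Python answers might look like: a breadth-first graph traversal and a recursive O(n log n) sort. These are generic sketches, not questions from any specific company:

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal of an adjacency-list graph; returns visit order."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return order

def merge_sort(xs):
    """Recursive O(n log n) sort, comfortably better than O(n^2)."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs(g, "a"))               # ['a', 'b', 'c', 'd']
print(merge_sort([5, 2, 4, 1]))  # [1, 2, 4, 5]
```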

Phase 4: Results

Now this is the shitty part. Sometimes you might do pretty well and pass all the technical aspects of your interviews. Even if you pass all the tests, you may only have a 5% chance of landing the job because you are competing against 20 other candidates just like you who also passed. This is especially true at a hot startup or a FAANG company.

After your interview all the people you met with will convene in a meeting to decide your fate. The more people you interviewed with during the process, the more chances there are for one of those interviewers to say “no.” That’s not a death sentence but it does hurt you a lot. You would never know it, but that’s what goes on after the interviews are over. I have been on hiring panels of 6 people. If 5 people say “medium yes” and 1 says “strong no,” you’re toast. But if you get 5 “strong yes” and 1 “medium no,” you still have a chance: that “no” can be swayed.

This is just me, but receiving a “thank you” note from a candidate does nothing to sway me in any direction for or against you. Experts suggest you do it but, honestly, the second you walked out the door or hung up the Zoom I already knew whether I was going to vote for or against you. An after-interview email is meaningless, and chances are I won’t see it at all because I don’t read email. All of my work is in JIRA and Slack.

At the end of the day, if you get rejected from a job application, know that it’s not necessarily about you. There are countless factors during an interview that are outside of your control. Just be as authentic as you can be. Answer questions as honestly as you can. Be “right” or “correct” on factual questions as much as you can be. Then be prepared to do at least 10 full interview cycles (phases 1-4) before you receive an offer.

How I manage my Iterable Catalogs using Google Sheets

Calm’s dynamic emails leverage the Iterable Catalog feature extensively to keep campaigns evergreen and easy to maintain. I use it for a variety of use cases:

  • Email localization strings. Calm serves content in English, French, Spanish, Japanese, Korean, Portuguese, and German.
  • Content calendars. Calm has a variety of content that releases on a regular cadence including the Daily Calm and the Daily Trip.
  • Follow-up lessons. For special programs we send messages that remind listeners of key takeaways, lessons, or review material.
  • Top N lists. The most recent Catalog I added allows others to curate lists of top/new/hot/trending content.

I have been most tickled with my most recent effort to store Top-N lists (Top 10 newest music, Top 10 most popular, Top 5 fill-in-the-blank) in our Catalog because it streamlines our process for creating ad-hoc special announcement emails. Creating this did not require a lot of code and makes me wish I had thought of doing this earlier.

Google Sheet as Content Management System (CMS)

Using a Google Sheet shared with my colleagues allows us to have a mix of curated and automated feeds, each controlled by a different owner:

Automated feeds:

  • the newest Sleep Stories for each language
  • the newest Music releases for each language
  • the newest Meditations for each language
  • 3 content types * 7 languages = 21 individual feeds

Curated feeds

  • featured English new releases
  • English-language blog content
  • special blocks for other departments to provide content (b2b, marketing, product, engineering)

The Sheet tab name “konewSleep” becomes the name of the data feed in the Catalog. The sheet’s column headers are the key names of each object.

The Catalog

See how the Google Sheet columns map to JSON key names.

Retrieve and Render Iterable Catalog Data

Using a known key, such as ennewSleep, I am able to retrieve the appropriate content for English-language new Sleep Stories.

{{!-- look up new Sleep Stories in English (en) --}}
{{#catalog "Datafeeds" "ennewSleep" as |datafeed|}}
{{#lookup datafeed "programs" as |prog|}}
<h1>{{{ prog.title }}}</h1>
<p>{{{ prog.description }}}</p>
<hr />
{{/lookup}}
{{/catalog}}

Basic HTML result would be

<h1>Sleep story title 1</h1>
<p>Description of story 1</p>
<hr />
<h1>Sleep story title 2</h1>
<p>Description of story 2</p>
<hr />

Addendum: Roadblocks along the way

My initial intent for this project was to lean on Iterable Datafeeds and Google Apps Script to provide my dynamic data. I created a Google Apps Script web app that would grab some API data, package it up into a more email-friendly rendering payload, and return JSON. I ran into 2 major problems while trying to set this up, which eventually led to my Catalog solution.

First, I found out that I cannot change the User-Agent HTTP header when using the UrlFetchApp library. UrlFetchApp sends something like Google/app-script-client when doing its HTTP fetch. I needed control of the User-Agent header to pull the different language content from the API. My solution was to set up a free Netlify account to proxy the request (Google -> Netlify -> Calm) via a serverless function. This only took a couple of hours thanks to Netlify’s very developer-friendly deployment tools.

My second problem is that Google Apps Script only serves a few requests per second (expected). I was expecting to lean on the caching option in the Iterable datafeed fetcher. My theory was that Iterable would hit my endpoint once and cache the response for 1 hour as described in their documentation. Despite several attempts at configuring the templates and datafeeds, I could not get Iterable to cache my feeds properly. At campaign send time, Iterable would hit my Google Apps Script hundreds of times per second and fail out due to rate limiting.

Figuring that the caching roadblock would be unsolvable on my own, I decided to use the Iterable Catalog solution instead. Catalogs are useful because I don’t have to worry about maintaining uptime on the services, but they’re not as nimble as an API solution because I have to constantly sync my Google Sheet to the Catalog(s).
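For what it’s worth, the sync itself can be small. Below is a rough Python sketch of pushing sheet rows into a catalog via Iterable’s bulk catalog items endpoint. The endpoint path, the `{"documents": ...}` payload shape, and the row format are my assumptions from reading Iterable’s API docs, so verify against the current documentation before relying on it:

```python
import json
from urllib.request import Request, urlopen

API_BASE = "https://api.iterable.com/api"  # assumed Iterable API host

def build_bulk_payload(rows):
    """Turn sheet rows (dicts with an 'id' column) into the
    {"documents": {itemId: fields}} shape for a bulk catalog upsert."""
    return {
        "documents": {
            str(row["id"]): {k: v for k, v in row.items() if k != "id"}
            for row in rows
        }
    }

def sync_catalog(catalog_name, rows, api_key):
    """POST every row to the named catalog in one bulk request (assumed endpoint)."""
    body = json.dumps(build_bulk_payload(rows)).encode("utf-8")
    req = Request(
        f"{API_BASE}/catalogs/{catalog_name}/items",
        data=body,
        headers={"Api-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urlopen(req) as resp:
        return resp.status
```

In practice a Google Apps Script trigger (or a cron job reading the Sheet export) could call something like this whenever the sheet changes.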

Do Gmail promotion cards improve email performance?

Long story short: Gmail Promotion Cards did not have a statistically significant positive impact on our Black Friday email campaign.

My first year on the retailer’s side of Black Friday was 2019. It was a harrowing experience because I was new to my job and lacked experience. We hectically threw together our Black Friday campaigns around 3 weeks before Thanksgiving: copywriting, design, QA, pre-testing, and identifying segments for the biggest revenue week of the year, all done as quickly as possible. This is not an experience I recommend. In 2019 I knew about Promotion Cards, but our timeline was so short that I didn’t have time to test them. In 2020, we started planning in early October. Unfortunately, I forgot about Promotion Cards until Black Friday week. Remembering that I had wanted to test the cards in 2019, I scrambled to incorporate them into a handful of our early campaigns sent on the Tuesday of Thanksgiving week to see if the cards could measurably increase sales conversion rates for our biggest email sends running from Black Friday to Cyber Monday.

I tested the promo cards on 4 different emails with volume around 4 million people. I looked primarily at 2 metrics: open rates and send-to-purchase conversion. I know open rate is a vanity metric — call me vain! Promotion Card’s best performance garnered +5% better open rates (stat sig) but send-to-purchase conversions were -1.2% worse than control (not stat sig). The Promotion Cards lagged in both metrics in other campaigns as well and could not achieve statistical significance for purchase conversion in any test. Since I couldn’t get stat sig on the sales I decided to take the easy road: no Promotion Cards for the remainder of Black Friday 2020 campaigns.


What are Promotion Cards?

Several years ago Gmail came up with the “Promotions” tab to automatically separate marketing emails out of people’s inboxes and into a separate folder. Marketers freaked out because the Promotions tab, essentially, is a Spam-lite inbox. All the marketing riffraff ends up lost in the email equivalent of the axe-murderer-creepy cluttered basement or the spider-filled attic. The Promotions tab on mobile devices is especially challenging because the tab lies within a menu drawer.

I find out about new Gmail features from the Product Managers who work at Google. Both at Flipboard and Calm, the Gmail account management team has reached out to test new features: I was able to use Promotion Cards and AMP for Email as part of the Gmail beta programs. Is it an exclusive club? I don’t know. But it does feel kind of cool to test new features in the world’s most popular email service before most people.

In this case I think the kindly folks at Gmail came up with the Promotion Card annotation to give emails extra visibility on mobile clients, to make up for banishing marketing emails into the Promotions tab. You can see a Promotion Card in action below.

As you can see above, Gmail added a bold “Promotions” section at the top of the Primary Inbox. That space contains a couple of teasers for the Promotions Tab (buybuy Baby, Carter’s, etc.).

Observations: open rate increased in 3 tests and decreased in 1. The losing case may be due to a poor subject line choice.

Observation: fewer clicks.

Observation: sales inconclusive.

Result: slight edge to promo cards but it will not change your business. Take it or leave it.

How to set up your email domains and SPF records

What is SPF (Sender Policy Framework)?

SPF is a DNS TXT entry for your domains and subdomains that lists the services and individual IP addresses that you use to send emails.

v=spf1 include:_spf.google.com include:_something.amazonaws.com ~all

Hypothetical SPF record. This says that I use Google and AWS to send emails

SPF helps prevent email address spoofing because ISPs can look up the TXT record and compare with the email headers to verify that I acknowledge that I want Google or AWS to send emails for me.

Here is a larger SPF record. Note that in the block below I have 4 include: lines. That means I have 4 SPF includes for my domain.

# infoentropy.com
v=spf1
include:_spf.google.com         # 1
include:u123456.wl.sendgrid.net # 2
include:other.service.com       # 3
include:other.service2.com      # 4
-all

Now, a problem: according to the specification, SPF evaluation is limited to 10 DNS lookups, and each include: line counts as one. If you exceed the limit, the ISP could ignore the entire record or only evaluate the first 10 lookups while ignoring the rest.
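A quick sanity check is to count the lookup-triggering mechanisms in a record. This is a minimal Python sketch; note that it doesn’t recurse into the include: targets, whose own lookups also count toward the limit in a real evaluation:

```python
def count_spf_lookups(record):
    """Rough count of DNS-querying terms in a single SPF record.
    Counts include:, exists:, redirect=, and the a/mx/ptr mechanisms."""
    count = 0
    for term in record.split():
        term = term.lstrip("+-~?")  # strip qualifiers like ~all / -all
        if term.startswith(("include:", "exists:", "redirect=")):
            count += 1
        elif term in ("a", "mx", "ptr") or term.startswith(("a:", "mx:", "ptr:")):
            count += 1
    return count

record = "v=spf1 include:_spf.google.com include:u123456.wl.sendgrid.net mx ~all"
print(count_spf_lookups(record))  # 2 includes + 1 mx = 3
```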

If I used my root domain for all email services, I would reach the 10-lookup limit quickly. Look at the table below: a medium-sized company could end up using many different SaaS platforms, each of which vies for space on the root domain.

Software service | Department | What do they send?
Gmail | Everyone | All emails
Mailchimp | Marketing | Email blasts
Blog platform (WordPress, Squarespace, etc.) | Marketing | Blog announcements
CRM (Salesforce, Hubspot, etc.) | Sales | Correspondence with customers
e-commerce (Squarespace, Magento, etc.) | Sales | Purchase receipts
Customer Support (Zendesk, etc.) | Support | Help tickets
Surveys (SurveyMonkey, etc.) | Miscellaneous | Product surveys, customer satisfaction surveys, etc.
Transactional emails (Mailgun, Sendgrid, Sparkpost, AWS SES) | Engineering | Things related to your apps
Engineering internal services (Jenkins, Datadog, GitHub) | Engineering | Stuff engineers look at
Legal things (DocuSign, etc.) | Legal |
Finance (ADP, Carta, etc.) | Finance |
Recruiting emails | HR |
More stuff! (JIRA, Asana, Slack, Box, Dropbox) | Everyone |

Various outbound emails & vendors that might use your domain name

That’s 13 services that would need SPF entries. However, not all of these services necessarily need to be on the root infoentropy.com domain. I can get around the 10-lookup limit by using multiple subdomains: each subdomain gets its own limit of 10.

Use multiple email subdomains to stay under the SPF 10-lookup limit.

My proposed solution is to, loosely, divide my email domains according to business function: sales, customer support, marketing/bulk email, product, and internal.

Doing so will (1) give recipients some clue about who is sending the email and (2) allow you to identify vendor deliverability issues. For shared SaaS services you might include that sender in more than one subdomain’s SPF record. SurveyMonkey, for example, might be used for more than one business function.

  • Sales: biz.infoentropy.com
    • reports@biz.infoentropy.com
    • Sales
      • hubspot
      • salesforce
      • squarespace
      • SurveyMonkey*
  • Help: help.infoentropy.com
    • support@help.infoentropy.com
    • Support
      • zendesk
    • Transactional (password reset, billing?)
      • AWS SES*
  • Marketing: email.infoentropy.com
    • deals@email.infoentropy.com
    • High volume bulk email
      • Iterable
      • Mailchimp
      • Pardot
      • Marketo
    • Surveys
      • SurveyMonkey*
  • Product: app.infoentropy.com
    • notifications@app.infoentropy.com
    • Transactional (you did something in the app!)
      • Sendgrid/Mailgun/AWS SES*
    • Notifications (X sent you a message)
      • Sendgrid/Mailgun/AWS SES*
  • Internal/corporate: corp.infoentropy.com
    • jeff@corp.infoentropy.com
    • Gmail/Outlook 365?

*Single service used on multiple subdomains.

In this situation I need 5 separate SPF TXT records, 1 for each domain. Each domain lists the specific services that I use to send emails.

# help.infoentropy.com
v=spf1
include:_spf.zendesk.com         # 1 Zendesk
include:_spf.amazonaws.com       # 2 AWS
-all

In this case I only use 2 services to send something@help.infoentropy.com emails. If I tried to send from jeff@help.infoentropy.com via Mailchimp, SPF verification would fail, DMARC would fail, and the email would end up in the spam folder.

How to make your logo visible in the inbox with a BIMI record

I just heard of BIMI for the first time a few weeks ago. Most of the resources I can find are trying to sell DMARC management services. While these services are useful and valuable, I was trying to figure out some rules of thumb to self-manage my own DNS with regard to DMARC/DKIM/SPF/BIMI compliance.

The first thing to note is that in order to get your logo in the inbox, you need to comply with the BIMI standard. That means you need to:

  • Create a BIMI record
  • Configure domain names
  • Configure DKIM & SPF
  • Create your DMARC rules
  • Monitor your DMARC
  • Lock down your DMARC

Set up your BIMI record

I put this step early in the post because it’s the simplest step. You won’t get the snazzy BIMI logo until you complete all of the steps.

Create an SVG version of your logo and upload it somewhere on your CDN or wherever you serve your images. You can probably put it in the same directory as the ubiquitous favicon.ico file.

Add a TXT record to your DNS that points to the newly uploaded SVG file.

default._bimi.acme.com

v=BIMI1; l=https://cdn.acme.com/_email/logos/acme-icon.svg

Configure your domain names

If you’re putting together your email system from scratch (i.e., you work for a 1-10 person startup and you happen to be the devops guy), see my other post on how to set up your email domains and SPF records. This is a hairy process that IT people need to be involved with because it requires mucking with DNS and planning out how you will send emails in the future. Do it right early!

Configure DKIM & SPF for your domains.

In order to pass DMARC both your SPF and DKIM need to validate.

The SPF record means that you added the 3rd party services to your DNS, as described above.

DKIM is a signing key that you share with your email sending service(s). These services add the DKIM signature to the headers of every message they send for you, so that recipient ISPs can verify that emails carrying your domain actually come from you.

In order to qualify for BIMI, you need to make sure your SPF and DKIM are “aligned.” For example, if you use Sendgrid, you should have sendgrid in your SPF and sendgrid in the DKIM signature.

Create your DMARC rules

In order for your logo to show up on emails, your DMARC must be set to “quarantine” or “reject”.
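For example, a quarantine policy could be published as a DNS TXT record like this (acme.com and the report mailbox are placeholders):

_dmarc.acme.com

v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@acme.com

The rua= address is where ISPs send the aggregate reports you’ll monitor in the next step.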

Now, monitor the results of your DMARC (SPF+DKIM) configuration

Using a service such as Valimail, you can get reports of which domains and IP addresses are attempting to send mail using your domain names. If you recognize IP addresses or domains that are “failing” but should not be, then you need to check the configuration of those services. Keeping with the Sendgrid example: if you are using them as your provider and you see sendgrid.net in your “Mostly failing IPs” box, then something is wrong with your SPF/DKIM DNS configuration.

Definitely a phish/spam/spoofer

Once you have DMARC set to quarantine, and your SPF and DKIM are “aligned,” your BIMI logo might start showing up!

I say “might” because not all email providers support the BIMI standard. Anecdotally, as of 2020, I believe Hotmail and Yahoo Mail show logos in emails. Gmail has its own brand of JSON-LD syntax to jam logos into the inbox.

Iterable recipe: inject data from a custom event into userProfile for email template

An email I’ve been working on needs to refer back to information from a previous event.

... TEMPLATE ...

Hi {{firstName}}, 

Thank you for watching {{title of movie I watched}}!

... RENDERED ...

Hi Jeff, 

Thank you for watching STRANGER THINGS!

The problem here is that I wanted to reference, within a blast email, the title of a video that was watched in the past. This is not possible without saving the information somewhere in Iterable. In my case I decided to store the data on the user’s profile.

To save data from an event onto the user profile I use a workflow and the Change Contact Field node.

The event data might look like this:

{
    "eventName": "Video : Watched",
    "eventType": "customEvent",
    "email": "<EMAIL_ADDRESS>",
    "dataFields": {
        "video_title": "Stranger Things"
    }
}

  1. Create a workflow
  2. Add node “change contact field”
  3. In the CHANGE CONTACT FIELD node define the data I want to save in JSON format.
{"video_title":"{{video_title}}"}
## (You should use better field names than this) ##

Now when the event comes through, Iterable replaces the template variable and saves the new value on the user profile, making it accessible in any email template.

Here I can put the saved video_title into the subject line or email template body using Handlebars.
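For example, a template line might read (using the field name saved by the workflow above):

Thank you for watching {{video_title}}!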

Podcast generic takeaways

As a long-time wantrepreneur with a brutal commute, I listen to a lot of podcasts about business, real estate, investing, and online marketing. Podcasts and books in these realms have common threads that I believe must be basic traits for being “successful” at doing what you want to do.

The best way to learn is to start doing it.

Most people never even try. Just start. Do little things. Get over the fear of failure. Trying and failing is better than not trying at all. Take “massive action.”

Basically every business-related podcast mentions this: BiggerPockets in almost every episode, Grant Cardone for sure, Pat Flynn pretty regularly.

Learn from a mentor or coach.

On the Rich Dad Radio podcast, Robert Kiyosaki of Rich Dad, Poor Dad constantly pushes people to get a coach. Pat Flynn of Smart Passive Income frequently touts coaching as well. Yes, they have some financial interest in persuading you to use their companies as your coach, but you don’t have to use them. I’m starting to think it could be valuable to me. Tim Ferriss uses many coaches to be more efficient (not a shortcut!) when picking up new skills.

Network, network, network.

It doesn’t matter if you’re an introvert. Networking can be achieved online in your pajamas through forums, social media, and Facebook Groups. Face-to-face meetings are better but the point is that building honest relationships with people is fundamental to success. 

Give more than you receive.

Serving others well (especially helping out your network) will return 100x whatever time/energy/expense you put in. Karma. Don’t give with the expectation of something in return. This is basic human kindness but I think it does take some effort.

Limit number of php-cgi processes spawned by nginx

Here’s how to limit them. My file was around here somewhere:

sudo nano /usr/bin/php-fastcgi

Original file looked something like this.

#!/bin/bash
/usr/bin/spawn-fcgi -a 127.0.0.1 -p 10005 -u www-data -g www-data -f /usr/bin/php-cgi

Added -C argument to enforce a limit.

#!/bin/bash
/usr/bin/spawn-fcgi -C 2 -a 127.0.0.1 -p 10005 -u www-data -g www-data -f /usr/bin/php-cgi

http://forum.slicehost.com/index.php?p=/discussion/3671/limit-number-of-php-cgi-processes-nginx/p1

I just got a notice from my VPS host that my box was using too much swap and therefore impacting other users on the machine. As such, my provider did a hard reset on my instance which summarily stopped my web app (I don’t start the app on startup. I should.)

I had just upgraded my server a few versions up the Ubuntu chain because of the Heartbleed SSL bug, so the problem was obviously something related to the upgrade. I’m not super savvy at figuring out which components changed, so I figured if I could just cut back on memory usage, things wouldn’t go to swap. My server runs an app and a couple of old Drupal blogs that don’t get much traffic but that I like to keep around for nostalgia. So I figured I could sacrifice substantial performance on the blogs.

Things I changed to hopefully save memory:

/etc/mysql/my.cnf
* reduce max_connections from the unspecified default to 50

/etc/php/cgi/php.ini
* reduce memory_limit from 128M to 64M

/usr/bin/php5-cgi
* add “-C” flag to constrain the number of processes to 3 (default was 5)

The php5-cgi change had by far the largest effect because each php5-cgi process was using 25MB per thread (~125MB, or 50% of my 256MB VPS).