Welcome to the interview; please sit down and choose a color

A superstar engineer with a bad attitude can place more drag on your engineering org than the value they add, which is why I like to focus my interviews on attitude and culture. Naturally, the candidate is on their best behavior, so how do you break down these barriers and see what they’re really like?

Let’s play a board game!

A lot of us at Sincerely enjoy playing board games. If you visit us during lunch, you’ll often catch us playing Chess, Blokus, Settlers, Carcassonne, or Acquire; lately we’ve come to enjoy playing a turn a day of the Game of Thrones board game (it’s a 6-7 hour game!). Through these sessions, I’ve come to realize how much game playing styles mirror how people interact with their coworkers away from the game table. One day when we had an engineering candidate on site, I got to wondering if we should invite them to the table.

How much can you learn?

We’ve played games where @hua was losing so badly that he was effectively out of the game. Yet, he’d still show up every day to play a round, in good spirits and doing what he could to influence the game. Likewise, he has the unparalleled ability to make lemonade from lemons, and I’ve never seen him in bad spirits. Even in stressful situations, Hua has a smile on his face.

Some time ago, a talented engineer joined our team who would have had so much potential if not for a stubborn attitude. Code reviews were always a one-sided chore, as we’d get so much pushback. Lo and behold, a pattern emerged when others played games with him: he’d get so angry at others for attacking him that they’d just stop doing so. And so went the code reviews – they became thinner and fewer, and his code more isolated. Would an interview game have exposed this attitude and spared us the bad hire, I wonder?

The ideal interview game

The ideal game involves strategy (over chance), plenty of interaction, and an opportunity for alliances. Game of Thrones would be ideal if not for its 6-hour play time; the ideal game lasts no more than an hour. I’ve played dozens of great strategy board games, but three stand out in my mind as best for an interview:

Carcassonne: Plenty of interaction, strategy, and some alliance possibilities. Easy to learn for new players, and quick to play.

Blokus: Fun & easy to learn with clear strategy. No opportunity for alliances, but plenty of interaction.

Settlers: Plenty of interaction, fewer alliance possibilities, some strategy difficulties for new players. A tad bit on the long side, but some engineers will already know how to play.

What to look for

In general, you’ll learn a lot about a person and their future interactions with your team just by playing a game with them. Just as they say there is a lot of truth in a joke, I believe that people often let their guard down when playing a game, and expose more of who they are. There are a few interactions to pay particular attention to:

How are they at winning and losing?

There’s nothing wrong with savoring a win, but players who gloat over successes tend to be the engineers who have trouble interacting with and leading teammates later on. Being a sore loser can stem from jealousy issues, which hints at poor teamwork skills. The player that faces a battle or a game with grace despite the outcome will most likely be a good teammate.

How do they manage their alliances?

If the game involves making and breaking alliances, it can be telling to observe how a player manages those relationships. Do they take advantage of opportunities and break alliances at appropriate times? Players who maintain permanent alliances even if it means losing a game are telling you that they’re loyal but may lack self-initiative. Players who “take one for the team” by preventing a particularly strong player from winning a round are showing you they are team players. Players who make a habit of breaking alliances too early may have difficulty with impulse control and bigger picture thinking.

How effectively do they learn?

How quickly does the candidate learn the rules to the game, and do they enjoy the process? How quickly do they pick up strategies on their own? How do they deal with uncertainty if a rule or strategy isn’t explained to them? These can indicate how quickly they’ll pick up new technologies and stay ahead of the curve.

It’s all in good fun!

By the time the candidate has walked through your door, they’ve probably already gone through several days of technical interviews. Imagine their surprise when you break that monotony by asking them to play a game with the team. If nothing else, you’ve all just bonded over a good game and made your company stand out in their mind as one that values culture.

I’m looking forward to implementing and refining our own interview game process at Sincerely and would love to hear your thoughts and feedback. The next time you have a candidate in for an interview, consider making it a group interview over a friendly board game and tell me how it goes.

PS: We’re hiring!


Marriage finances for the modern era: a proposal

My wife and I got married 2 years ago today. It’s been a wonderful ride! Looking back, one decision that’s worked out well for us is how we manage our finances. As I think it’s a model that could work well for most couples, improving both happiness and financial health, I wanted to tell you about it.

I subscribe to the notion that marriage is a partnership. It isn’t about merging two into one – it’s about two individuals planning the rest of their lives together.

At first we considered the traditional “share everything” approach to marriage finances: funnel everything into and out of a single joint account. While still the most socially acceptable choice, for two upwardly mobile professionals with individual wants and needs, it seemed to lack flexibility. On the flip side, keeping everything independent as we did prior to marriage wouldn’t scale for the sorts of long term planning we wanted to do, like buying a house or having kids.

We ended up going with a halfway approach: we kept our individual accounts, and added a shared joint account to which we contribute a percentage of our paychecks every month (say, 60%). A percentage makes more sense than a fixed dollar amount because it scales to life’s changes as they come along: if one of our startups fails or we become pregnant, the working one would share more of the load. Most payroll providers offer options for percentage contributions, so this is both easy and automatic.
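
To make the scaling concrete, here’s a toy sketch in Python (the salaries and the 60% rate are made up for illustration) of why a percentage tracks life’s changes where a fixed dollar amount wouldn’t:

```python
def monthly_contributions(incomes, rate=0.60):
    """Each partner contributes the same percentage of whatever they
    earned that month; the joint account receives the total."""
    per_person = {name: income * rate for name, income in incomes.items()}
    return per_person, sum(per_person.values())

# Both partners working (invented numbers):
_, joint = monthly_contributions({"a": 8000, "b": 6000})
# joint == 8400.0

# One startup folds: the working partner automatically carries more of
# the shared load, with no renegotiation needed.
_, joint = monthly_contributions({"a": 0, "b": 6000})
# joint == 3600.0
```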

This system has worked out great for us because it takes the contention out of minor everyday financial decisions, while keeping our sights on long term planning. If Heather wants to buy some new shoes or I the latest crowd-funded board game, we use our individual accounts. That 3d printer we’ve had our eyes on? Shared (pitter-patter, my heart). A big purchase on the horizon? We’ll start contributing a higher percentage to the pot.

With how much you hear about couples fighting over money issues, and how natural and effortless this system seems to us, I wanted to share it with couples looking for alternatives to the norm.

How do you manage your finances after marriage? Any other marriage tips to share? I’d love to chat about it with you over at Hacker News.

If you found this interesting, you should follow me on Twitter.


The mathematics of team productivity

When it comes to growing the productivity of a software engineering team, I believe there are four basic types of engineers: Adders, Subtracters, Multipliers, and Dividers. I find this framework helpful during hiring as well as determining when to let someone go.

Adders are your standard, talented engineers. They learn and grow over time, striving to improve themselves and their code. They add to your team’s productivity by being team players who strive for excellence.

Subtracters are your below average performers. They complete what is assigned to them, and perhaps even do good work from time to time, but they subtract from the overall productivity of the team. Subtracters write code that must be refactored later, don’t stay current, and generally aren’t passionate about software development. Subtracters can become adders given time and a culture of code reviews or pairing, but you must already have enough adders and multipliers on your team for this to work.

Multipliers are your superstar engineers. They are not only talented, but they level up the whole team. They’re your evangelists of good practices and the ones that the rest of your engineers look to when they have a challenging problem. This is the engineer who stays up late working on a tricky bug, participates in hackathons, and is the first person you go to for the low down on the latest hot technology. They literally multiply the productivity of your entire team through their leadership and upward momentum.

Dividers are those engineers that rot the productivity of your team. They take several forms, but usually as some sort of attitude or behavioral problem. Perhaps they have a toxic attitude, are a crusher of new ideas, or are otherwise disruptive to the rest of your team. They usually get hired because they’re incredibly talented, but they take a group of adders and multipliers and divide their productivity. These are the folks you want to avoid hiring at all costs, but are sometimes the hardest to ferret out during the typical engineering interview. This is why I place the most weight on attitude and culture metrics when hiring engineers, but it’s still a challenge.
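
If you take the framework’s arithmetic literally (purely as an illustration – the weights below are invented), the asymmetry jumps out: a single divider can wipe out the gains of a multiplier and then some.

```python
def team_output(adders=0, subtracters=0, multipliers=0, dividers=0):
    """Toy model: each adder contributes +1 unit of productivity and each
    subtracter -0.5; each multiplier scales the total by 1.5x, and each
    divider halves it. The weights are invented for illustration."""
    output = adders * 1.0 - subtracters * 0.5
    output *= 1.5 ** multipliers
    output *= 0.5 ** dividers
    return output

print(team_output(adders=4))                             # 4.0
print(team_output(adders=4, multipliers=1))              # 6.0
print(team_output(adders=4, multipliers=1, dividers=1))  # 3.0 -- worse than plain adders
```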

How do you avoid hiring dividers? I’d love to hear your thoughts on Hacker News.

If you found this interesting, you should follow me on Twitter.

Thanks to my friend Aamir who first told me about this helpful framework.


I break stuff all the time

Continuous integration as a development practice already feels pretty magical. Imagine writing code and then deploying it to production in one seamless step, all the while knowing that your tests have run and your application is good to go. Until recently, continuous integration was one of those dev tool nice-to-haves that we hadn’t quite found time to implement.

That changed the day we came across CircleCI: running tests is no longer a chore to remember and wait for before every merge to master – it’s just something that happens in the course of committing new code to your branch. We’ve only been using it for six months now, and it has quickly become one of the tools we rely on daily.

CircleCI will run your tests (which have 100% code coverage like ours do, right? 😉 ) whenever you push a new commit and email you if you break something. Honestly, how many times have you deployed what seemed like a simple fix to production, forgetting to run tests first, and ended up breaking something? CircleCI makes this a thing of the past because your tests always run.

Besides being a cinch to set up, it’s the integration with GitHub that seals the deal for me. One sunny day we noticed these curious little green checkmarks next to commits in our pull requests.

Green means go!

The integration is so clean, it looks like a GitHub feature. But clicking on those glorious checkmarks reveals a deep integration with CircleCI. If the dot is yellow and the ‘Merge Pull Request’ button is grey, your tests are being run. GitHub even chides you to ‘Merge with Caution’:

Be careful, young one

Seriously, who wants to be responsible for clicking that? If it’s a red x, you know you broke something. I’m particularly familiar with this state:

Seriously, this happens all the time

But if you see that green check mark, all of your tests passed and you’re good to go! It’s the best kind of magic: I don’t know how on earth they accomplished such a tight integration, but it works wonderfully for our dev flow.

Speaking of, we’ve completely switched to a Pull Request-driven development process here at Sincerely. That is, everything destined for production starts life as a branch and ends up in a Pull Request which is reviewed by one or more teammates. This flow enables better code collaboration (and quality!) without slowing our process by any meaningful amount. And CircleCI integration keeps us honest: GitHub makes it very clear when a PR hasn’t had its tests run yet. You’d have to be riding quite the freight train to mistakenly commit code that breaks a test in production.

Getting started with CircleCI is like the day you switched from SVN to Git. You might spend a few hours rethinking your process and getting used to your new environment, but you’ll quickly realize that you can never go back.

It’s so powerful, I’ve even caught myself and my teammates spontaneously writing unit tests. It’s a sickness I tell you.

Have you tried CircleCI? I’d love to hear your feedback. Feel free to discuss on Hacker News or follow me on Twitter.


My First 5 Minutes On A Server; Or, Essential Security for Linux Servers

Server security doesn’t need to be complicated. My security philosophy is simple: adopt principles that will protect you from the most frequent attack vectors, while keeping administration efficient enough that you won’t develop “security cruft”. If you use your first 5 minutes on a server wisely, I believe you can do that.

Any seasoned sysadmin can tell you that as you grow and add more servers & developers, user administration inevitably becomes a burden. Maintaining conventional access grants in the environment of a fast growing startup is an uphill battle – you’re bound to end up with stale passwords, abandoned intern accounts, and a myriad of “I have sudo access to Server A, but not Server B” issues. There are account sync tools to help mitigate this pain, but IMHO the incremental benefit isn’t worth the time or the security downsides. Simplicity is the heart of good security.

Our servers are configured with two accounts: root and deploy. The deploy user has sudo access via an arbitrarily long password and is the account that developers log into. Developers log in with their public keys, not passwords, so administration is as simple as keeping the authorized_keys file up-to-date across servers. Root login over ssh is disabled, and the deploy user can only log in from our office IP block.
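
Keeping that file in sync boils down to regenerating it from a canonical roster of team keys and pushing it out (with scp or your deploy tool of choice). A minimal sketch in Python – the roster, key strings, and comment format here are invented for illustration:

```python
def build_authorized_keys(team_keys):
    """Render an authorized_keys file from a canonical {developer: public_key}
    dict. Removing someone from the dict revokes their access on the next sync."""
    lines = [f"{key}  {name}" for name, key in sorted(team_keys.items())]
    return "\n".join(lines) + "\n"

# Hypothetical team roster:
team = {
    "alice": "ssh-rsa AAAA...alice",
    "bob": "ssh-rsa AAAA...bob",
}
print(build_authorized_keys(team), end="")
```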

The downside to our approach is that if an authorized_keys file gets clobbered or mis-permissioned, I need to log into the remote terminal to fix it (Linode offers something called Lish, which runs in the browser). If you take appropriate caution, you shouldn’t need to do this.

Note: I’m not advocating this as the most secure approach – just that it balances security and management simplicity for our small team. From my experience, most security breaches are caused either by insufficient security procedures or sufficient procedures poorly maintained.

Let’s Get Started

Our box is freshly hatched, virgin pixels at the prompt. I favor Ubuntu; if you use another Linux distribution, your commands may vary. Five minutes to go:


Change the root password to something long and complex. You won’t need to remember it, just store it somewhere secure – this password will only be needed if you lose the ability to log in over ssh or lose your sudo password.

apt-get update
apt-get upgrade

The above gets us started on the right foot.

Install Fail2ban

apt-get install fail2ban

Fail2ban is a daemon that monitors login attempts to a server and blocks suspicious activity as it occurs. It’s well configured out of the box.

Now, let’s set up your login user. Feel free to name the user something besides ‘deploy’; it’s just a convention we use:

useradd deploy
mkdir /home/deploy
mkdir /home/deploy/.ssh
chmod 700 /home/deploy/.ssh

Require public key authentication

The days of passwords are over. You’ll enhance security and ease of use in one fell swoop by ditching those passwords and employing public key authentication for your user accounts.

vim /home/deploy/.ssh/authorized_keys

Add the contents of your public key file on your local machine (typically ~/.ssh/id_rsa.pub), plus any other public keys that you want to have access to this server, to this file.

chmod 400 /home/deploy/.ssh/authorized_keys
chown deploy:deploy /home/deploy -R

Test The New User & Enable Sudo

Now test your new account by logging into your new server with the deploy user (keep the terminal window with the root login open). If you’re successful, switch back to the terminal with the root user active and set a sudo password for your login user:

passwd deploy

Set a complex password – you can either store it somewhere secure or make it something memorable to the team. This is the password you’ll use to sudo.


Run visudo and comment out all existing user/group grant lines, then add:

root    ALL=(ALL) ALL
deploy  ALL=(ALL) ALL

The above grants sudo access to the deploy user when they enter the proper password.

Lock Down SSH

Configure ssh to prevent password & root logins and lock ssh to particular IPs:

vim /etc/ssh/sshd_config

Add these lines to the file, inserting the ip address from where you will be connecting:

PermitRootLogin no
PasswordAuthentication no
AllowUsers deploy@(your-ip) deploy@(another-ip-if-any)

Now restart ssh:

service ssh restart

Set Up A Firewall

No secure server is complete without a firewall. Ubuntu provides ufw, which makes firewall management easy. Run:

ufw allow from {your-ip} to any port 22
ufw allow 80
ufw allow 443
ufw enable

This sets up a basic firewall and configures the server to accept traffic over port 80 and 443. You may wish to add more ports depending on what your server is going to do.

Enable Automatic Security Updates

I’ve gotten into the apt-get update/upgrade habit over the years, but with a dozen servers, I found that servers I logged into less frequently weren’t staying as fresh. Especially with load-balanced machines, it’s important that they all stay up to date. Automated security updates scare me somewhat, but not as badly as unpatched security holes.

apt-get install unattended-upgrades

vim /etc/apt/apt.conf.d/10periodic

Update the file to look like this:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";

One more config file to edit:

vim /etc/apt/apt.conf.d/50unattended-upgrades

Update the file to look like below. You should probably keep updates disabled and stick with security updates only:

Unattended-Upgrade::Allowed-Origins {
        "Ubuntu lucid-security";
//      "Ubuntu lucid-updates";
};

Install Logwatch To Keep An Eye On Things

Logwatch is a tool that monitors your logs and emails a daily summary to you. This is useful for tracking and detecting intrusion. If someone were to access your server, the logs that are emailed to you will be helpful in determining what happened and when – as the logs on your server might have been compromised.

apt-get install logwatch

vim /etc/cron.daily/00logwatch

Add this line, substituting your email address for the placeholder:

/usr/sbin/logwatch --output mail --mailto {your-email} --detail high

All Done!

I think we’re at a solid place now. In just a few minutes, we’ve locked down a server and set up a level of security that should repel most attacks while being easy to maintain. At the end of the day, it’s almost always user error that causes break-ins, so make sure you keep those passwords long and safe!

I’d love to hear your feedback on this approach! Feel free to discuss on Hacker News or follow me on Twitter.


There’s a great discussion happening over at Hacker News. Thanks for all the good ideas and helpful advice! As our infrastructure grows, I definitely plan on checking out Puppet or Chef – they sound like great tools for simplifying multi-server infrastructure management. If you’re on Linode like us, the above can be accomplished via StackScripts as well.


A user is stealing from us right now and I don’t mind

As I write this, some guy in Florida is using stolen credit cards to successfully steal tens of thousands of dollars of products from us. Or at least, that’s what he thinks he’s doing.

When someone steals, buys, or generates a credit card number with the intention of committing purchase fraud, the typical first step is determining if the card is valid. A stolen number runs the risk of being cancelled at any moment, and nothing stops a promising career in white collar crime in its tracks quite like a decline in the Walmart checkout aisle with $5000 of merchandise in the cart.

The preferred method then is to run a small online transaction on each stolen card. Once you’ve found a valid card number, you re-magnetize a card and the shopping spree begins! This is why if you’ve ever had your card stolen, you’ll almost always see a smaller test transaction at an online retailer before the large purchase at a retail store.

As an online retailer dealing in micro transactions (<$5), we have to be especially cautious about this form of credit card fraud. Most of our products aren’t especially tempting to fraudsters given their customizability (i.e. you can’t resell an Ink card) – but the low transaction amounts are ideal for testing stolen cards. Undetected fraudulent transactions result in chargebacks and rising merchant account fees.

My favorite way ( by far ) of combating this type of fraud is called the hellban. If you’re not familiar with the concept, it’s pretty straightforward and totally insidious: once a user is hell-banned, the site or app behaves normally for them – but none of their actions have any effect. It’s a popular method of forum moderation – if a user starts trolling your members or posting spam, you just hellban them. They’ll eventually give up on your site when no one seems to respond to their posts.

The same concept can be applied to credit card fraud prevention: a user who is hell-banned by our system (either through automated or manual means) sees their purchases go through (with some declines mixed in for realism) and receives ‘fake’ credits that let them buy products we never send. Of course, we’ve completely blocked all credit card transactions from going through at this point – protecting us from the liability of chargebacks.
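
As a sketch of the idea (the function and field names are invented – this isn’t our actual implementation), the charge path simply branches on the hellban flag before the payment processor is ever touched:

```python
import random

def log_fraud_attempt(user, amount):
    pass  # in reality: durable logging of card details, IPs, timestamps

def process_purchase(user, amount, charge_card, rng=random.Random()):
    """Route a purchase. Hell-banned users never reach the real payment
    processor; they just see plausible-looking fake results."""
    if user.get("hellbanned"):
        # Mix in occasional fake declines for realism; either way nothing
        # is charged and nothing ships.
        approved = rng.random() > 0.3
        log_fraud_attempt(user, amount)  # evidence for reporting later
        return {"approved": approved, "charged": False}
    approved = charge_card(user, amount)  # the real processor
    return {"approved": approved, "charged": approved}

# A hell-banned user sometimes "succeeds" but is never actually charged:
banned = {"id": 1, "hellbanned": True}
print(process_purchase(banned, 3.99, charge_card=None)["charged"])  # False
```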

Couldn’t you just delete the user account or ban their IP?

We sure could! This would effectively boot them off our system – but for how long? We are a tempting target for credit card fraudsters, and they expect to be banned for their bad behavior. They’d likely just switch to another VPN, sign up for another free account, and do it all over again, which means I now have another user account I need to hunt down and ban.

A hell-banned user as a rule sticks around for longer, all the while collecting especially poor empirical data on their credit cards. This in turn allows us to collect logs that are helpful in identifying them (and other fraudsters) in the future and reporting their activity to authorities.

Most importantly, it’s especially good sporting fun!

Continue the discussion on Hacker News and follow me on Twitter


Setting up MySQL replication without the downtime

I clearly don’t need to expound on the benefits of master-slave replication for your MySQL database. It’s simply a good idea; one nicety I looked forward to was the ability to run backups from the slave without impacting the performance of our production database. But the benefits abound.

Most tutorials on master-slave replication use a read lock to accomplish a consistent copy during initial setup. Barbaric! With our users sending thousands of cards and gifts at all hours of the night, I wanted to find a way to accomplish the migration without any downtime.

@pQd via ServerFault suggests enabling bin-logging and taking a non-locking dump with the binlog position included. In effect, you’re creating a copy of the db marked with a timestamp, which allows the slave to catch up once you’ve migrated the data over. This seems like the best way to set up a MySQL slave with no downtime, so I figured I’d document the step-by-step here, in case it proves helpful for others.

First, you’ll need to configure the master’s /etc/mysql/my.cnf by adding these lines in the [mysqld] section:

server-id       = 1
log_bin         = /var/log/mysql/mysql-bin.log
binlog-format   = mixed

Restart the master mysql server and create a replication user that your slave server will use to connect to the master:

CREATE USER replicant@<<slave-server-ip>>;
GRANT REPLICATION SLAVE ON *.* TO replicant@<<slave-server-ip>> IDENTIFIED BY '<<choose-a-good-password>>';

Note: MySQL only allows passwords of up to 32 characters for replication users.

Next, create the backup file with the binlog position. It will affect the performance of your database server, but won’t lock your tables:

mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 -A  > ~/dump.sql

Now, examine the head of the file and jot down the values for MASTER_LOG_FILE and MASTER_LOG_POS. You will need them later:

head -n 80 ~/dump.sql | grep "MASTER_LOG_POS"
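
Those values live in a commented-out CHANGE MASTER line near the top of the dump (that’s what --master-data=2 writes). If you’d rather pull them out programmatically than eyeball the grep output, a small Python sketch:

```python
import re

def binlog_coords(dump_head):
    """Extract MASTER_LOG_FILE and MASTER_LOG_POS from the commented
    CHANGE MASTER line that --master-data=2 adds to the dump."""
    m = re.search(r"MASTER_LOG_FILE='([^']+)',\s*MASTER_LOG_POS=(\d+)", dump_head)
    if not m:
        raise ValueError("no CHANGE MASTER line found in dump head")
    return m.group(1), int(m.group(2))

# The line as mysqldump emits it (file name and position will differ):
head = "-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=107;"
print(binlog_coords(head))  # ('mysql-bin.000002', 107)
```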

Because this file for me was huge, I gzip’ed it before transferring it to the slave, but that’s optional:

gzip ~/dump.sql

Now we need to transfer the dump file to our slave server (if you didn’t gzip first, remove the .gz bit):

scp ~/dump.sql.gz mysql-user@<<slave-server-ip>>:~/

While that’s running, you should log into your slave server, and edit your /etc/mysql/my.cnf file to add the following lines:

server-id         = 101
binlog-format     = mixed
log_bin           = mysql-bin
relay-log         = mysql-relay-bin
log-slave-updates = 1
read-only         = 1

Restart the mysql slave, and then import your dump file:

gunzip ~/dump.sql.gz
mysql -u root -p < ~/dump.sql

Log into your mysql console on your slave server and run the following commands to set up and start replication:

CHANGE MASTER TO MASTER_HOST='<<master-server-ip>>', MASTER_USER='replicant', MASTER_PASSWORD='<<choose-a-good-password>>', MASTER_LOG_FILE='<<value from above>>', MASTER_LOG_POS=<<value from above>>;
START SLAVE;

To check the progress of your slave, run the following in the mysql console:

SHOW SLAVE STATUS\G

If all is well, Last_Error will be blank, and Slave_IO_State will report “Waiting for master to send event”. Look for Seconds_Behind_Master, which indicates how far behind the master the slave is. It took me a few hours to accomplish all of the above, but the slave caught up in a matter of minutes. YMMV.

And now you have a newly minted mysql slave server without experiencing any downtime!

A parting tip: Sometimes errors occur in replication – for example, if you accidentally change a row of data on your slave. If this happens, fix the data, then run:

STOP SLAVE;
START SLAVE;

Update: In following my own post when setting up another slave, I ran into an issue with authentication. The slave status showed an error of 1045 (credential error) even though I was able to directly connect using the replicant credentials. It turns out that MySQL only allows passwords up to 32 characters in length for master-slave replication – my longer password was the culprit.

Update #2: An astute reader noted that he ran into a “MySQL server has gone away” error while running the initial import. The solution he found was to raise max_allowed_packet on the slave during the import – for example, in the [mysqld] section of my.cnf (the exact value depends on your data):

max_allowed_packet = 64M

An inside look at the app that powers Sesame

Though Sincerely has been shipping physical goods to our users’ homes since day one, last week’s Sesame Gifts launch marks the first time we’ve done fulfillment in-house. So how does a startup go from shipping apps to shipping boxes? By building an app, of course!

From the start, we knew we wanted a Sesame gift to be more than just a brown box in the mail – that receiving one would be an experience in itself. We also knew that we’d want the same freedom to quickly iterate on new fulfillment and packaging ideas that we’d become accustomed to in software development. So we decided to do it ourselves and transform our beautiful office in downtown San Francisco into a state-of-the-art fulfillment warehouse, like this one:

Amazon's warehouse is like a huge Walmart that caters to pro shoppers

Ok, we’re not quite there yet! Tasked with setting up a fulfillment center in less than 8 weeks, our team created an internal iOS app that helps ensure orders get out the door accurately and efficiently. I’m quite proud of what we’ve accomplished and wanted to give you an inside look at what happens when you place that Sesame order for your mom that you’ve been meaning to send this week:

Introducing Rocket: the app that helps our team ship beautiful Sesame gifts.

The objectives of Rocket are simple: provide a list of orders that need to go out, print shipping labels for those orders, enforce accuracy throughout the process, and track everything. And of course, keep any bobcats from ending up in our boxes.

And who says internal apps can’t have a little fun first? Maybe it was because we started speccing Rocket the week of the successful SpaceX launch, but first-time users of the app are greeted by an animation of a rocket ship taking off, smoke trailing behind it, with Elton John’s “Rocket Man” playing in the background. If you think it’s a bit much, I’ll agree with you only when I stop smiling every time I hear a new user open the app.

A user is given two possible activities: Packing or Shipping. Because we have a limited number of gift sets and a relatively complex packing process, we chose to keep these two processes decoupled for operational efficiency. Gift boxes are packed from inventory and placed in a staging area; when orders come in, packed boxes are pulled from staging, sealed with a personal card from the sender, and shipped.

Packing a box

A user who selects Packing will see a list of gift sets sorted by priority (available boxes vs. pending orders). Choosing a box type shows a list of items to grab from inventory. The user enters how many boxes they want to pack and clicks Print, which prints a unique QR code inventory control label for each box. The packer affixes the QR code labels to the packed boxes, places them in the staging area, and moves on to the next box.

Rocket talks to our server via an API, which lets us keep track of things like how many boxes have been packed and are ready for fulfillment, as well as who packed each one and when.

List of Rocket Orders

If the user chooses Shipping, they see a list of pending orders as they come in, sorted by priority. Selecting an order from the top of the list, they see a screen that tells them what box type to grab from staging, and asks them to scan the QR code on the box with the camera.

Scanning the Box

This step is important, because it essentially “checks in” the box to the order: the QR code ties the box to a specific id in our database, so we know that the scanned box matches what was ordered (in case it was accidentally placed on the wrong shelf), and we know who packed the box and when. So if Joe Customer writes me a week later telling me that someone took a bite out of one of the chocolate truffles in his Ultimate Unwind box, I’ll know that Jane Packer has a sweet tooth.
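
The check-in logic itself is simple. Here’s a sketch (names and fields are invented – this isn’t Rocket’s actual code) of the validation that runs when a box is scanned:

```python
def check_in_box(order, scanned_qr, inventory):
    """Validate a scanned box against an order before printing the shipping
    label. Returns the box record, now tied to the order, on success."""
    box = inventory.get(scanned_qr)
    if box is None:
        raise LookupError("unknown QR code -- box was never registered")
    if box["gift_set"] != order["gift_set"]:
        # Wrong shelf: the scanned box doesn't match what was ordered.
        raise ValueError(f"order wants {order['gift_set']}, scanned {box['gift_set']}")
    # Tie the box to the order; packer/date info rides along for later.
    return {**box, "order_id": order["id"]}

inventory = {"QR-001": {"gift_set": "Ultimate Unwind", "packed_by": "jane"}}
order = {"id": 42, "gift_set": "Ultimate Unwind"}
print(check_in_box(order, "QR-001", inventory)["packed_by"])  # jane
```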

Shipping Station

Provided everything matches, the shipping label and the personalized greeting card are immediately printed at their shipping station. All the shipper needs to do is insert the card inside the box, seal it, and place the shipping label on the outside. An email is then dispatched to the customer, informing them that their order is on the way, along with their UPS tracking number.

You may be wondering how we print labels and PDF greeting cards from the phone when the shipper clicks print. Actually, Rocket hits the server API, which then sends a request to the local CUPS server at the shipping station to print the label and card. But that’s all behind the scenes – the important bit is that our packers and shippers don’t have to worry about marking orders as shipped, reconciling inventory, matching greeting cards with orders, or printing labels on their own – it all just works with timing synchronized to their workflow.

Launching Sesame and developing Rocket was a fun technical and operational challenge for the team here. It was a great mix of app building, API hacking, and interfacing with the real world. We’ve only just begun using Rocket, but it’s already helping us keep up with the rapid growth of our new product. As we expand our operations, we expect Rocket to be a hidden ingredient of our special sauce.



The silent state of the explosion

When I was 15 or so, I started a BBS using my family’s Macintosh LCIII, mailing several months worth of my allowance money to a guy named Terry Teague who promptly mailed me back a 3.5″ floppy of WWIV BBS.

My BBS was pretty low-frills as far as they go, focusing on shareware distribution and other nerdtastic things – but I did a decent job advertising it through other local BBS systems (you twitter kids have it easy, let me tell you). Within a week or so, my userbase was up to something like 10 people (!), and it became apparent that I’d need a dedicated phone line. I can’t recall exactly how I convinced my parents (something dripping with guilt about not wanting me to be “unprepared for the future” I hope), but that lasted for another few months, until it was clear that I really needed a second modem because it was always in use.

Back then, it was devastatingly obvious when things were “taking off”. I’d go to use my computer, and the modem would be in use. And handling an explosion of growth was a chore: ordering equipment, calling the phone company.

When I set up my first web site, I used an old PC and our DSL line. It could handle like 2 people visiting the site at once before its disk would whirr and the fan would enter “jet mode”. A digg back then (had it existed of course, you kids) would have been devastatingly obvious – my computer would have probably just exploded.

When Brian Hawthorne and I launched, we got dugg, and hard. Our little Rails site got over 25 million hits that first weekend. Needless to say, with Rails 0.83 or whatever we were using back then, this wasn’t even remotely easy to wrangle. One moment it was elation (“we made the front page!!”), the next it was “um, the servers aren’t looking so good”. This time it took calls to our data center and some refactoring, and we were back in action.

Today, an explosion of users is handled silently by scaling platforms like S3 and Mosso. You don’t really even need to monitor your server status if you don’t want to. Scaling up to handle the explosion is effortless. Not having to do the work is awesome, absolutely, and in saying this I feel a little like that old man in the grocery store mumbling about walking uphill in the snow, but this lack of involvement has negative consequences as well.

Over this past weekend, I launched my first iPhone application to the App Store. I half expected to sell a single unit (to myself) and let it be a fun learning experience. This morning, I looked at my sales reports and was shocked at what I saw – I’d sold lots more than 1. Still modest numbers, definitely, and it was thrilling, absolutely, but it made me feel strange. Call me a glutton for punishment, but I sort of miss the good old days of the whirring fan and that blinking busy light.

I’d liken it to the difference between jogging along Crissy Field looking out at the Golden Gate Bridge, and jogging on the squeaky treadmill at 24 Hour Fitness. Yeah, you’re still running, and you feel the endorphins all the same, but you don’t have that connection with the pavement, the air, the world, that you do when you’re outdoors in a beautiful place. That whirring fan or real-time 14.0 load average are the indicators I once used to determine whether something was successful.

I want to see those sales numbers climb in real time, see that hockey-stick graph, and know “I made this, I made something people want!”. Without that, or some indication of growth or the “challenge” of scaling, it all just feels too easy. I’m a technology worker, Apple; I grew up in the days before eWorld. Let’s at least pretend this is hard, why don’t you?


Join me at DIYdays SF today

2:15 to 3:15pm
PANEL: The art and science of crowdsourcing
There is power in the crowd. When they rise up they can fund, create, distribute and promote. But how do you turn an audience into an active community where members become collaborators? Panelists: Slava Rubin (indieGoGo), Skot Leach (Lost Zombie), Jason Harris (Mekanism), Bryan Kennedy, Blair Erickson (Millions of Us). Discussion Leader: Lance Weiler

I hope you come, it should be an interesting discussion!