Production NodeJS: The Complete Guide

After searching the internet for hours and attempting to set up my production server, only to corrupt /etc/passwd and need to start over, I decided to document what I do in case I ever need to do it all again. From my findings, there are a ton of blog posts telling how to do one thing I need, and most are either way outdated or don’t consider modern security practices. Here’s my take.

Goal: Release my NodeJS app onto the internet (for as little money as possible)

The only paid services I’ll be using are Digital Ocean and Namecheap. I could host on Heroku, use a MongoDB service, a logging service, and some whatever service for whatever else. But then I’d be spending $100/mo on all those $5-10/mo services just to have a production-ready app. For this app, I’ll be going with a single Digital Ocean droplet at $0.007/hr (aka $5/mo) and pay ~$11.00 for a year of my domain.


  • Namecheap for domain registration
    • With Comodo PositiveSSL for the SSL certificate, or just go with the free Let’s Encrypt certificate
  • Digital Ocean for VPS hosting running Ubuntu 14.04
  • JustHost for email
  • NodeJS for the web application
  • NGINX for the web server as a reverse proxy
  • MongoDB for the database
  • PM2 for the node process manager

Table of Contents

  1. Buy a Domain Name
  2. Get NodeJS Hosting
  3. Connect Your Domain to Your Droplet and Setup IPv6
  4. Setup Email
  5. Setup Your Name Servers
  6. Create a New User and Disable Root
  7. Update Your System
  8. Update Node and NPM
  9. Install MongoDB
  10. Install the SSL Certificate onto Your Server
  11. Create a Test App
  12. Install NGINX with SSL
  13. Test Your Nginx Configuration
  14. Create The User to Run Your App
  15. Install Git
  16. Install PM2
  17. Test Your PM2 Setup
  18. Setup PM2 Log Rotation
  19. Setup PM2 Deployment
  20. Finalize the PM2 Startup Configuration
  21. Considerations if Staying at 512MB
  22. Checkup on Your Security
  23. Back to Being a Developer

Buy a Domain Name

First things first: buy a domain name. I went with Namecheap for this app, but it’s just a domain name, so go with the cheapest option you can. Can’t think of a clever name? Me neither; sometimes it’s the hardest part of all this. For the purpose of this guide I will be using ‘example.com’.

[Screenshot: searching for the domain on Namecheap]

Looks like it’s available. I’ll just register it real quick.

Add it to your cart. Check out. Search Google for promo codes. Pay. Next, set up Namecheap’s two-factor authentication (2FA) right now. Don’t wait, do it now.

Get NodeJS Hosting

I’ll be using Digital Ocean. If you don’t want to use Digital Ocean, then this is the end of the guide for you. I’m going with DO because it’s cheap, it’s pretty, it scales very well in both cost and performance, and I don’t pick fad hosting just because someone said so. Google around and make the best choice for the time you are reading this.

Make an account and create a droplet. I chose to use the ‘One-click Apps’ to install node, because that’s the easiest method for node. This will be on Ubuntu 14.04.

[Screenshot: droplet size selection]

Since we are just setting up right now, I went with a $0.007/hr droplet; it’ll take a few hours, and we can always increase the specs later. Choose a data center near you. I’m using San Francisco 1. Generally it should be placed near your customers, but DO is so fast it won’t matter right now. Select IPv6, since you live in the 21st century.

Never leave a server with root able to log in. We will first use an SSH key to log in as root, create a new user, then remove root’s ability to log in over SSH. GitHub recommends against creating a new key if you already have one, since that would break anything that already uses your existing key for SSH authentication.

To add an SSH key, click the ‘New SSH Key’ button and follow the instructions there. The current How To link points here.

Once that’s done, create the droplet. Last, setup 2FA for Digital Ocean.

Connect Your Domain to Your Droplet and Setup IPv6

After your droplet has been created and is running, click ‘Add a domain’ from the ‘More’ menu on the droplets page.

[Screenshot: the ‘Add a domain’ form]

Type your domain into the ‘domain’ box and make sure your new droplet is selected. This will create a new A record for the IP address assigned to you. (If you decide to create a snapshot and destroy your droplet, it will have a new IP when recreated, so you must update the record.) To have a www subdomain (or any subdomain), create an A record with ‘www’ as the name and ‘example.com.’ (notice the period at the end) as the hostname.

For IPv6 we will need to create an AAAA record. If you go to your droplet, at the top it will show the IPv6 address.

[Screenshot: droplet IPv6 address]

Copy that address, go back to the networking page, then click domains. Select the magnifying glass to view your domain’s DNS. There you’ll add an AAAA record for your host with ‘@’ and one for your ‘www’ subdomain. I’ve always used CNAMEs for subdomains, but you can’t mix A and CNAME records for the same name; there’s a Stack Overflow answer that explains all about that. My mail exchange does not support IPv6, but if yours does you will need to do the same as above for your mail server, with its IPv6 address. Because my mail exchange doesn’t support it, I’m technically not fully IPv6 compatible, but that’s O.K. since basically all IP clients (browsers/mail/etc.) should fall back to IPv4.

My final setup with email looks like this:

[Screenshot: final DNS records, including email]

Setup Email

As far as the email setup goes, I already had email through JustHost for my personal website. I registered my domain with my JustHost account and copied over the DNS records that relate to email. These were the A record for mail, the CNAME records for pop and smtp, the MX 0 record, and the TXT record for SPF verification. I’m doing this because I’m already paying for hosting at JustHost, which includes email. When that expires, I may move away and choose a different option. I have a few other websites and each has its own email, so it’s quite cheap for me to do email this way. All my email is forwarded to my Gmail account, so I still get the benefit of using Gmail. You could also look into managing email yourself, but I really don’t recommend it.

Setup Your Name Servers

At the bottom of the DNS records you can see three name servers. We will tell Namecheap these are the name servers for our domain. Login to your Namecheap account. Next to your domain, click ‘Manage’. Set the DNS to Custom and enter in the three name servers, then click the green check.

[Screenshot: setting custom name servers on Namecheap]

DNS records usually take about 30 minutes to propagate. This is set according to your TTL. You can find your current TTL by running a dig command for your domain. Below is the dig for my domain. The TTL is in seconds, so 21243 seconds is roughly 6 hrs…

[Screenshot: dig output showing the TTL]
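If you’d rather not eyeball the math, the seconds-to-hours conversion is easy to script; the 21243 value below is the TTL from my dig output:

```shell
# convert a DNS TTL in seconds (the first numeric field in dig's answer)
# into hours; 21243 is the value from the dig output above
ttl=21243
awk -v t="$ttl" 'BEGIN { printf "%.1f\n", t / 3600 }'   # prints 5.9
```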

Quite a bit so far, and as far as progress goes we’re only about 1% of the way done. This is what it’s like to be a full stack dev (well, mostly the next part is). At this point you have a domain and hosting. Make sure to ping your domain and verify that it resolves to your Digital Ocean IP address.

Create a New User and Disable Root

For this we will be mostly following this guide:

SSH into your droplet as your only user: root.

ssh root@your_server_ip

Add a new user. This will be your primary account, so choose your own username.

adduser example

You will be prompted for a password. Soon we will make it so you won’t use this password to login, but you will need this password to execute commands as root. For all the other info, just leave it blank if you wish.

Add your account to the sudo group so you can execute commands as root:

gpasswd -a example sudo

I’ve seen people use ‘gpasswd’, ‘usermod’, ‘adduser’, and ‘visudo’ commands to add a user to the sudo group. I went with gpasswd.

Copy over the public key we installed on the server for root so it can now be used by your new user:

mkdir -p /home/example/.ssh
cp /root/.ssh/authorized_keys /home/example/.ssh/authorized_keys

Change the permissions of the copied key for your user:

chgrp example -R /home/example/.ssh
chown example -R /home/example/.ssh
chmod 700 /home/example/.ssh
chmod 600 /home/example/.ssh/authorized_keys
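If you want to confirm the modes came out as intended, here is a self-contained sketch that recreates the same layout in a throwaway directory and reads the modes back with stat; on the server, you would run the stat line against /home/example/.ssh instead:

```shell
# recreate the .ssh layout in a temp dir and verify the 700/600 modes
d=$(mktemp -d)
mkdir -p "$d/.ssh"
touch "$d/.ssh/authorized_keys"
chmod 700 "$d/.ssh"
chmod 600 "$d/.ssh/authorized_keys"
stat -c '%a' "$d/.ssh" "$d/.ssh/authorized_keys"   # prints 700 then 600
rm -rf "$d"
```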

This is basically the same as what the guide says, just a different method, done entirely on the server. On your local machine, open another terminal and test that you can log in with your new user without having to type a password:

# from your local machine
ssh example@your_server_ip

The guide does not mention disabling password authentication, but all that high-security public key effort is useless unless you also disable password authentication.

Back on the server, open the sshd config file. I used emacs, but use whatever you’re comfortable with.

emacs /etc/ssh/sshd_config

Find and change these lines from: (they aren’t actually next to each other)

PermitRootLogin yes
#PasswordAuthentication yes

to:

PermitRootLogin no
PasswordAuthentication no

Then restart the SSH service:

service ssh restart

Your SSH key access works because the sshd_config defaults ‘RSAAuthentication’ and ‘PubkeyAuthentication’ are set to ‘yes’. To test that you set everything up correctly, attempt to SSH from inside your server:

ssh localhost

You should see this message:

Permission denied (publickey).

Now try it from your local machine, and you should be able to log in just fine. The attempt from the server fails because your droplet doesn’t have an SSH key generated to log in to itself; only your local machine has that key.

Now log out of your server. This should be the last time you are the root user; from now on you will use ‘sudo …’ to perform root actions. If you are loving 2FA so far, Digital Ocean has a guide on how to install 2FA for SSH. I think all this public key stuff is pretty secure already, so I would only recommend 2FA for SSH if you aren’t using an SSH key, or if you have SSH key access from several locations.

Log in with your new user account and let’s continue:

ssh example@your_server_ip

You have now secured your server. From here on we will finally be setting up our production NodeJS server. You may want to make a snapshot of your current droplet: you won’t have to do all this next time you set up a server, and in case you mess up in the following steps (like I did) you won’t have to start from scratch (like I am now…)

Update Your System

I assume everything is up to date, but just to be sure update and then upgrade the packages. If you want, you can also run autoremove to clean up any unused packages.

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get autoremove

Edit: Update Node and NPM

After completing my guide I realized that the default versions of node and npm that Digital Ocean installed were very old. This caused the wrong versions of my packages to be installed. You can use NodeSource’s setup script to get the latest version. Currently it’s at 6.x, so I used:

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs

For npm, you can have it update itself:

sudo npm install -g npm@latest

For some packages you will need build tools:

sudo apt-get install -y build-essential

Install MongoDB

Many guides I’ve read each gave their own way of installing MongoDB, and it turns out most of them describe how to install old versions of the database. I will show you how to install the latest version and how to verify it. It’s also important that the version running on your dev machine matches, so there aren’t differences when you push your code to the production server. I recommend always running the latest version on all your machines.

We will follow the official guide for installing MongoDB on Ubuntu. I won’t replicate its steps; instead I will only tell you how to verify you are getting the latest version.

First they ask you to import the public key. All public keys for Ubuntu packages can be found on the Ubuntu keyserver. Search for MongoDB and verify that the key is for the latest stable version. The latest version I see is named:

MongoDB 3.2 Release Signing Key <>

At this time there is a 3.4 key, but that isn’t the latest stable release. Next, you will create a source list for apt-get to use when searching for MongoDB’s packages. MongoDB’s guide gives a one-line command for this. To make sure it’s the correct command, you can visit the repo URL and find the entry they are referencing. Run this command to get your Ubuntu version and codename:

lsb_release -a

Mine shows:

Distributor ID: Ubuntu
Description:    Ubuntu 14.04.4 LTS
Release:        14.04
Codename:       trusty

So I can see that the repo URL uses the latest version, 3.2, and the codename trusty. There is an option for ‘stable’ instead of 3.2, but you should stick with a known version number. Sometimes upgrades aren’t compatible, and you wouldn’t want an apt-get upgrade to break your app.

After an install, the service usually starts running. You can verify it with:

sudo service mongod status

From here we will be following a Digital Ocean guide on setting up NodeJS with Nginx, along with git and PM2. We’ll skip its sections on node installation since we chose a NodeJS One-Click App when making our droplet. One thing to note: I’ll assume we are using port 3000 for our node app.

Install the SSL Certificate onto Your Server

We will loosely follow a couple of guides, except I will focus on using Nginx and Namecheap.

Log in to Namecheap, enter your 2FA code, select the ‘Manage’ button next to your domain, click ‘Product List’, select your SSL certificate, and click ‘Activate’. Since I’m redoing this step, I have to click the lock icon next to my domain, click ‘Manage’, and then reissue the certificate. You can reissue your certificate as many times as you like until it expires; the old certificate will be revoked and become invalid.

Now you should see a page like this:

[Screenshot: the Namecheap certificate activation page]

You will now generate a CSR and key file. You should follow the directions provided, but I will give what’s necessary. Do this in your account’s home directory for now; we will move the files to a secure location later. You’ll see I named the generated files with my domain name so I can keep track of them easily. Also make sure you are using at least the minimum number of bits for your time period. Comodo requires at least 2048 right now.

openssl req -new -newkey rsa:2048 -nodes -keyout example_com.key -out example_com.csr
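To double-check what that command produced, you can inspect the generated key with openssl. This sketch generates a throwaway key and CSR in /tmp so it’s self-contained; run the same inspection line against your real key file:

```shell
# generate a throwaway 2048-bit key + CSR (non-interactive via -subj),
# then read back the key size from the key file
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=example.com" \
  -keyout /tmp/demo.key -out /tmp/demo.csr 2>/dev/null
openssl rsa -in /tmp/demo.key -noout -text | head -n 1   # shows "(2048 bit)"
```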

Cat out the CSR, copy it, and paste it into Namecheap.

cat example_com.csr
Make sure to select the option with ‘Nginx’ and click Next. Choose the verification method that’s easiest for you. After verification you will be emailed a zip of your certificate and CA bundle.

Upload the files to your server. You could use SFTP over your SSH key, but I will show how to do this with scp, all from the command line. On your local machine, open another terminal. I unzipped the file Comodo sent. Here I’m copying the entire unzipped folder to my remote user’s home directory:

scp -r /Users/me/Downloads/example_com/* example@example.com:~/

You can close your local terminal and focus back on the remote machine. For Nginx, we will need to combine the two files provided. Use cat to append the bundle file to the certificate file.

cat example_com.crt example_com.ca-bundle > example_com.chained.crt

We will move the chained certificate and key files once we install Nginx. You may delete the original files (the CSR, the unchained certificate, and the CA bundle).
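Before deleting anything, it’s worth verifying that the certificate and key actually belong together by comparing their RSA moduli. This sketch demonstrates the check with a throwaway self-signed pair so it’s self-contained; run the same two openssl commands against your chained certificate and key:

```shell
# make a throwaway self-signed cert + key pair to demonstrate the check
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.com" \
  -keyout /tmp/t.key -out /tmp/t.crt -days 1 2>/dev/null
# hash the modulus of each; the two digests must be identical
openssl x509 -noout -modulus -in /tmp/t.crt | openssl md5
openssl rsa  -noout -modulus -in /tmp/t.key | openssl md5
```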

Create a Test App

We will first create a “Hello World” app to make sure we are setting up PM2 correctly. Once everything is verified, we will remove this app and install our actual app.

In your home directory, create a new file:

emacs app.js

and paste in this code:

var http = require('http');
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World\n");
});
server.listen(3000, '127.0.0.1');
console.log("Server running at http://127.0.0.1:3000/");

When we set up our Nginx configuration, we can test it with this simple node site.
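You can also smoke-test the app on its own before Nginx enters the picture. A sketch that assumes node and curl are installed; it inlines a copy of the server so the snippet is self-contained:

```shell
# start an inline copy of the Hello World server, hit it once, then stop it
node -e 'require("http").createServer(function (req, res) { res.end("Hello World\n"); }).listen(3000, "127.0.0.1");' &
pid=$!
sleep 1
curl -s http://127.0.0.1:3000/   # prints: Hello World
kill $pid
```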

Install NGINX with SSL

Install Nginx:

sudo apt-get install nginx

Now we will move the certificate files to a secure location. Make a directory with root privileges:

sudo mkdir /etc/nginx/ssl

A quick ‘ls -l’ confirms we made the directory with root privileges.

ls -l /etc/nginx

[Screenshot: ls -l output for /etc/nginx]

Now move the certificate and key files to that folder:

sudo mv example_com.chained.crt /etc/nginx/ssl/
sudo mv example_com.key /etc/nginx/ssl/

Change the files’ ownership to root and restrict their permissions:

sudo chown root -R /etc/nginx/ssl
sudo chgrp root -R /etc/nginx/ssl
sudo chmod 640 /etc/nginx/ssl/*

Now we write the configuration file. Most guides will tell you to modify the default site configuration file. I think this is bad practice because it doesn’t scale. Most likely you will only have one configuration for this server, but I always expect to have multiple configurations on a web server; you may want a dev site configuration or some other setup.

Go to Nginx’s configuration files:

cd /etc/nginx/

Remove the symlink for the default site configuration in the enabled sites:

sudo rm sites-enabled/default

Create a new configuration file in sites-available for your website:

sudo emacs sites-available/example.com

Paste in this configuration, changing ‘example.com’ to your domain:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name example.com www.example.com;

    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name www.example.com;

    ssl_certificate /etc/nginx/ssl/example_com.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/example_com.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 60m;

    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;

    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name example.com;

    ssl_certificate /etc/nginx/ssl/example_com.chained.crt;
    ssl_certificate_key /etc/nginx/ssl/example_com.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 60m;

    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
To explain what this is doing, I’ll go from the top down, because that’s what Nginx does. At the top we define a server which listens for all IPv4 and IPv6 traffic on port 80. All those requests are answered with an HTTP 301 redirecting to https://example.com/[whatever]. Next, we define a server which listens for all IPv4 and IPv6 traffic on port 443 for the domain ‘www.example.com’; all of those requests are redirected with an HTTP 301 to https://example.com/[whatever]. Last, we define the server that will proxy our node application, which listens only for IPv4 and IPv6 traffic on port 443 going to ‘example.com’. You’ll notice that the configuration for both SSL servers is the same. This is the current secure configuration, which I verified with an online SSL test; you can read more about each directive on your own. For the location, we process all requests through a proxy. This proxy forwards the request to ‘http://localhost:3000’, which is where our app will be listening.

In the configuration, I specify a dhparam file. On my version of Ubuntu, the default file is at 1024 bits, which is not secure anymore. Use OpenSSL to generate a safer dhparam file:

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096
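You can confirm the bit length of any dhparam file with openssl. This sketch uses a quick throwaway 512-bit file so it runs in seconds; your real /etc/ssl/certs/dhparam.pem should report 2048 or more (and note the 4096-bit generation above can take a long time on a small droplet):

```shell
# generate a small throwaway dhparam file, then read back its bit length
openssl dhparam -out /tmp/dh.pem 512 2>/dev/null
openssl dhparam -in /tmp/dh.pem -text -noout | head -n 1   # shows "(512 bit)"
```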

Create a symlink for this configuration in sites-enabled:

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com

The idea of having sites-enabled and sites-available is to let you disable a configuration without losing the file itself. Nginx loads all sites in sites-enabled, so if you need to disable a configuration, just delete its symlink in sites-enabled and restart the service.

Restart the Nginx service to enable our configuration:

sudo service nginx restart

Your server will now be listening on ports 80 and 443 and redirecting requests as configured. If the restart shows [FAIL], check the logs for mistakes:

sudo cat /var/log/nginx/error.log

You can also watch the access file as you navigate your website with tail’s follow flag:

sudo tail -f /var/log/nginx/access.log

Test Your Nginx Configuration

Run the test app we made earlier:

cd # make sure in your home directory
node app.js

You should see the console output “Server running at http://127.0.0.1:3000/”. If you go to your website you should now see “Hello World”. This confirms your Nginx is set up correctly, so quit the node application with Ctrl-C and let’s set up your app for production.

Now we have Nginx set up and ready to serve our NodeJS app. Next we’ll set up PM2 to manage our node process. We will borrow some elements from another guide, but I mostly had to work out these next parts myself due to the lack of up-to-date guides on this subject.

Create The User to Run Your App

For security purposes we will create a new user to run the app. This ensures that if the app is compromised, the attacker will only have as much access as our app user. We won’t give this user any permissions other than running the app. Originally I had come up with a much more secure method than the one I’ll be presenting, but a couple of issues with PM2 prevented it. You may name the user whatever you wish; I recommend something that resembles your app, like the app name.

Create the user (without a password, no need):

sudo adduser appuser --disabled-password

Copy the authorized keys from your user into the app user so you can ssh in as the app user:

sudo mkdir -p /home/appuser/.ssh
sudo cp /home/example/.ssh/authorized_keys /home/appuser/.ssh/authorized_keys

Change the ownership and permissions over to your app user:

sudo chgrp appuser -R /home/appuser/.ssh
sudo chown appuser -R /home/appuser/.ssh
sudo chmod 700 /home/appuser/.ssh
sudo chmod 600 /home/appuser/.ssh/authorized_keys

You now have another user, just like your account but without the ability to sudo. Having this separate account also gives us the option of locking it down more than we could with our normal account. You should be able to ssh into this account from your local machine:

ssh appuser@your_server_ip
Install Git

Back in your account:

sudo apt-get install git

Done, that was nice and easy… except we need to do some more SSH key work to keep everything secure. This is required if you have a private repo, and pointless if it’s public. We will follow GitHub’s guide and take advantage of its deploy keys feature.

Switch into your app user’s account:

sudo su - appuser

Generate a new key. (Don’t enter a passphrase; one would prevent us from automating the deployment tasks, and this key grants read-only access anyway.)

ssh-keygen -t rsa -b 4096 -C ""

Cat out the key to copy it:

cat .ssh/

In your repo, click ‘Settings’, ‘Deploy Keys’, ‘Add deploy key’, and paste the key into GitHub.

[Screenshot: adding a deploy key on GitHub]

Now we need to add GitHub to our known_hosts and verify our key is set up:

ssh -T git@github.com

Now you can securely pull your repo, which will be necessary for the next steps. While we’re talking about security, have you set up your GitHub 2FA?

Log out of your app user account, which will bring you back to your own account:

exit
Install PM2

PM2 will be used as our process manager to keep our NodeJS application running. Check out its website for a quick start and more details. It’s similar to forever and supervisor, but made for production; you can use PM2 during development as well. Honestly, PM2 was quite difficult to set up the way I wanted. I had to read dozens of docs/tutorials/blogs to figure it all out. It’s a newer product, with more features being added all the time. While most tutorials only show how to install PM2 and get it running, I will show that plus how to use its modern features: deploy, startup, logging, remote monitoring, etc.

Install PM2 with npm:

sudo npm install pm2 -g

From here we will be performing the PM2 commands from our app user’s account. PM2 was not designed for a multi-user system; its developers are still working out the best way to handle multi-user setups. What this means is that each user who runs pm2 effectively has their own setup. We want our app user to manage pm2 entirely, so our app user needs to run all these commands. To do that, switch into your app user:

sudo su - appuser

If you want, go ahead and get an account on Keymetrics (the people who make PM2) so you can remotely monitor your app for free. Their paid platform is pretty intense, and their free plan does just what we need. If you do create an account, link it (they provide the exact string to copy and paste):

pm2 link <secret_key> <public_key>
Then install the monitoring module:

pm2 install pm2-server-monit

and setup PM2 to run on startup with your app’s user:

pm2 startup

This will generate a command for you to run. Exit from your app user back to your main account, then copy and paste that command. You need to exit because the generated command requires root privileges. It creates a startup script for your machine; on Ubuntu this script is installed in init.d, where all the startup scripts live. At boot it looks at the .pm2 configuration files in your app user’s home directory and starts the processes you have saved. Right now we don’t have any processes saved for PM2; we will get to that.

Test Your PM2 Setup

At this point you should be in the home directory of your account; if not, run ‘cd’. Change the owner of the test app you made earlier to the app user:

sudo chown appuser app.js
sudo chgrp appuser app.js

Move the test app to your app user’s home directory:

sudo mv app.js /home/appuser/

Switch into your app user’s account:

sudo su - appuser

Run your test app using pm2:

pm2 start app.js

This will start the app and display pm2’s status. Here’s mine:

[Screenshot: pm2 status output]

Now your website should display “Hello World”, just as before. If the app crashes, PM2 will restart it. To test that we did the startup script correctly, you can save the running process:

pm2 save

And reboot your machine:

exit # go back to your account
sudo reboot

Want to know when your server has finished rebooting? Ping your domain; when it stops timing out, it’s back online: ping example.com

When your system reboots you should see your website displaying “Hello World” again. You can also see that PM2 was started and that the process is running by checking the status:

sudo su - appuser
pm2 status

We no longer need our test app to run. Delete the app from PM2 and save:

pm2 delete all # pass a process id instead to delete a single app
pm2 save

Return to your account for the next steps:

exit
Setup PM2 Log Rotation

For logs, you can use the native logrotate or PM2’s pm2-logrotate module. I was using their module at first, but found it had a lot of issues at the time I was working on this. I’ll be using the native logrotate that Ubuntu uses to rotate its own logs.

Back on your account, use pm2 to create the logrotate configuration file:

sudo pm2 logrotate -u appuser

This will create the file /etc/logrotate.d/pm2-appuser which should look like this:

/home/appuser/.pm2/pm2.log /home/appuser/.pm2/logs/*.log {
    rotate 12
    create 0640 appuser appuser
}
The generated file won’t work for us as-is. Edit the file:

sudo emacs /etc/logrotate.d/pm2-appuser

And make it look like this:

/home/appuser/.pm2/pm2.log /home/appuser/.pm2/logs/*.log {
    su appuser appuser
    rotate 12
    create 0640 appuser appuser
}
The only changes I made were pointing the log paths at the app user’s home directory and performing the rotation as the app user.

Setup PM2 Deployment

We will use PM2’s deploy feature to manage deployment when we make changes in the future. On your local machine, open a new terminal and cd to the base of your app. The next command will generate a file which we need to commit.

Install pm2 on your local machine:

npm install pm2 -g

Create the ecosystem file:

pm2 ecosystem

Open that file and edit it according to your app. Mine looks like this:

{
  apps : [{
      name: "ExampleApp",
      script: "./bin/www",
      env: {
      },
      env_production : {
        NODE_ENV: "production"
      }
  }],

  deploy: {
    production: {
      user: "appuser",
      host: "example.com",
      ref: "origin/master",
      repo: "git@github.com:<you>/<repo>.git",
      path: "/home/appuser",
      "post-deploy" : "npm install && pm2 startOrRestart ecosystem.json --env production"
    }
  }
}
As you can see from this file (or not see… it took me a while to decipher each key), when deploying production it will use our app user to SSH into our host. Once there, it will first cd into path. Next, it will use our git SSH key to pull the latest changes from repo on the branch specified by ref. Once the pull is complete, it runs the post-deploy command on our server. I set up this command to run npm install so the packages are updated, then pm2 startOrRestart ecosystem.json --env production, which will either restart or start pm2 according to the apps configuration in our ecosystem.json file. Our server has this file because we committed it to our repository and it was pulled down during the deploy. When pm2 starts, it names the app so we can recognize it in pm2 status; the deploy script basically runs pm2 start script for us. The ecosystem file has one more special feature: it sets environment variables for the script according to the key env_[deploy]. In our case we deploy production, so it loads env_production into the environment. No matter how you deploy your app, any key in env will also be loaded into the environment.

After you write your ecosystem file, pm2 needs to setup the server by creating a few folders. On your local machine run the setup command:

pm2 deploy production setup

Once that is done, run the deploy command to perform the actions I described above:

pm2 deploy production

Anytime you make and push changes you can use that command to update your production server. I had a couple issues when first running npm install which caused pm2 to fail. For me, I just ran the deploy command again and it worked.

Another thing you should consider for your production machine is the npm dependencies. In some rare cases the versions you have on your local development machine might not be exactly the same as the ones that will be pulled onto your production server. NPM has a feature for this called shrinkwrap. It generates a shrinkwrap file, which you commit. When deploying, the npm install command reads the shrinkwrap file and ensures you install exactly the versions you developed against. Personally, I won’t be using shrinkwrap, but look into it and judge for yourself.

Finalize the PM2 Startup Configuration

Once we have the site ready we need to save it in PM2. Back on the server in your account, switch to your app user:

sudo su - appuser

Make sure your app appears in the status:

pm2 status

You should see your app in the list; it doesn’t matter if it’s running or errored out. If it’s not in the list, try deploying again. Then save the process list:

pm2 save

This will save whatever you currently have running in pm2, ensuring that if we restart our server, our startup script will start our app. To test this, exit from your app user:

exit
And reboot your server:

sudo reboot

When it boots back up your node app will already be running.

Considerations if Staying at 512MB

I am adding this section since I discovered that my small droplet couldn’t handle intensive loads. At first I expected this since it’s the smallest droplet, but since part of my goal was “for as little money as possible” I looked into how to make it work.

I installed siege on my local machine using MacPorts and had it attack my server. I have a test page on my server which adds and removes hundreds of records in my MongoDB, and siege was directed to call the URL that triggers it. The result is a flood of mongo requests, nginx requests, node requests, the works. At first, the server quickly froze and the only option was to restart it from Digital Ocean’s website. After a lot of searching and debugging, I’m certain it was a memory issue: the memory filled up, and new processes couldn’t be started. Nginx is a pro and was fine, but PM2’s pm2-server-monit module started crashing and respawning until it couldn’t, and a few times mongodb crashed and was a real hassle to get started again.

For mongo's memory issue, I noticed that as of 3.2 the default storage engine is WiredTiger. It has a lot of features, and to me, each one looks like a bad idea for a small server. First, the minimum memory it uses for its cache is 1G, which is twice our RAM. Next, it uses compression. This is nice if our database gets large, but it requires more CPU, and we only have one core to process everything. On my local machine, I've been using the original engine, MMAPv1. I switched the engine to that and my memory usage under siege went down drastically.

If you want to change your MongoDB engine, follow these steps:

# stop mongod
sudo service mongod stop

# edit the conf file:
# under 'storage', add the line 'engine: mmapv1' (not commented out)
sudo emacs /etc/mongod.conf

# delete any databases that were created with WiredTiger
# (I don't care since I'm still setting things up)
sudo rm -r /var/lib/mongodb/*

# start mongod
sudo service mongod start

# you may need to restart/deploy pm2 to reconnect to mongo
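For reference, after the edit the storage section of /etc/mongod.conf should look something like this (dbPath shown as the Ubuntu package default; keep whatever yours already has):

```
storage:
  dbPath: /var/lib/mongodb
  engine: mmapv1
```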

For the PM2 memory issue, my searching led me to a Stack Overflow answer recommending a swap file.

A swap file is a physical location on your hard drive which the OS can use as (slower) RAM. Consider it backup memory, and since we're on a 512MB droplet it'll come in handy. From the sources I've read, some recommend 2x your current RAM, some do not. In my stress tests, I never exceeded 700MB of swap. I'll go with 2G because it's the minimum most people recommend.

If you want to add a swap file, follow these steps: (copied from SO, which copied from a DO tutorial)

# Create a 2G swapfile
sudo fallocate -l 2G /swapfile

# Secure the swapfile by restricting access to root
sudo chmod 600 /swapfile

# Mark the file as a swap space
sudo mkswap /swapfile
sudo swapon /swapfile
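One caveat with the steps above (covered in the Digital Ocean tutorial they came from): swapon only lasts until the next reboot. To make the swap file permanent, the usual approach is an /etc/fstab entry like:

```
/swapfile none swap sw 0 0
```

This is a config fragment for /etc/fstab, not a command to run.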

Last, just to make sure we don't run into problems with node, let's restrict the amount of memory our node process can use. I found two methods, one from node and one from PM2. The node method is the flag --max_old_space_size, which aborts the process when it exceeds the memory limit, and PM2 would then restart it. However, this did not work for me on Ubuntu. The other method, from PM2, is the configuration option max_memory_restart. This does work, but because of how it's implemented the worker only checks the memory at intervals; the default is every 30 seconds, so you may exceed your limit within those 30 seconds. You can lower the worker interval, but that would increase CPU usage. I'm going to assume that node will rarely exceed its memory limit, and I'll rely on my swap for worst-case scenarios, so I won't decrease the interval.

If you want PM2 to restart your app when it uses too much memory, follow these steps:

# Open your ecosystem.json file on your local machine
# add 'max_memory_restart: "256M"' within your app declaration
emacs ecosystem.json

My app configuration now looks like this:

    name: "ExampleApp",
    script: "./bin/www",
    env: {
    env_production : {
        NODE_ENV: "production"
    max_memory_restart: "256M"

Checkup on Your Security

There are a lot of websites that will check your website for common issues. I have found a few and used them to test my setup. Any issues I found, I have already gone back and added into the guide. If you find issues with your setup, comment and I'll update the guide.

  • Verifies your website was set up for IPv6
  • Another IPv6 validator with different information
  • A fast SSL check
  • An extensive SSL server test
  • Checks your DNS servers
  • Checks your MX records; make sure to click 'Find Problems'. This one finds a few problems that are out of my control, like the mail server and some Digital Ocean DNS rules.

Make sure to check both your bare domain and all its subdomains for tests that don't do that automatically.

Back to Being a Developer

This concludes the sysadmin stuff. I did destroy my droplet and follow my own instructions to verify they're all correct. There were a lot of changes over the week I spent writing this. I hope it helps someone else securely set up a server without spending a week searching around.

If your app is like mine, then it's probably not running for some odd reason. I develop on my MacBook Pro and my production machine is a completely fresh copy of Ubuntu, so I expected a couple of hiccups. You may just have a filename-case issue, or you used a file path local to your machine that you forgot to fix before committing.

To help with fixing any potential problems, the log files are located in /home/appuser/.pm2/logs. The log filenames are generally formatted as the app name, the file type, then the app id it's running as. For me that's ExampleApp-error-2.log and ExampleApp-out-2.log.

Next, you should consider backup options. I may just go with Digital Ocean’s snapshot backups or eventually roll my own version for free.

Core Graphics – Drawing an angle dimension from 3 arbitrary points.

For a recent project, I needed to have the ability for a user to draw an angle dimension on top of a picture. They can drag the 3 points and the dimension lines should update in real time. I needed to draw two lines and an arc between the two lines. Drawing the two lines is simple, but drawing the arc took a little bit of thinking.

At first I was calculating the inscribed angle using the law of cosines, then trying to determine the starting angle; the end angle would be the starting angle plus the inscribed angle. Big mistake. This is complicated and has too many edge cases for each quadrant a starting or ending point can be in. Then I tried the handy atan2 function, which previously proved very useful in game design, and it worked perfectly. In this case, imagine a line of 3 points, where each point can be moved and the angle between the two segments is shown.

Below is my drawing code to achieve this:

- (void)drawRect:(CGRect)rect
{
    [super drawRect:rect];

    // The three draggable points; these properties are placeholders for
    // however you store the points in your view
    CGFloat lx = self.leftPoint.x;  // first point along edge
    CGFloat ly = self.leftPoint.y;
    CGFloat mx = self.middlePoint.x;// middle pt
    CGFloat my = self.middlePoint.y;
    CGFloat rx = self.rightPoint.x; // other point along edge
    CGFloat ry = self.rightPoint.y;

    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(context, [self.lineColor CGColor]);

    // thickness
    CGContextSetLineWidth(context, 2.0f);

    // starting pt
    CGContextMoveToPoint(context, lx, ly);
    // middle pt
    CGContextAddLineToPoint(context, mx, my);
    // ending pt
    CGContextAddLineToPoint(context, rx, ry);
    // draw path, make sure to call this here to reset the stroke path points
    CGContextStrokePath(context);

    // angle arc
    // distances
    CGFloat lmDist = sqrt(pow(lx - mx, 2) + pow(ly - my, 2));
    CGFloat mrDist = sqrt(pow(rx - mx, 2) + pow(ry - my, 2));
    // angles
    CGFloat startAngle = atan2(ry - my, rx - mx);// right to middle
    CGFloat endAngle = atan2(ly - my, lx - mx);// left to middle
    CGFloat radius = 0.20 * fmin(lmDist, mrDist);// 20% of min distance along lines
    int clockwise = YES;
    CGContextAddArc(context, mx, my, radius, startAngle, endAngle, clockwise);

    // draw arc path
    CGContextStrokePath(context);
}
Here’s a screenshot from the prototype app:

(Ignore the background, it’s just something nice to look at while working)

Override UIWebView’s Pinch Gesture (or any gesture)

In a recent application I needed to use a UIPinchGestureRecognizer on a UIWebView. However, a UIWebView uses a UIScrollView which already has a pinch gesture attached to it. I could not find a solution, so I came up with this:

1. Disable the pinch gesture recognizer on the web view’s scroll view. (below)
2. Enable your own gesture recognizer

for (UIGestureRecognizer *gesture in self.webView.scrollView.gestureRecognizers) {
    if([gesture isKindOfClass:[UIPinchGestureRecognizer class]]){
        [gesture setEnabled:NO];
        break;// don't waste time once found
    }
}

When you want to re-enable the web views default functionality:

1. Disable your gesture recognizer
2. Re-enable the pinch gesture recognizer on the web view’s scroll view.

for (UIGestureRecognizer *gesture in self.webView.scrollView.gestureRecognizers) {
    if([gesture isKindOfClass:[UIPinchGestureRecognizer class]]){
        [gesture setEnabled:YES];
    }
}


You could also remove and store the target on the existing scroll view pinch gesture, and add your own. Then replace it again when done. There may be many more ways to do this as well.


This involves working within the internal structure of UIWebView, which is not guaranteed to be the same across versions of iOS. In this case, the scroll view is a public property of the web view, and the gestures happen to be attached to that subview, but Apple doesn't guarantee this (as far as I know).


Reversi is probably my favorite quick play game, which I play almost daily on my iPhone.

Today I decided to write Reversi for the browser in JavaScript. My main purpose is to then develop AI for the game to experiment with. I have the code on my GitHub here.

Play it in the iframe below, or here on a separate page.
Click here to reset the game.

Apple LLVM 6.0 Warning: profile data may be out of date

The Problem:
After building for archive I got 8 warnings saying my ‘profile data may be out of date’.

Screen Shot 2014-12-16 at 1.26.15 PM

I have no clue what this meant, so I googled the problem. I only ended up with some search results such as these:

That last one was good since the commit message actually described the problem:
a warning that triggers when profile data doesn't match the source being compiled with -fprofile-instr-use=. Now I know this is about PGO, which relates to Xcode's Generate Optimization Profile. I had deleted the build folder earlier, and this caused the warnings.

Note the importance of always having descriptive commit messages.

How To Fix:
Here, Apple described how to use the Optimization Profile and how to enable it. I don’t want to use the profile I generated and corrupted, so I’m going to properly delete it and disable it. You can always regenerate the profile to fix this problem.

  1. Delete the group folder ‘OptimizationProfiles’ in your project
  2. Set the ‘Use Optimization Profile’ for the release to ‘No’
  3. Build again


Learning how to c**** software

I don’t know any legality on this topic, and from what I’ve read so far within the hacking community, writing this sentence is way beyond what I should be saying already.

I’m currently learning to decompile and decided to experiment on a piece of software I use regularly, which I did buy. The only thing I’ll say is I found a string in Base64, so I decoded it. The result was a binary plist. I noticed a plain ASCII string which looked like the format of the serial key (which I determined from another subroutine), and when used resulted in this message:

Screen Shot 2014-11-29 at 1.43.41 AM

When I first found the key I thought no way they were this dumb. They weren’t. Don’t pirate software.

Also c**** is ‘crack’.

PHP C Code Coverage

I want to increase the code coverage for the PHP source, but I didn't know how this works. Specifically, what interested me was unreached code in files with around 99% coverage. My goal was to go through the C source code and figure out if I could write a PHP test case that would cover the missed lines. Most of the uncovered lines are specific edge cases, like when zend args aren't matched, which rarely happens because of how many checks are done, but I wanted to try.

After many days, I have decided to postpone this task for another time. I was writing down what I was doing so I can post about it, but I had so many problems and my list kept getting longer and eventually unorganized.

I will say I didn't give up easily: I tried the scripts on a couple of servers, manually compiled libraries, read through many man pages, rewrote PHP's code coverage scripts, and on and on. I was able to get code coverage data generated, but it was never the same as PHP's own code coverage, so how would I know if I was improving it?

Issues I encountered which deterred me:

  • Mavericks update
    • Changed my apache conf and php ini files
  • Quickly learning apache’s new syntax
  • lcov not available on systems
  • lcov compiled with errors
  • lcov (after fixing errors) installs incorrectly from ./make install
  • gcov from Apple not having the -v flag, but lcov requiring it
    • Fixed by renaming gcov to gcov.orig, and overwriting /usr/bin/gcov with a bash script to correctly call gcov.orig
    • I believe I also tried modifying lcov's source
  • time, soooo much time was required
    • PHP Build: 20mins each time
    • PHP Tests:  Up to a few hours per build
  • school, the ultimate decider to put this project aside

Never Have I Ever App

View in the store:

November 29:
After a couple weeks being released I have 11 downloads. Checking the database, I see the game was played 557 times and 1 person submitted a suggestion: 'Kissed a cow'.

November 13:
Status changed from In Review at 8:04, then to Ready for Sale at 17:46, which means my app is now available on the App Store! Also, I just realized I named the app 'Michael Ozeryansky', and I have to submit an update to change the name.

Nov 11:
Originally I sent the app for review on October 15 at 1:02am, but it was rejected by 8:52am that same day. The report they sent back said a button in the app did not respond, which fell under the violation for having bugs. I thought it was a problem with Magical Record's block save method, so I updated that and submitted again at 11:17pm. The next day at 2:45pm, the app was rejected for the exact same reason. I tested the app by simulating a fresh installation, but I could not reproduce their bug. I replied to the rejection notice asking for more information, and a seemingly automated reply said the bug was happening on the iPad, but I could not reproduce a bug there either. When the app starts, a UIView is created to cover everything with a half-transparent black overlay, and the tutorial is shown on top of that. So I changed the app so the tutorial has its own UIViewController and view, which is presented after viewDidAppear. I resubmitted the app Nov 6 at 7:27pm. I will update this post as things move along.

Sep 30:
I plan on releasing a Never Have I ever app by Oct 30th.

GroupMe Bot

I wanted to interface with GroupMe, and I was hoping they had an API. Not only is there an API, but they have an entire system for making bots.

In one group I’m a part of, the members kept changing their names and it was hard to keep track who really was who.

This bot has no memory, so when it sees someone change their name it downloads every message ever posted and then works out who's who. The group I tested this on had 2500+ messages, which takes about 5-10 seconds to process. For larger groups, and as smaller groups grow, it would be better to cache messages instead of re-fetching all past messages.


<?php

$token = 'YOUR_TOKEN';
$bot_id = 'YOUR_BOT_ID';
$group_id = 'YOUR_GROUP_ID';

$data = json_decode(file_get_contents('php://input'));

// only react to system messages announcing a name change
if($data == null || $data->system != true){
	exit;
}
if(strpos($data->text, 'changed name') === false){
	exit;
}
preg_match('/(.+?) changed name to (.+)/', $data->text, $matches);
$new_name_change = $matches[2];

// download all the messages
$url_base = "https://api.groupme.com/v3/groups/".$group_id."/messages?token=".$token."&limit=100&";
$before_id = '';
$messages = array();

do {
	if($before_id != ''){
		$url = $url_base.'before_id='.$before_id;
	} else {
		$url = $url_base;
	}
	$response = json_decode(file_get_contents($url));
	if($response == null){
		// no response, no results
		break;
	}
	$response = $response->response;
	$count = $response->count;
	$messagesNew = $response->messages;
	$messages = array_merge($messages, $messagesNew);
	$before_id = end($messagesNew)->id;
} while($count > 0);

// keep only the system messages for name changes
$system = array();
$system_all = array();
foreach($messages as $message){
	if($message->system == true){
		$system_all[$message->created_at] = $message->text;
		if(strpos($message->text, 'changed name') !== false){
			$system[$message->created_at] = $message->text;
		}
	}
}

// build maps, oldest change first
$system = array_reverse($system);

// overrides
$overrides = array(
	// 'correct real name' => 'name when joined'
);

// map of real name => current name
$map = $overrides;
foreach($system as $change){
	preg_match('/(.+?) changed name to (.+)/', $change, $matches);
	$orig = $matches[1];
	$new = $matches[2];

	if(in_array($orig, array_values($map))){
		// known person changed their name again; update their current name
		$keys = array_keys($map, $orig);
		$key = $keys[0];
		$map[$key] = $new;
		if($key == $new){
			// they changed back to their original name
			unset($map[$key]);
		}
	} else {
		$map[$orig] = $new;
	}
}

$keys = array_keys($map, $new_name_change);
if(count($keys) == 0){
	// user changed their name back to their real name
	exit;
}
$name = $keys[0];
// build url
$message = urlencode($new_name_change.'\'s real name is '.$name);
$url = 'https://api.groupme.com/v3/bots/post?bot_id='.$bot_id.'&text='.$message;//.'&token='.$token;
// send message
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array());
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
curl_close($ch);