By Clay Smith
“But it works on my machine!”
That is an excuse heard all too often in conversations between developers and operations teams. Even with sophisticated tooling, virtually unlimited computing capacity in the cloud, and advanced continuous integration workflows, the differences between developing applications locally and running them in production remain a persistent source of bugs and other problems. Dev and ops teams often turn to virtual machines, pre-built images, or configuration management systems like Puppet and Chef to achieve better parity between Linux-based production environments and Mac or Windows development environments.
All those approaches can help, but the problems can still persist. Fortunately, the new Docker for Mac beta offers an opportunity to create a more resilient local environment that better mirrors production. macOS and Windows have traditionally not supported the Linux-based container technology that powers Docker, but the latest release of Docker for Mac and Windows makes it easier to get started creating and running containers in those environments with less overhead. Let’s put a simple Node.js application in a Docker container as an example.

Less fragile developer environments with Docker containers
Developer workstations are fragile. Operating system upgrades, botched package installs, conflicting dependencies, and the need to use multiple programming language runtimes remain a persistent source of frustration for developers. Many language-specific tools have been built to manage this complexity, including virtualenv for Python, rbenv for Ruby, and jenv for Java. Docker, however, presents an elegant new alternative.
Containers, like virtual machines, offer a way to isolate the complex dependencies applications require from the host operating system and other applications. Unlike VMs, containers are less resource intensive and usually take only seconds to start.
Docker became a developer darling by combining Linux container technology with a specialized file system and command-line interface that also runs on Mac and Windows with the help of a Linux virtual machine. The additional requirements needed to run Docker on non-Linux environments have been simplified in the latest beta release of Docker’s software, making it easier to work with.
Once installed, Docker images, often available for popular open-source projects from the Docker Hub, are used to instantiate running containers that execute application code. (Understanding the difference between a container and image is particularly important—more information is available on the official Docker tutorial.)
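The distinction shows up directly in the standard Docker CLI: images are read-only templates stored locally or in a registry, while containers are instances created from those images. For example:

```shell
docker images      # list images downloaded to this machine
docker ps          # list currently running containers
docker ps -a       # also include stopped containers
```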
[Image: the difference between Docker images and containers]
The new Docker for Mac beta software has an easy-to-use installer that dropped certain dependencies—VirtualBox, most notably—in favor of a lightweight Linux virtual machine using a macOS-native virtualization solution.
[Image: the new Docker beta has a toolbar helper for Mac OS X]
After installing the new version of the Docker client for Mac, you can immediately start pulling images and creating containers from them, either on the command line or with the Kitematic GUI (a separate download that works with the Mac beta).
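As a first test, you can run a one-line Node.js script inside the official node image from Docker Hub. This is a sketch of one way to produce the output discussed next; the `--rm` flag is optional and simply cleans up the container after the process exits:

```shell
# Pull (if needed) the official Node.js 6.2 image and run a one-liner inside it.
# process.platform reports the kernel the process sees, which is Linux in the container.
docker run --rm node:6.2 node -e "console.log('Hi from Docker running on ' + process.platform)"
```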
The output of this command is “Hi from Docker running on linux” because the Node.js 6.2 image is based on Debian Linux and, from the perspective of the Node.js process, it’s running on Linux. All the system dependencies required to run Node.js 6.2 are isolated inside the container image.
While running one-line scripts is useful in limited cases, most applications have many external dependencies. Using commands specified in a Dockerfile, it’s possible to create a Docker image for a typical Node.js application that installs modules with the node package manager (npm). This Dockerfile example also creates a special non-root user to run the app since, by default, Docker containers execute commands as the root user:
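A minimal Dockerfile along these lines, assuming the node:6.2 base image and /home/app as the app directory (chosen here to match the volume mounts in the run command later), might look like this:

```dockerfile
# Start from the official Node.js 6.2 image (Debian-based)
FROM node:6.2

# Create a non-root user and group to run the app
RUN useradd --user-group --create-home --shell /bin/false app

ENV HOME=/home/app

# Install npm dependencies at build time, owned by the non-root user
COPY package.json $HOME/
RUN chown -R app:app $HOME
USER app
WORKDIR $HOME
RUN npm install

# Start the app
CMD ["node", "index.js"]
```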
Using this Dockerfile you can build an image for a Node.js application that starts from an index.js file—in this example we’ll create a simple HTTP server that outputs ASCII cows using an npm module. Following standard conventions, we namespace the image with a username or organization name, the name of the image, and the version of the application, and run the docker build command in the root of the Node.js project directory:
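With the Dockerfile in place, the build step looks like this (csmith/cow-service:v1.0.0 is the namespaced image name and version tag used throughout this example):

```shell
# Build the image from the Dockerfile in the current directory (".")
docker build -t csmith/cow-service:v1.0.0 .
```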
Once the image is successfully built, we can run the container in the root directory of the project. Several command-line options are needed that tell Docker to run the image as a daemon, map port 3000 to the host operating system’s port 3000, mount directories that exist on the host (the actual application code) inside the container, and give it a friendly name, “cow-service”:

```shell
$ docker run -ti -d --name cow-service -p 3000:3000 -v $(pwd):/home/app -v /home/app/node_modules csmith/cow-service:v1.0.0
```
If the container is successfully running (a quick docker ps can verify this), an HTTP request to localhost:3000 will output a cow.
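For example (the exact cow art depends on the npm module and options the app uses):

```shell
# Confirm the container is up, then hit the service
docker ps --filter "name=cow-service"
curl localhost:3000
```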
Using Docker for Mac and the official Node.js image, a simple Node.js web service is now running in a container. If changes are made to the application code, restarting the container by name, docker restart cow-service, will pick them up. According to a recent post by Dave Kerr, the new Docker for Mac software will now also correctly pick up file changes if you’re using code-watching tools like nodemon. However, if npm dependencies change, you will need to rebuild the image with the docker build command, given the structure of this Dockerfile.
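In practice the two cases look like this (the v1.0.1 tag is just an illustration of bumping the version after a dependency change):

```shell
# Code-only changes: restart the existing container
docker restart cow-service

# Dependency changes (package.json): rebuild the image and run a new container from it
docker build -t csmith/cow-service:v1.0.1 .
```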
With Kitematic, restarting and viewing the logs and volumes can be managed in a graphical interface:
The path to Docker containers in production
At this point, it’s reasonable to wonder if the additional complexity of installing Docker, defining a Dockerfile, and running a series of commands to build an image and run a container is worth it for such a simple application. The key is that all of the dependencies needed to run Node.js—the correct version of Node.js, npm dependencies, and npm itself—are completely isolated from the host operating system and packaged into a read-only image.
That means after going through this process, the app is wrapped in a container image that is a static, versioned artifact. It can be shared with other team members, used in continuous integration environments to run tests, and eventually deployed to a production environment. Notably, running the Node.js application inside a container didn’t require any code changes to the app or to macOS itself—the only file that was created in the root of the application directory was a Dockerfile.
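Sharing that artifact is a matter of pushing it to a registry. Assuming a Docker Hub account with push access to the csmith namespace used in this example:

```shell
# Publish the versioned image (requires docker login first)
docker push csmith/cow-service:v1.0.0

# Teammates and CI systems can then run the exact same artifact
docker pull csmith/cow-service:v1.0.0
```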
Docker is increasingly useful for a variety of developer workflows, even when Docker isn’t running in production. As you use Docker to create less fragile development and production environments that support faster changes and more frequent deployments, you’ll want to check out New Relic APM, which is built to help software teams understand how changes affect app performance and reliability.
You can learn more about New Relic’s own multi-year experience running and monitoring Docker applications in production in From Zero to Docker: Migrating to the Whale, How New Relic Used Docker to Solve Thorny Deployment Issues, and How Containers Helped New Relic to Scale [Webinar]. And you can find out more about New Relic’s Docker monitoring capabilities here.

Additional Resources
- Docker for Mac beta
- Node.js + Docker Best Practices
- Node.js Guide to Using Docker
- Lessons Learned from Building a Node App With Docker
- Docker on the New Relic blog
Senior Technical Marketing Engineer Adam Larson contributed to this post with invaluable suggestions and technical feedback.
About the Author
Clay Smith is a Developer Advocate at New Relic in San Francisco. He previously has worked at early stage software companies as a senior software engineer, including founding the mobile engineering team at PagerDuty and shipping one of the first iOS apps written in Swift. View posts by Clay Smith.