I still see a lot of people asking 'what's the best MongoDB client for Mac OS X' (besides the mongo
shell console), so I think it would be only fair to share my experience.
I've been using MongoDB a lot and for some reason I haven't been too comfortable using the shell console. I mean, you need an integrated code editor to fiddle with those somewhat verbose JSON-formatted queries...
So I was constantly looking for alternatives; 18 months ago there was none I could find and learn to like, really, but now I really favour Robomongo. It gives you 'the full power of MongoDB shell', but at the same time you can easily save, load and edit your queries, copy/paste, and view your results as JSON/trees/tables... and yes, you do get decent autocompletion :-).
It's also cross-platform, free and open-source (GitHub repo here).
Fotonauts' fork of MongoHub is another interesting alternative to keep an eye on; it has a more 'native' OS X feel but IMO it does lack a better query editor...
2015-05-29
This tutorial introduces how to deploy a web app, Redis, Postgres and Nginx with Docker on the same server. In this tutorial, the web app is a Node.js (Express) app. We use Redis as the cache store, Postgres as the database, and Nginx as the reverse proxy server. You can get all the source code at https://github.com/vinceyuan/DockerizingWebAppTutorial.
Why Docker
Docker is a container-based virtualization technology. The key feature I like most is that it provides resource isolation. The traditional way of building a (low-traffic) website is to install the web app, cache, database and Nginx directly on one server. It's then not easy to change settings or components, because they all share the same environment and changing one may impact the others. With Docker, we can put each service in its own container. This keeps the host server very clean, and we can easily create/delete/change/re-create containers.
Install Docker on the host
Docker runs on 64-bit Linux only. If your Linux is 32-bit, you have to re-install a 64-bit version. My original OS was 32-bit CentOS; now I am using 64-bit Debian 8. The main reason I chose Debian is that its distribution size is small and Docker recommends it in its Best Practices (it's ridiculous that almost all examples at docker.com use Ubuntu). Actually, the host's OS can be different from the containers' OS. I chose Debian instead of 64-bit CentOS because I didn't want to spend any time on the differences; for example, the package management tools differ: Debian uses apt, CentOS uses yum.
Currently, Docker's official installation instructions do not work on Debian 8. You need to run the following commands as root (theuser is a user account on the host OS).
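The original commands did not survive in this copy. A plausible reconstruction, using Docker's convenience install script (which worked on Debian 8 at the time), would be something like:

```shell
# Run as root. Sketch only - the post's exact commands were not preserved.
apt-get update
apt-get install -y curl ca-certificates
curl -sSL https://get.docker.com/ | sh
# Let theuser run docker without sudo:
usermod -aG docker theuser
```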
Prepare
The folder /DockerizingWebAppTutorial contains all we need. mynodeapp is a very simple Node.js (Express) app; it just reads a number from Redis and gets a query result from Postgres. There are several Dockerfiles in the dockerfiles folder. We will use them to build images.
Create folders:
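The exact commands are not preserved here; based on the volume mounts used later in this tutorial, something like the following should work (the folder names are inferred from the rest of the text):

```shell
# Data and log folders on the host that will be mounted into containers.
mkdir -p /mydata/redis_data
mkdir -p /mydata/postgres_data
mkdir -p /mydata/log_mynodeapp
```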
Let's run the first container.
Redis
We use the official Redis image. Run it directly with this command:
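The command itself is missing from this copy; reconstructed from the flags explained below, it would be roughly:

```shell
docker run -d --name myredis --restart=always \
  -v /mydata/redis_data:/data \
  redis
```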
-v /mydata/redis_data:/data means we mount the host folder /mydata/redis_data as the volume /data in the container. Redis will save dump.rdb at /mydata/redis_data on the host. If we don't mount a volume, Redis saves dump.rdb inside the container, and when the container is deleted, dump.rdb is deleted too. So we should always mount a volume for important data, e.g. database files and logs.
--name myredis means we name this container myredis.
--restart=always means the container will restart after it quits unexpectedly. It also makes the container start automatically after the server reboots.
That command outputs:
It downloads the redis:latest image from Docker Hub. Let's check if the myredis container is running.
We can see myredis is running.
We need to run redis-cli in this container to set a value in Redis.
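For example (the key name mynumber is just an illustration, not from the original):

```shell
docker exec -it myredis redis-cli
# inside redis-cli:
#   SET mynumber 12345
#   GET mynumber
```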
Postgres
We use the official Postgres image too. Just run it directly.
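Reconstructed from the flags explained below, the command would be roughly:

```shell
docker run -d --name mypostgres --restart=always \
  -e POSTGRES_PASSWORD=postgres \
  -v /mydata/postgres_data:/var/lib/postgresql/data \
  postgres
```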
-e POSTGRES_PASSWORD=postgres means we set the environment variable POSTGRES_PASSWORD to postgres.
-v /mydata/postgres_data:/var/lib/postgresql/data means we mount /mydata/postgres_data as a volume. This is very important: it keeps the database files safe on the host.
Create mynodeappdb:
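One way to do this (a sketch; the original command is not preserved) is to use createdb inside the running container:

```shell
docker exec -it mypostgres createdb -U postgres mynodeappdb
```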
We can see mypostgres and myredis are running.
Redis client and Postgres client
The Dockerfile for redis client:
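The Dockerfile itself is missing from this copy; based on the description below, a sketch could look like:

```dockerfile
# Sketch only - reconstructed from the surrounding description.
FROM debian:7
# Debian's redis package installs server and client together;
# we only need redis-cli, so stop the server after installing.
RUN apt-get update && apt-get install -y redis-server && \
    service redis-server stop
```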
It's based on debian:7. It actually installs both the redis server and client, but we only need the client, so it stops redis-server.
Build it:
The Dockerfile for Postgres client:
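The file is not preserved here; following the description below (a 9.4 client on Debian 7 "wheezy" via the official PostgreSQL apt repository), a sketch:

```dockerfile
# Sketch only - reconstructed from the surrounding description.
FROM myredisclient
# Debian 7's default postgresql-client is too old; use the official
# PostgreSQL (PGDG) apt repository to get a 9.4 client.
RUN apt-get update && apt-get install -y wget ca-certificates && \
    echo "deb http://apt.postgresql.org/pub/repos/apt/ wheezy-pgdg main" \
      > /etc/apt/sources.list.d/pgdg.list && \
    wget -qO - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - && \
    apt-get update && apt-get install -y postgresql-client-9.4
```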
It's based on myredisclient, because our web app needs to access both Redis and Postgres. The annoying thing is that the postgresql-client in Debian's default apt repository is a very old version (pg_dump will not work, because its version does not match the server's version). This Dockerfile installs the latest version (currently 9.4).
Build it
We can see there are 5 images in the host.
Node.js
Let's build a Node.js image. In the Dockerfile for the mynodejs image, we install Node.js, express and forever, and then set NODE_ENV to production. In this example, I am not using the latest Node.js version.
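A sketch of such a Dockerfile (the pinned Node.js version and the install method are my assumptions; the original file is not preserved here):

```dockerfile
# Sketch only - version and install method are assumptions.
FROM myredispgclient
RUN apt-get update && apt-get install -y curl && \
    curl -sL https://deb.nodesource.com/setup_0.12 | bash - && \
    apt-get install -y nodejs && \
    npm install -g express forever
ENV NODE_ENV production
```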
Build it.
mynodeapp
Then we build an image for mynodeapp. In its Dockerfile, we run npm install and use forever to run the Node.js app. We don't use forever start, because that would run the app as a daemon and the container would quit immediately.
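A sketch (the paths and the bin/www entry point are assumptions based on a typical Express app, not the original file):

```dockerfile
# Sketch only - paths and entry point are assumptions.
FROM mynodejs
COPY . /mynodeapp
WORKDIR /mynodeapp
RUN npm install
EXPOSE 3000
# Run in the foreground; `forever start` would daemonize and
# the container would exit immediately.
CMD ["forever", "bin/www"]
```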
Build it
Actually we could merge these 4 Dockerfiles into one and create a single image. I build 4 images so they can be re-used. For example, if we want to build an image for another Node.js app, we can write a Dockerfile based on the mynodejs image. If we want to replace Node.js with Go, we can write a Dockerfile based on myredispgclient.
The core code of mynodeapp:
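The code block did not survive in this copy; a minimal sketch of what such a route might look like (module APIs as of 2015; the key name, database name and query are illustrative, not from the original):

```javascript
// Sketch only - not the original source. Reads a number from Redis
// and a query result from Postgres, then renders both.
var express = require('express');
var redis = require('redis');
var pg = require('pg');

var app = express();
var redisClient = redis.createClient(6379, '127.0.0.1');
var pgConString = 'postgres://postgres:postgres@127.0.0.1/mynodeappdb';

app.get('/', function (req, res) {
  redisClient.get('mynumber', function (err, num) {
    pg.connect(pgConString, function (err, client, done) {
      client.query('SELECT NOW() AS now', function (err, result) {
        done();
        res.send('number: ' + num + ', time: ' + result.rows[0].now);
      });
    });
  });
});

app.listen(3000);
```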
There is a problem. We are using localhost or 127.0.0.1 as the host address for Redis and Postgres. That works only when they are installed on the same machine as the app, but now they are in different containers. Even if we use --link, we still cannot access them via localhost or 127.0.0.1. We can use the following code to get the correct host and port.
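A small helper along those lines (the function name and the PG_* prefix are my own; only the REDIS_PORT_6379_* names come from the text):

```javascript
// Resolve a linked container's address from the environment variables
// Docker injects for --link. Falls back to localhost defaults.
function linkedService(prefix, fallbackHost, fallbackPort) {
  return {
    host: process.env[prefix + '_TCP_ADDR'] || fallbackHost,
    port: parseInt(process.env[prefix + '_TCP_PORT'] || fallbackPort, 10)
  };
}

// e.g. after `docker run --link myredis:redis ...`
var redisConf = linkedService('REDIS_PORT_6379', '127.0.0.1', 6379);
```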
REDIS_PORT_6379_TCP_ADDR is created by Docker when you run a container with --link myredis:redis. You can get the Postgres user account, password and port from environment variables too.
Run a container based on mynodeapp image. We also name the container mynodeapp. You can rename it whatever you like.
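Reconstructed from the flags explained below:

```shell
docker run -d --name mynodeapp --restart=always \
  --link myredis:redis --link mypostgres:postgres \
  -v /mydata/log_mynodeapp:/log \
  -p 3000:3000 \
  mynodeapp
```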
By default, each container is isolated. --link allows one container to access another. --link mypostgres:postgres means we can access the mypostgres container via the alias postgres, just like localhost for 127.0.0.1.
-v /mydata/log_mynodeapp:/log mounts a volume; we want to keep the logs on the host.
-p 3000:3000 maps the host's port 3000 to the container's port 3000. It is not mandatory, but with it we can run curl localhost:3000 on the host to check whether the mynodeapp container runs correctly.
The web app runs correctly in the container.
Nginx
Now we install Nginx. In the Dockerfile, we create the directory /mynodeapp/public; a folder on the host will be mounted there.
In nginx-docker.conf, we use mynodeapp for the server address, because it is linked.
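The configuration file is not preserved here; a minimal sketch of such a proxy setup (the port and paths are assumptions):

```nginx
# Sketch only - reconstructed from the description, not the original file.
server {
    listen 80;

    # Static assets served directly by Nginx from the mounted folder.
    location /stylesheets/ {
        root /mynodeapp/public;
    }

    # Everything else is proxied to the linked app container.
    location / {
        proxy_pass http://mynodeapp:3000;
        proxy_set_header Host $host;
    }
}
```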
Build the image and run the container.
Run mynginx container.
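Reconstructed from the flags described below (the exact mount paths, especially the log folders, are assumptions):

```shell
docker run -d --name mynginx --restart=always \
  --link mynodeapp:mynodeapp \
  -v /mydata/mynodeapp/public:/mynodeapp/public \
  -v /mydata/log_nginx:/var/log/nginx \
  -p 80:80 -p 443:443 \
  mynginx
```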
--link mynodeapp:mynodeapp means we link the mynodeapp container to the mynginx container. We don't link myredis and mypostgres, because mynginx does not access them directly. We also mount 2 folders for logging.
-p 443:443 is for HTTPS; however, this example does not provide SSL certificate files.
Run curl localhost and curl localhost/stylesheets/style.css to check if mynginx runs correctly.
Now we have finished deploying a web app, Redis, Postgres and Nginx with Docker. It took me a lot of time to deploy my real app with Docker. Luckily I tested everything in a VirtualBox VM first; with Docker I can easily delete and re-create images and containers back and forth.
An important part is missing: restoring and backing up the database. I will show that in another tutorial. Here are some tips about Docker.