Running AWS Cloud Services Locally with LocalStack

Having been a remote developer for well over a decade now, I’m a big fan of having the entire development environment running locally and avoiding as many external dependencies as possible. I managed to make that happen with the local dev setup at a previous company but then, later, S3 storage came into play and we (developers) had to share that resource to keep costs low. That meant our local databases wouldn’t be in sync with the contents of our shared dev S3 storage. Moreover, self-cleaning routines that would identify orphaned objects (i.e. files that existed in S3 storage but did not exist in the database) were a bit tough to test without stepping on each other or building in if-dev-else logic (which I’m not a big fan of).

Recently, I was made aware of a fully functional local cloud stack called LocalStack that advertises, among MANY other AWS services, the ability to run a local S3 instance for development purposes. Without getting into too much more about what it is or why it’s cool, below are the steps you can take to set up and use LocalStack to run a local S3 instance, along with details about how to run many of the other services it offers.

For the record, I am doing all of this on macOS Big Sur but Docker is doing all of the heavy lifting so the commands should translate to other operating systems and shells.

First things first, you will need the AWS CLI or some other tool for communicating with or viewing your AWS services. If you are also on macOS and use Homebrew (and why wouldn’t you?), the easiest way to install the AWS Command Line Interface is to run the following command.

brew install awscli

Otherwise, check the AWS Command Line Interface User Guide for other installation options and other operating systems.

Once aws-cli is installed, you can open your favorite terminal and run the ‘aws’ command to see usage details.
[Screenshot: aws-cli usage output]

Next, all you have to do is spin up LocalStack with a simple docker command. I won’t get into what Docker is or how to set it up here. If it is new to you, just search for Docker Desktop, get it installed, and return to this step.

docker run --rm -it -p 4566:4566 -p 4571:4571 localstack/localstack

The above docker command will pull the latest official LocalStack image from Docker Hub, spin up a new container using that image, and expose two ports (4566 and 4571) for communicating with the services within. This is a bare-minimum example of spinning up an instance of LocalStack (or any image/container, for that matter). There is no persistence to this container; once you stop it, everything within it is lost. If you want a more persistent S3 solution, you’ll want to create a volume for the container to use when storing its data.
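As a sketch of what that could look like, the following mounts a named Docker volume and points LocalStack at it via the DATA_DIR environment variable. Note that DATA_DIR is the persistence setting used by older LocalStack releases; newer releases (1.0+) use a different mechanism (PERSISTENCE=1 and /var/lib/localstack), so check the documentation for your version.

```shell
# Sketch: persist LocalStack S3 data across container restarts using a
# named volume. DATA_DIR applies to older LocalStack releases only.
docker run --rm -it \
  -p 4566:4566 -p 4571:4571 \
  -e DATA_DIR=/tmp/localstack/data \
  -v localstack-data:/tmp/localstack \
  localstack/localstack
```

Docker creates the "localstack-data" volume on first use, and any buckets or objects written under DATA_DIR survive stopping the container.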

Also note that the above example places no limits on which supported AWS services are started within the LocalStack container. If you would like to start only S3 (or a smaller subset of services), you can provide a comma-delimited list of desired services to the container via an environment variable called "SERVICES". For example, the following would start the same image but only the S3 service.

docker run --rm -it -p 4566:4566 -p 4571:4571 -e "SERVICES=s3" localstack/localstack

The following is what you can expect as a result of the above command.
[Screenshot: LocalStack docker container output]

Since the container was started in interactive mode (-it), the terminal will remain attached to the container and display the container logs as they grow. CTRL+C will destroy the container but, again, note that doing so will lose any data you've put into the container so far (buckets, files, etc.) unless you've set up a persistent volume and attached it to the container.

I’d suggest switching to docker-compose for much more customization than that. It will allow you to set up a persistent volume, define various environment variables, port mappings, etc., while being much easier to read and maintain than one long, growing command line. There is an example docker-compose.yml file in the LocalStack GitHub README.
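A minimal compose file along those lines might look like the following. This is a sketch, not the official example: the DATA_DIR setting and /tmp/localstack mount assume an older LocalStack release, so adapt it to the version in the LocalStack README that matches your install.

```yaml
version: "3.8"
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
      - "4571:4571"
    environment:
      - SERVICES=s3                    # start only the S3 service
      - DATA_DIR=/tmp/localstack/data  # persistence dir (older releases)
    volumes:
      - localstack-data:/tmp/localstack
volumes:
  localstack-data:
```

With this in place, "docker-compose up" replaces the long docker run command, and the named volume keeps your S3 data between runs.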

Once your S3 instance is up and running, you can verify it the same way you would any other S3 instance. For example, you can hit the health check URL at http://localhost:4566/health. The output should be similar to the following. Depending on the services you started, the "services" section will vary; in my case, I only started S3.

"services": {

"features": {

Now that the S3 instance is up and running, you will want to connect to it to manage buckets and files, but there is still authentication to consider. It appears that LocalStack (at least for S3) does not care what access key and secret you provide, just that they are provided (or that you explicitly instruct it to ignore them). For example, you can run the following to create a new bucket without providing an access key/secret.

 aws --no-sign-request --endpoint-url=http://localhost:4566 s3 mb s3://acoderslife 

The following will list all buckets in the S3 instance:
 aws --no-sign-request --endpoint-url=http://localhost:4566 s3 ls 

You can upload a file to the bucket with the following example:
 aws --no-sign-request --endpoint-url=http://localhost:4566 s3 cp test.txt s3://acoderslife 

And, finally, you can list the bucket’s contents to see that uploaded file using the following:
aws --no-sign-request --endpoint-url=http://localhost:4566 s3 ls s3://acoderslife

Note that in all of those examples, the request was not signed which means no credentials were provided. If you do want to sign your requests (and I’d expect you do), you can just run the “aws configure” command to set them up. Again, it does not matter to LocalStack S3 what access key or secret you provide, just that you provide them (when you are signing the request). Below is an example output of the configure command.
[Screenshot: aws-cli configure output]
Once you’ve set up your key and secret, all of the commands above will work without the --no-sign-request option.
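If you'd rather skip the interactive prompts, "aws configure set" writes the same values non-interactively. The key, secret, and region below are arbitrary placeholders, since LocalStack only checks that something is provided:

```shell
# Arbitrary credentials -- LocalStack does not validate them.
aws configure set aws_access_key_id test
aws configure set aws_secret_access_key test
aws configure set region us-east-1
```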
