By Akshit Pratiush and Anish Singh Walia

DigitalOcean Spaces is an S3-compatible object storage service that supports detailed access logging. Spaces can automatically generate access logs for buckets and store them in a bucket you specify. These access logs capture reads, writes, and deletions of objects within the bucket, requests made to origin endpoints, and, if the CDN is enabled, accesses to the bucket's CDN endpoints.
However, there’s a gap.
While Spaces provides the logs, it does not natively offer a built-in analytics or visualization layer. This is where tools like GoAccess come in, allowing teams to transform raw log files into meaningful, interactive dashboards. This tutorial takes raw, S3-style access logs from Spaces and turns them into a human-readable, real-time dashboard using lightweight, open-source tooling.
In this tutorial, you will learn how to:
- Install and configure GoAccess on an Ubuntu Droplet
- Set up the AWS CLI v2 to talk to DigitalOcean Spaces
- Sync, decompress, and combine your Spaces access logs
- Generate an interactive HTML dashboard and serve it with Nginx
This guide is especially useful for developers, DevOps engineers, and platform teams who want better visibility into Spaces usage.
Before you begin, make sure you have:
- A DigitalOcean account with a Spaces bucket and access logging enabled
- A Spaces access key and secret key
- An Ubuntu Droplet with SSH access
Create a new Droplet using Ubuntu. A basic Droplet (1 vCPU / 1 GB RAM) is sufficient for moderate log volumes. Once created, SSH into the Droplet.
If you are new to Spaces setup, review How to Create a Spaces Bucket before continuing.
GoAccess is a real-time web log analyzer and interactive viewer. We’ll use it to visualize our Spaces access logs in a browser-friendly HTML report.
Install GoAccess by updating your package list and installing the package:
sudo apt update
sudo apt install -y goaccess
Verify the installation by printing the version:
goaccess --version
This should output the installed version number, confirming it’s ready to use.
The AWS CLI v2 is required to interact with DigitalOcean Spaces, which uses S3-compatible APIs. With this tool, you’ll be able to download your access logs programmatically.
Install AWS CLI v2:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o /tmp/awscliv2.zip
sudo apt install -y unzip
unzip /tmp/awscliv2.zip -d /tmp
sudo /tmp/aws/install
aws --version
The aws --version command confirms the CLI is installed and accessible. Next, you'll need to provide credentials so the AWS CLI can access your Spaces bucket.
Set up the credentials folder and open the credentials file:
mkdir -p ~/.aws
nano ~/.aws/credentials
mkdir -p ~/.aws ensures the configuration directory exists, and nano ~/.aws/credentials creates or opens the credentials file for editing. Add your Spaces keys:
[default]
aws_access_key_id = YOUR_SPACES_ACCESS_KEY
aws_secret_access_key = YOUR_SPACES_SECRET_KEY
Replace YOUR_SPACES_ACCESS_KEY and YOUR_SPACES_SECRET_KEY with your actual credentials from DigitalOcean.
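Because this file contains secrets, it is worth restricting its permissions so only your user can read it. This is a small hardening step beyond the basic setup described above:

```shell
# Ensure the directory and file exist, then lock the file down
# so only the owner can read or write it.
mkdir -p ~/.aws
touch ~/.aws/credentials
chmod 600 ~/.aws/credentials
```

With mode 600, other local users on the Droplet cannot read your Spaces keys.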
Configure the region and output format:
nano ~/.aws/config
[default]
region = sgp1
output = json
The region should match your Spaces region (e.g., sgp1 for Singapore); output = json sets the default AWS CLI output format.
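As an alternative to the two files above, the AWS CLI also reads credentials and region from environment variables, which take precedence over the files. The values below are placeholders:

```shell
# Environment variables override the credentials/config files,
# which is convenient for one-off sessions or CI jobs where you
# would rather not write keys to disk.
export AWS_ACCESS_KEY_ID="YOUR_SPACES_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SPACES_SECRET_KEY"
export AWS_DEFAULT_REGION="sgp1"
```

Either approach works with the sync commands in the next step.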
Now, you’ll pull the access logs from your Spaces bucket down to your Droplet.
Set variables for your log bucket, prefix, and endpoint:
LOG_BUCKET="your-log-bucket-name"
LOG_PREFIX="spaces-logs-prefix/" # Prefix path where logs are written
ENDPOINT="https://sgp1.digitaloceanspaces.com"
Change LOG_BUCKET and LOG_PREFIX to match your setup.
Create a local directory for logs and sync them:
mkdir -p /var/log/spaces-logs
cd /var/log/spaces-logs
# Copy all logs in the prefix to local storage
aws --endpoint-url "$ENDPOINT" s3 cp "s3://$LOG_BUCKET/$LOG_PREFIX" ./ --recursive
The mkdir and cd commands set up a working directory for your logs, and aws s3 cp recursively copies the log files from Spaces to your local directory. Example variable values:
LOG_BUCKET="akshit-logs"
LOG_PREFIX="logs/"
ENDPOINT="https://sgp1.digitaloceanspaces.com"
Use these as a template for your own values.
Access log files from Spaces are typically compressed with gzip (.gz) to save space.
To decompress all .gz files to plain .log files:
find . -type f -name "*.gz" -exec gunzip -kf {} \;
This finds all .gz files and decompresses them in place. The -k option keeps the original .gz files, and -f forces overwriting if needed. To simplify analysis, combine all the decompressed log files into a single file:
find . -type f -name "*.log" -print0 | xargs -0 cat -- > combined.log
This finds every decompressed .log file and concatenates them into combined.log. Sanity check:
wc -l combined.log
This prints the line count of combined.log, so you can confirm that data has been collected. Now, use GoAccess to analyze the combined log file and create your dashboard.
Recommended GoAccess command:
goaccess /var/log/spaces-logs/combined.log \
--log-format='%^ %^ [%d:%t %^] %h %^ %^ %^ %^ "%r" %s %^ %b %^ %^ %^ "%R" "%u" %^ %^ %^ %^ %^ %^' \
--date-format=%d/%b/%Y \
--time-format=%T \
-a -o /var/www/html/spaces_report.html
The -a flag enables the per-host user-agent list, and -o defines the output path for the HTML dashboard. If you are working with CDN logs in CloudFront format, use instead:
goaccess /var/log/spaces-logs/combined.log \
--log-format=CLOUDFRONT \
-a -o /var/www/html/spaces_report.html
The CLOUDFRONT format covers standard CDN logs. For the custom command above, the log format string tells GoAccess how to parse each field in the S3 log entry:
- %d/%b/%Y and %T match the date and time formatting, e.g., [04/May/2016:12:11:43 +0000].
- %^ skips fields you don't want to process.
- %h grabs the IP address.
- "%r" captures the HTTP request line in quotes.
- %s, %b, "%R", and "%u" capture the response code, bytes sent, referrer, and user agent, respectively.

Quick validation: before deploying, you can quickly generate a preview report to ensure parsing works:
goaccess /var/log/spaces-logs/combined.log \
--log-format='%^ %^ [%d:%t %^] %h %^ %^ %^ %^ "%r" %s %^ %b %^ %^ %^ "%R" "%u" %^ %^ %^ %^ %^ %^' \
--date-format=%d/%b/%Y \
--time-format=%T \
--no-global-config \
-o /tmp/spaces_report_test.html
This creates a test dashboard in /tmp. Check that data displays as expected before publishing live.
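To see which positional fields the format string is pointing at, you can split a sample entry with awk. The log line below is illustrative, not from a real bucket; in this whitespace-split view, the client IP lands in field 5 and the HTTP status in field 13:

```shell
# A made-up S3-style access log line, for illustration only.
line='OWNER bucket [04/May/2016:12:11:43 +0000] 203.0.113.10 requester reqid REST.GET.OBJECT file.png "GET /file.png HTTP/1.1" 200 - 1024 1024 10 9 "-" "curl/8.5.0" - - - - - -'

# awk splits on whitespace, so the quoted request line spans fields 10-12;
# field 5 is the client IP and field 13 is the status code.
echo "$line" | awk '{print $5, $13}'
# → 203.0.113.10 200
```

Checking a real line from combined.log the same way is a quick way to confirm your format string lines up before generating a full report.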
Nginx is a lightweight web server used to serve your GoAccess HTML dashboard.
Install Nginx:
sudo apt update
sudo apt install -y nginx
Set permissions so the web server can read your report:
sudo chown -R www-data:www-data /var/www/html
sudo chmod -R 755 /var/www/html
Start and enable Nginx:
sudo systemctl enable --now nginx
Now, open http://YOUR_DROPLET_IP/spaces_report.html in your web browser.
You will see an interactive dashboard visualizing your Spaces access logs, enabling quick insights into storage activity, requests, and usage patterns.
How do you check whether access logging is enabled? Use the bucket Settings tab in the control panel, or run aws s3api get-bucket-logging against the Spaces endpoint; the response includes LoggingEnabled when logging is active.
Where are the logs stored? Spaces stores logs in the destination bucket and prefix you set in the logging configuration. Keep the destination and source buckets separate, and use the same region for best performance.
Can GoAccess parse Spaces logs? Yes; GoAccess can process S3-compatible logs when you provide a matching --log-format, --date-format, and --time-format. For continuously updated dashboards, run the sync and report generation on a schedule.
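The scheduled refresh mentioned above can be sketched as a small script. The endpoint, bucket, prefix, paths, and log format here are assumptions carried over from the earlier steps, so adjust them to your setup:

```shell
# Write a refresh script (bucket, prefix, and paths are assumptions
# matching the examples in this tutorial).
cat > /tmp/refresh-spaces-report.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
ENDPOINT="https://sgp1.digitaloceanspaces.com"
LOG_BUCKET="your-log-bucket-name"
LOG_PREFIX="logs/"
cd /var/log/spaces-logs
aws --endpoint-url "$ENDPOINT" s3 cp "s3://$LOG_BUCKET/$LOG_PREFIX" ./ --recursive
find . -type f -name "*.gz" -exec gunzip -kf {} \;
# Exclude combined.log itself so repeated runs do not fold old output back in.
find . -type f -name "*.log" ! -name combined.log -print0 | xargs -0 cat > combined.log
goaccess combined.log --log-format=CLOUDFRONT -a -o /var/www/html/spaces_report.html
EOF
chmod +x /tmp/refresh-spaces-report.sh
```

Move the script somewhere permanent such as /usr/local/bin and call it from cron, for example with a crontab entry like 0 * * * * /usr/local/bin/refresh-spaces-report.sh to rebuild the dashboard hourly.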
Why is the dashboard empty? The most common causes are mismatched log format tokens, an empty combined.log, or delays in asynchronous log delivery. Validate with wc -l combined.log and test parsing to /tmp/spaces_report_test.html.
How long do logs take to appear? Access logs are delivered asynchronously. In most cases logs appear within about an hour, but delays can be longer during peak periods.
In this tutorial, you configured DigitalOcean Spaces access logging, synced logs locally using the S3-compatible AWS CLI, and transformed raw log data into an interactive HTML dashboard using GoAccess. This workflow provides a lightweight and cost-effective way to gain visibility into your storage usage without relying on heavy observability stacks.
By combining Spaces’ built-in logging with open-source tooling, you now have a practical foundation for monitoring traffic patterns, debugging issues, and understanding how your data is being accessed.
To extend this setup, review the Spaces documentation and test DigitalOcean Spaces with your production logging workflow. You can also connect your broader infrastructure with DigitalOcean Kubernetes when you are ready to scale analytics pipelines.