Manga Kotoba, Part 4: Going Live — Heroku, RDS, S3 and the Ephemeral Filesystem
🚢 Three days. That’s how long “just deploying it” actually took.
I was done. Or so I thought.
The API was humming along in Docker. The admin panel worked. The scraper was pulling manga metadata with surgical precision, all images landing neatly in public/uploads/. I had written tests. I had reviewed those tests. The Symfony profiler showed clean queries. It was beautiful.
Then I typed git push heroku main.
Three days later — eyes bloodshot, surrounded by empty coffee cups and a browser history consisting entirely of Heroku docs, AWS forums, and very specific StackOverflow questions about PHP PDO SSL constants — I had a working production deployment.
This is the story of those three days.
The Production Stack
📐 Every component here cost me at least one unexpected error in production.
Before I get into the war stories, let me show you what the final architecture looks like. Click any component to learn what it does and how it connects to the rest of the system.
Production architecture — click any component
The dyno is the brains of the operation — it runs both Nginx and PHP-FPM as sibling processes, started by a Procfile. The database lives in Amazon RDS (not one of Heroku’s own database add-ons; more on that below), and images bypass the dyno’s filesystem entirely and go straight to S3.
The Procfile and the Heroku Way
🐳 Locally I used docker-compose.yml. On Heroku the same Dockerfile runs, but Procfile takes over process management.
The shift from Docker Compose to Heroku is conceptually simple, but it trips people up the first time. Locally, docker-compose.yml manages how services start. On Heroku, a Procfile in the repo root does the job instead. My final Procfile looks like this:
```
release: php bin/console doctrine:migrations:migrate --no-interaction
web: heroku-php-nginx -C nginx_app.conf public/
```
Two lines. release runs once before traffic is routed to the new dyno — it’s the right place for migrations (more on that shortly). web declares the actual server process using Heroku’s official PHP buildpack helper that wires up Nginx and PHP-FPM together.
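The `-C nginx_app.conf` flag points the buildpack at a custom Nginx config fragment, which it includes inside its generated `server` block. For a Symfony app, a minimal version of that file might look like this (a sketch, not my exact file; the only essential piece is routing everything through the front controller):

```nginx
# nginx_app.conf — minimal sketch, included by heroku-php-nginx
# inside the server block it generates.
location / {
    # Send anything that isn't a real file to Symfony's front controller.
    try_files $uri /index.php$is_args$args;
}
```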
The docker/entrypoint.sh I used locally does the same job, but differently — it polls until the DB is ready, runs migrations, warms the cache, then execs PHP-FPM:
```sh
#!/bin/sh
set -e

echo "Waiting for DB to be ready..."
until php bin/console doctrine:migrations:status --no-interaction > /dev/null 2>&1; do
    echo "  DB not ready yet, retrying..."
    sleep 3
done

echo "Running migrations..."
php bin/console doctrine:migrations:migrate --no-interaction --allow-no-migration || true

echo "Clearing cache..."
php bin/console cache:warmup --no-interaction || true

echo "Starting PHP-FPM..."
exec php-fpm
```
The polling loop makes sense with Docker Compose because the DB container might not be ready when the app container starts. On Heroku, RDS is always up — no polling needed.
The Ephemeral Filesystem: The Lesson That Hurts
💾 “Ephemeral” is a polite word for “everything you wrote to disk is gone on the next deploy.”
Here is the gotcha that almost every developer runs into the first time they deploy a file-upload feature to Heroku:
The filesystem does not persist between deploys.
Every time you push to Heroku, the entire dyno is rebuilt from the Docker image. Any files written to disk during the previous dyno’s lifetime — uploaded images, generated PDFs, cached thumbnails — simply do not exist anymore.
I discovered this at 11pm on day one, after spending the afternoon building a very pretty image upload UI. I deployed. I uploaded a manga cover. I visited the page. The image was there. I deployed again (fixing a typo in a CSS class). I visited the page. The image was gone.
The simulation below shows exactly what happened, and then the fix:
🎬 Ephemeral filesystem simulator
Upload files, then deploy — and watch what happens.
The fix, once you understand the problem, is obvious: don’t use the local filesystem. Use a proper object store — in this case, Amazon S3.
Migrating Images to S3
☁️ The league/flysystem abstraction is great for this, but I went with a thin custom wrapper so I could control the URL format exactly.
Flysystem is the canonical PHP approach to filesystem abstraction. You configure adapters (local, S3, GCS, SFTP) and swap them without changing application code. I ended up writing a lighter-weight S3StorageService instead — it gives me more control over the public URL format and the fallback behaviour.
First, the dependency:
```sh
composer require aws/aws-sdk-php
```
Then the service. The key design decision: isConfigured() returns false when the env vars are empty, so local development keeps working without any AWS setup:
```php
use Aws\S3\S3Client;

final class S3StorageService
{
    private ?S3Client $client = null;

    public function __construct(
        private readonly string $bucket,
        private readonly string $region,
        private readonly string $accessKeyId,
        private readonly string $secretAccessKey,
    ) {}

    public function isConfigured(): bool
    {
        return $this->bucket !== '' && $this->region !== '';
    }

    public function upload(mixed $source, string $key, string $mimeType = 'application/octet-stream'): string
    {
        $body = is_string($source) ? fopen($source, 'rb') : $source;

        $this->getClient()->putObject([
            'Bucket' => $this->bucket,
            'Key' => $key,
            'Body' => $body,
            'ContentType' => $mimeType,
        ]);

        return $this->publicUrl($key);
    }

    public function publicUrl(string $key): string
    {
        // Virtual-hosted-style: https://bucket.s3.region.amazonaws.com/key
        return sprintf('https://%s.s3.%s.amazonaws.com/%s', $this->bucket, $this->region, $key);
    }

    private function getClient(): S3Client
    {
        // Lazily construct the client so the service can be instantiated
        // without AWS credentials during local development.
        return $this->client ??= new S3Client([
            'version' => 'latest',
            'region' => $this->region,
            'credentials' => [
                'key' => $this->accessKeyId,
                'secret' => $this->secretAccessKey,
            ],
        ]);
    }
}
```
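For the constructor arguments to resolve, the service needs explicit wiring in the container. Something along these lines works (the env var names here are my assumption; match them to whatever you actually set on Heroku), with empty-string defaults in the committed `.env` so `isConfigured()` returns `false` locally:

```yaml
# config/services.yaml (sketch: env var names are assumptions)
services:
    App\Service\S3StorageService:
        arguments:
            $bucket: '%env(AWS_S3_BUCKET)%'
            $region: '%env(AWS_S3_REGION)%'
            $accessKeyId: '%env(AWS_ACCESS_KEY_ID)%'
            $secretAccessKey: '%env(AWS_SECRET_ACCESS_KEY)%'
```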
The ImageDownloaderService (used by the scraper to mirror cover images) changed from “save to public/uploads/” to “upload to S3 or fall back to disk”:
```php
/**
 * Downloads remote images to the local public/uploads/ directory
 * and returns a server-relative URL stored in the database.
 */
class ImageDownloaderService
{
    public function __construct(
        private readonly HttpClientInterface $httpClient,
        private readonly string $projectDir,
    ) {}

    public function download(string $remoteUrl, string $subfolder = 'covers'): ?string
    {
        $ext = pathinfo(...);
        $basename = hash('sha256', $remoteUrl) . '.' . $ext;
        $localPath = $this->projectDir . '/public/uploads/' . $subfolder . '/' . $basename;

        // ... write to local filesystem ...

        return '/uploads/' . $subfolder . '/' . $basename; // server-relative URL
    }
}
```
```php
/**
 * Downloads remote images and stores them in S3 (when configured)
 * or on local disk (development fallback).
 */
class ImageDownloaderService
{
    public function __construct(
        private readonly HttpClientInterface $httpClient,
        private readonly string $projectDir,
        private readonly S3StorageService $s3, // new
    ) {}

    public function download(string $remoteUrl, string $subfolder = 'covers'): ?string
    {
        $ext = pathinfo(...);
        $basename = hash('sha256', $remoteUrl) . '.' . $ext;
        $key = $subfolder . '/' . $basename;

        // ... fetch $bytes from $remoteUrl ...

        if ($this->s3->isConfigured()) {
            return $this->s3->upload($stream, $key, $mimeType); // full HTTPS URL
        }

        // Local fallback: write to public/uploads/
        file_put_contents($localPath, $bytes);

        return '/uploads/' . $key;
    }
}
```
The ACL Gotcha
Commit 7f89234 was a one-line fix that took 45 minutes to diagnose.
The original upload() method included `'ACL' => 'public-read'` in the putObject call. AWS changed the default for new S3 buckets to Object Ownership: Bucket owner enforced — which means ACLs are disabled at the bucket level. If you try to set an ACL on an object in such a bucket, S3 rejects the request with `AccessControlListNotSupported: The bucket does not allow ACLs`.
The fix: remove the ACL line entirely. Public access is now controlled via the bucket policy instead, which is actually cleaner. One line deleted:
```diff
 $this->getClient()->putObject([
     'Bucket' => $this->bucket,
     'Key' => $key,
     'Body' => $body,
     'ContentType' => $mimeType,
-    'ACL' => 'public-read', // ← removed; bucket has ACLs disabled
 ]);
```
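For completeness, “controlled via the bucket policy” means a policy along these lines. This is a sketch: replace the placeholder bucket name, and narrow the `Resource` path if you only want a prefix public. Note that the bucket’s Block Public Access settings must also allow public bucket policies for this to take effect.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        }
    ]
}
```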
Amazon RDS: The SSL Nightmare
🔐 The global-bundle.pem file from AWS is a common stumbling block — there are 3 different ways to configure it and 2 of them don’t work on Heroku.
The original plan was to use Heroku’s JawsDB MySQL add-on. It works, but I wanted more control: larger instances, RDS-level backups, the ability to run in the same AWS region as my S3 bucket. So I switched to Amazon RDS.
That decision kicked off a 36-hour SSL certificate saga.
Play through the deployment log below to see every step of the journey:
The key insight: PDO’s MYSQL_ATTR_SSL_CA (constant 1009) and MYSQL_ATTR_SSL_VERIFY_SERVER_CERT (constant 1014) need to be set using their integer values in doctrine.yaml, because a bare PHP constant name is just a string to the YAML parser (Symfony only resolves constants behind the `!php/const` tag, and the plain integers are harder to get wrong).
Here is the final working doctrine.yaml after commits 5ed1ee0 and 72c8e22:
```yaml
doctrine:
    dbal:
        url: '%env(resolve:DATABASE_URL)%'
        server_version: '8.0.42'
        profiling_collect_backtrace: '%kernel.debug%'
        options:
            # PDO::MYSQL_ATTR_SSL_CA = 1009
            1009: '%kernel.project_dir%/global-bundle.pem'
            # PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT = 1014
            # Disable strict chain verification (Heroku OpenSSL vs multi-cert bundle).
            # Connection is still fully encrypted via TLS.
            1014: false
    orm:
        naming_strategy: doctrine.orm.naming_strategy.underscore_number_aware
        auto_mapping: true
```
MYSQL_ATTR_SSL_VERIFY_SERVER_CERT = false means we don't validate the RDS certificate chain. The connection is still encrypted, but you are theoretically vulnerable to a man-in-the-middle attack from inside AWS's network. For a hobby project this is acceptable. For anything handling sensitive data, spend the time to get the cert path working correctly.
Database Migrations in Production
🚀 A release phase in Procfile is one of the simplest and most reliable ways to handle database migrations in Heroku deployments.
Running database migrations in production is one of those things that looks simple and isn’t. The naive approach is to run doctrine:migrations:migrate in the Docker entrypoint — but that means migrations run in parallel with the old app still serving traffic.
If migration A drops a column that the old app is still reading, you have a bad time.
Heroku’s release phase solves this cleanly. Declare a release process in Procfile:
```
release: php bin/console doctrine:migrations:migrate --no-interaction
web: heroku-php-nginx -C nginx_app.conf public/
```
Heroku’s behaviour:
- Builds the new Docker image
- Runs the `release` command in a one-off dyno using the new image
- Only if `release` exits 0, routes traffic to the new `web` dynos
- If `release` fails, the old dynos keep running — no downtime
This means migrations run atomically before the new code is live. No concurrent access to the new schema from the old code. It’s not a full zero-downtime migration strategy (you still need backward-compatible migrations for that), but it’s a huge improvement over running migrations in the entrypoint.
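What “backward-compatible” demands in practice is the expand/contract pattern. A hypothetical column rename (the table and column names below are invented for illustration) spreads over three deploys instead of one:

```sql
-- Deploy 1 (expand): add the new column; old code keeps reading title_jp
ALTER TABLE manga ADD COLUMN title_ja VARCHAR(255) NULL;
UPDATE manga SET title_ja = title_jp;

-- Deploy 2: ship code that reads and writes title_ja only

-- Deploy 3 (contract): drop the old column once nothing references it
ALTER TABLE manga DROP COLUMN title_jp;
```

Each individual migration is safe to run while the previous deploy’s code is still serving traffic.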
The 400 on Admin Login
🕵️ Heroku’s reverse proxy changes the HTTP environment in subtle ways. Always test your auth flows in a staging env before going live.
After the SSL saga, I thought I was done. I opened the admin panel. Entered my credentials. Hit Submit.
400 Bad Request.
Locally it worked fine. On Heroku, 400. No other information. No logs. Just 400.
After an hour of adding dump() calls everywhere, I traced it to Symfony’s form_login authenticator. The issue: Symfony’s form login looks for specific request parameter names (_username and _password by default). My login form was sending email and password. Locally, some quirk of the request handling let this slide. On Heroku, with its reverse proxy adding headers and potentially changing Content-Type negotiation, it broke.
The fix was to explicitly tell Symfony which field names to use d6dbd2c:
```diff
 security:
     firewalls:
         main:
             form_login:
                 login_path: app_login
                 check_path: app_login
                 enable_csrf: true
                 default_target_path: admin_index
+                username_parameter: email
+                password_parameter: password
             logout:
                 path: app_logout
                 target: app_login
```
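The login form then has to POST exactly those field names to the check path. A hypothetical minimal template (my sketch, not the project’s actual Twig; `authenticate` is Symfony’s default CSRF token id for form_login):

```twig
{# templates/security/login.html.twig — hypothetical sketch #}
<form method="post" action="{{ path('app_login') }}">
    <input type="hidden" name="_csrf_token" value="{{ csrf_token('authenticate') }}">
    <input type="email" name="email" required>
    <input type="password" name="password" required>
    <button type="submit">Log in</button>
</form>
```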
Environment Variables
🔑 Never commit .env with real credentials to git. Use heroku config:set and keep your .env file for local defaults only.
One of the most important things to get right in any Heroku deployment: environment variable hygiene. The app needs a surprising number of them. Click each variable to see where it comes from and how to set it:
⚙️ Required Heroku config vars
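Setting them all from the CLI looks roughly like this. `DATABASE_URL` and the `APP_*` pair are standard Symfony; the AWS names must match whatever you wired into services.yaml, and every value below is a placeholder:

```sh
# Placeholder values throughout — substitute your own.
heroku config:set \
  APP_ENV=prod \
  APP_SECRET="$(openssl rand -hex 16)" \
  DATABASE_URL="mysql://user:password@your-rds-host.rds.amazonaws.com:3306/dbname?serverVersion=8.0.42" \
  AWS_S3_BUCKET=your-bucket \
  AWS_S3_REGION=your-region \
  AWS_ACCESS_KEY_ID=your-key-id \
  AWS_SECRET_ACCESS_KEY=your-secret
```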
The N+1 Query Bomb
📊 The max_questions limit on RDS free tier is surprisingly low. One N+1 query in a list endpoint can hit it instantly.
When I switched from JawsDB to RDS free tier I hit a new problem almost immediately: the API started returning 500 errors under minimal load.
The logs showed:

```
SQLSTATE[HY000]: General error: 1226 User 'kotoba_user' has exceeded the 'max_questions' resource
```
The culprit was a classic N+1 query in the manga listing endpoint. For each manga, the endpoint was executing a separate query to count its volumes. With 500 mangas in the DB, one page request fired 501 queries. Commit 86dcf38 killed it with an eager JOIN:
```php
// For each manga, a separate COUNT query — O(n) database calls
foreach ($mangas as $manga) {
    $volumeCount = $this->volumeRepository->count(['manga' => $manga]);
}
```
```php
// One query with LEFT JOIN + GROUP BY — O(1) database calls
$qb->select('m, COUNT(v.id) AS HIDDEN volumeCount')
    ->leftJoin('m.volumes', 'v')
    ->groupBy('m.id')
    ->addOrderBy('volumeCount', 'DESC');
```
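One subtlety: `HIDDEN` makes volumeCount usable in ORDER BY without it appearing in the result set. If the endpoint also needs to return the count, a variant without `HIDDEN` hydrates mixed rows instead (a sketch, assuming the same `$qb` built from the manga repository):

```php
// Without HIDDEN each row is a mixed result:
// [0 => Manga entity, 'volumeCount' => scalar]
$rows = $qb->select('m, COUNT(v.id) AS volumeCount')
    ->leftJoin('m.volumes', 'v')
    ->groupBy('m.id')
    ->addOrderBy('volumeCount', 'DESC')
    ->getQuery()
    ->getResult();

foreach ($rows as $row) {
    $manga = $row[0];
    $volumeCount = (int) $row['volumeCount'];
    // ... build the response item ...
}
```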
Production Readiness Checklist
✅ I went through this list at 2am before sharing the app URL. It’s not exhaustive, but it covers the things that bit me most in this deploy.
Tick off each item as you go. Progress is tracked in the bar at the top.
🚀 Pre-launch checklist
What’s Next: Part 5
📱 The iOS side of the project was where the real magic happened — swipe cards, furigana rendering, StoreKit paywall.
With the backend deployed and battle-tested, it was time to build the thing users actually touch: the iOS app.
Part 5 covers the SwiftUI deep-dive:
- Onboarding flow — welcome screens, permissions, initial manga selection
- The swipe card UI — the `DragGesture` implementation that makes vocabulary review feel satisfying
- Furigana rendering — combining `AttributedString` and a custom layout to show readings above kanji
- StoreKit 2 paywall — subscription management, purchase restoration, receipt validation
The backend is just plumbing. The iOS app is where the product lives.