My Journey Testing FrankenPHP: From Curiosity to Production-Ready Setup

By Adrián Pastén Lagos - Software Engineer @ Apiumhub

A few months ago, I kept seeing FrankenPHP mentioned everywhere in the PHP community. The promise was compelling: a modern, Go-powered PHP application server with worker mode that could dramatically improve performance. But as someone who's been burned by hype before, I wanted to see it for myself.

So I decided to build something real and put it to the test.

Starting With a Real Application

I didn't want to test FrankenPHP with a "hello world" app or synthetic benchmarks. Those numbers look great in marketing material but tell you nothing about real-world performance. Instead, I built an actual blog application using Symfony—the kind of thing you'd actually deploy to production.

The requirements were simple but realistic:

  • Display a list of blog posts from a PostgreSQL database
  • Render HTML templates with Twig
  • Handle database queries with Doctrine ORM
  • Serve static assets
  • Log requests and errors

Nothing fancy, but enough complexity to reveal real performance characteristics.
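
To make that concrete, here's the shape of the main controller. This is a sketch, not the exact app code: the class, template, and property names are illustrative, and it assumes Symfony 6.4+ attribute routing.

<?php
// src/Controller/PostController.php (illustrative sketch, not the exact app code)

namespace App\Controller;

use App\Repository\PostRepository;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Attribute\Route;

class PostController extends AbstractController
{
    #[Route('/', name: 'post_index')]
    public function index(PostRepository $posts): Response
    {
        // Doctrine fetches the latest posts; Twig renders the HTML
        return $this->render('post/index.html.twig', [
            'posts' => $posts->findBy([], ['publishedAt' => 'DESC'], 20),
        ]);
    }
}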

The PHP-FPM Baseline

First, I needed a baseline. I deployed the blog with the traditional PHP-FPM setup that most of us know and trust.

Results:

  • Throughput: ~600 requests/sec
  • Response time: 50-75ms average

It worked. Requests were served. Users would be happy enough. But "okay" isn't what we're aiming for anymore, is it?
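
If you want to reproduce this kind of baseline, any HTTP load generator will do; with hey, for example (the URL is illustrative):

# 3 minutes of sustained load with 100 concurrent connections
hey -z 3m -c 100 http://blog.localhost/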

Enter FrankenPHP

I switched the same app to FrankenPHP, first in normal mode, then with worker mode enabled. The journey looked like this:

Configuration      | Throughput | Response Time
PHP-FPM (baseline) | 600 r/s    | 50-75ms
FrankenPHP normal  | 800 r/s    | 40-60ms
FrankenPHP worker  | 1,000 r/s  | 30-50ms

Better... but still not the dramatic improvement I was hoping for. Something was missing.
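
Before digging into why, a note on setup: worker mode itself is a small configuration change. A minimal sketch following the FrankenPHP docs (paths are illustrative):

# Caddyfile: keep the app booted between requests
{
    frankenphp {
        worker /app/public/index.php
    }
}

localhost {
    root * /app/public
    php_server
}

On the Symfony side, the FrankenPHP runtime (composer require runtime/frankenphp-symfony, with APP_RUNTIME=Runtime\FrankenPhpSymfony\Runtime) turns the front controller into a long-lived worker loop.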

Debugging with OpenTelemetry

To find the bottleneck, I instrumented everything with OpenTelemetry. Traces, spans, the full observability stack with Grafana, Prometheus, Loki, and Tempo.

The traces revealed the problem: database connection overhead. Every request was creating a new connection to PostgreSQL, even in worker mode.

The Fix: Persistent Connections

The solution was simple: enable persistent PDO connections, so each worker reuses its PostgreSQL connection instead of opening a new one on every request:

# config/packages/doctrine.yaml
doctrine:
    dbal:
        options:
            !php/const PDO::ATTR_PERSISTENT: true

Combined with worker mode, individual request times dropped to ~4ms.

The Production Stack

For realistic testing, I built a complete production-grade Kubernetes setup:

  • GitOps: ArgoCD for declarative deployments
  • Auto-scaling: KEDA with traffic-based scaling (not just CPU/Memory)
  • Ingress: Caddy
  • Database: PostgreSQL with the CNPG operator (see the sketch after this list)
  • Observability: Prometheus + OpenTelemetry + Grafana
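
With CNPG (CloudNativePG), the PostgreSQL cluster itself is just another declarative resource. A minimal sketch, where the name and sizes are illustrative:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: blog-db
spec:
  instances: 3        # one primary, two streaming replicas
  storage:
    size: 10Gi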

The magic metric for auto-scaling:

sum(increase(
  caddy_http_requests_total{
    job="php-barcelona", 
    handler="php"
  }[1m]
))

Threshold: 3,000 requests/min triggers scaling. This is traffic-based, not reactive CPU guessing.
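
Fed into a KEDA ScaledObject, that query drives the replica count directly. A sketch; the resource names, namespace, and Prometheus address are assumptions, not the exact setup:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: blog-scaler
spec:
  scaleTargetRef:
    name: blog          # the FrankenPHP Deployment
  minReplicaCount: 4
  maxReplicaCount: 15
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090
        query: sum(increase(caddy_http_requests_total{job="php-barcelona", handler="php"}[1m]))
        threshold: "3000"

KEDA divides the query result by the threshold to decide how many replicas to run, so scaling follows the actual request rate rather than trailing CPU usage.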

Plot Twist: The OTEL Overhead

Everything was looking great until I ran load tests. Same code, same infrastructure, but dramatically different results depending on one setting.

Metric      | With OTEL | Without OTEL | Difference
Throughput  | 1,400 r/s | 2,519 r/s    | +80%
p95 Latency | 1,000ms   | 569ms        | -43%

OpenTelemetry auto-instrumentation was consuming 44% of my capacity.

The observability tool I added to find bottlenecks... became the bottleneck.

The Trade-off

This isn't to say "don't use OTEL." It's invaluable for debugging and development. But in production:

  • Development/Debugging: Full auto-instrumentation enabled
  • Production: OTEL disabled or sampled at 1-5%

Know when you need it, know when to turn it off.
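
In practice that's a couple of environment variables on the production Deployment. A sketch using the standard OpenTelemetry SDK variables:

# Deployment spec excerpt: keep tracing on, but only sample 5% of requests
env:
  - name: OTEL_TRACES_SAMPLER
    value: traceidratio
  - name: OTEL_TRACES_SAMPLER_ARG
    value: "0.05"
  # ...or switch the SDK off entirely:
  # - name: OTEL_SDK_DISABLED
  #   value: "true"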

Final Results

With the optimized configuration (Worker Mode + Persistent PDO + OTEL disabled):

Metric         | Value
Throughput     | 2,378 req/sec (CI/CD realistic test)
Improvement    | 4x vs PHP-FPM
Total Requests | 452K in 3 minutes
Success Rate   | 100% (0 failures)
Pods           | 4 → 15 auto-scaled

Testing from different locations showed consistent results:

  • Local Docker: 3,778 r/s (pure FrankenPHP performance)
  • CI/CD Pipeline (inside cluster): 2,378 r/s (most realistic)
  • Worker Node SSH: 2,519 r/s (burst test)

All 4-6x faster than the PHP-FPM baseline.

Key Learnings

  1. Worker mode alone isn't magic – you need persistent connections too
  2. OTEL overhead is real – 44% capacity loss in production (sample or disable)
  3. KEDA + custom metrics > standard HPA – scale on traffic, not CPU spikes
  4. Measure everything – the bottleneck isn't always where you think
  5. Modern PHP is fast when properly configured 🚀

When to Use FrankenPHP

Great for:

  • Modern PHP apps (8.2+)
  • High-traffic sites
  • Kubernetes deployments
  • APIs & microservices
  • When you want HTTP/2 and HTTP/3 out of the box

Consider alternatives if:

  • Legacy PHP < 8.2
  • Heavy Apache module dependencies
  • Existing highly-tuned PHP-FPM setup

Bonus: Control Your Variables 🎵

While testing locally with Docker, I got inconsistent results:

  • Run 1: 3,778 r/s ✓
  • Run 2: 3,256 r/s 🤔
  • Run 3: 3,700 r/s 🤷

The culprit? Spotify playing in the background.

Moral: Performance testing needs consistent environments. Never benchmark while listening to metal 🤘


About the Author

Adrián Pastén Lagos is a Software Engineer at Apiumhub with 10+ years of PHP experience. He organizes Barcelona PHP Talks and volunteers with Software Crafters Barcelona.


This talk was presented at Barcelona PHP Talks #6. Join our community at php-barcelona.es

#PHP #FrankenPHP #Kubernetes #Performance #OpenTelemetry #KEDA #Symfony