DBA Learning Rails: Everything Felt Wrong. SQL Was Better. Then Maintenance Hell Changed My Mind.

Why database admins hate ActiveRecord initially—until they see Rails apps scale and maintain themselves

Raza Hussain · 8 min read

“Raw SQL is faster, cleaner, more controllable.” I said that for years—until a week of schema changes forced me to touch 200 hand‑rolled queries and blew two sprints. That was the day ActiveRecord won.

What rubbed my DBA brain the wrong way

Coming from years of PostgreSQL, the Rails ORM looked like training wheels. I wanted EXPLAIN, ANALYZE, window functions, WITH clauses, and careful type casting. A method chain like User.active.order("last_login DESC") felt like hiding the important parts: join order, predicates, index usage.

Then we hit production scale—50K monthly active users—and the cost of “control at every call site” showed up on the invoice:

  • Maintenance time per release went from ~45 minutes to 6+ hours any time a column moved or a predicate changed.
  • N+1 queries crept into pages I hadn’t reviewed: 42 queries for one dashboard; after fixing with eager loading it was 3.
  • p95 response on our heaviest endpoint dropped from 480ms to 170ms when we replaced hand-written SQL fragments with scopes + includes and let the ORM batch work predictably.

Those numbers weren’t about raw speed. They were about consistency, safety, and refactorability.

The day maintenance beat raw SQL

We renamed orders.status to orders.state and normalized it to state_id (lookup table). I had sprinkled SQL like this across services and jobs:

# BEFORE: Hand-written SQL scattered in controllers & workers
# Why it's a problem: every string hard-codes column names, so schema changes fan out to many files.
Order.find_by_sql(["SELECT id FROM orders WHERE status IN (?) AND created_at > ? ORDER BY created_at DESC LIMIT 100", %w[pending paid], 30.days.ago])

That week I touched 200+ occurrences across Sidekiq workers, mailers, and analytics scripts. We shipped late and introduced two subtle bugs (wrong timezone cast; missed paid_at backfill) because strings can’t help you refactor.

Here’s the version we kept:

# AFTER: Intent modeled with scopes; refactors are localized
# Why it works: renaming columns or changing predicates happens in one place.
class Order < ApplicationRecord
  belongs_to :state  # the lookup-table association the joins below depend on

  scope :recent, -> { where("orders.created_at > ?", 30.days.ago) }
  scope :in_states, ->(states) { joins(:state).where(states: { name: states }) }
end

Order.in_states(%w[pending paid]).recent.order(created_at: :desc).limit(100)

When we later added archived_at and changed “recent” to mean “30 days or not archived,” the diff was a single scope edit. Maintenance time per release dropped from ~6 hours to 45 minutes on average.
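The whole diff, roughly (a sketch from memory; the OR semantics match how we redefined "recent" afterward):

# The only line that changed:
scope :recent, -> { where("orders.created_at > ? OR orders.archived_at IS NULL", 30.days.ago) }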

Re‑mapping DBA instincts to Rails: patterns that work

You don’t lose your SQL tools—you just drive them through Rails constructs that keep code consistent.

1) N+1: teach the ORM to batch

# WHY: eager loading eliminates the per-row fetches; p95 latency falls under load.
orders = Order.includes(:customer, :line_items) # use .preload to force separate queries when you never filter on the association

# Render
orders.each do |order|
  puts [order.id, order.customer.name, order.line_items.size].join(", ")
end

Before: 1 query for orders plus 2 per row (customer, line_items), about 42 queries for the dashboard. After: 3 queries total (orders, customers, line_items). Dashboard p95 improved from 480ms to 170ms with 50K MAUs behind Pagy pagination.

Use the Bullet gem in development to surface N+1s early, and Rack::MiniProfiler to watch query counts on hot paths. PgHero is great in production for index usage and slow query trends.
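If you haven't wired Bullet up before, enabling it is a few lines in development (a minimal sketch; pick the notifiers you like):

# config/environments/development.rb, inside Rails.application.configure
config.after_initialize do
  Bullet.enable        = true
  Bullet.bullet_logger = true # writes offenders to log/bullet.log
  Bullet.add_footer    = true # lists N+1s at the bottom of each rendered page
end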

2) Keep SQL superpowers with Arel when intent gets gnarly

ActiveRecord handles 90% of predicates; the next 9% belong to Arel. Keep the relation composable and testable.

# WHY: complex condition stays typed and composable; avoids brittle string interpolation
orders = Order.arel_table
recent   = orders[:created_at].gt(30.days.ago)
revenue  = orders[:total_cents].gteq(500_00)
coalesced_state = Arel::Nodes::NamedFunction.new(
  "COALESCE", [orders[:state_id], Arel::Nodes.build_quoted(0)]
)

q = Order.where(recent).where(revenue).where(coalesced_state.not_eq(0))
q = q.order(orders[:created_at].desc).limit(100)

q.to_a # executes as needed

Expected SQL (simplified):

SELECT "orders".*
FROM   "orders"
WHERE  "orders"."created_at" > ?
  AND  "orders"."total_cents" >= 50000
  AND  COALESCE("orders"."state_id", 0) <> 0
ORDER BY "orders"."created_at" DESC
LIMIT 100;

3) Still need CTEs and window functions? Use targeted raw SQL at the boundary

Keep raw SQL encapsulated with POROs or read models so the blast radius is tiny during refactors.

# WHY: window functions remain expressible while callers stay on a Relation-like API
TopCustomers = Struct.new(:limit) do
  def call
    sql = <<~SQL
      WITH ranked AS (
        SELECT c.id, c.name,
               SUM(o.total_cents) AS revenue,
               RANK() OVER (ORDER BY SUM(o.total_cents) DESC) AS rnk
        FROM customers c
        JOIN orders o ON o.customer_id = c.id
        WHERE o.created_at > NOW() - INTERVAL '30 days'
        GROUP BY c.id
      )
      SELECT * FROM ranked WHERE rnk <= ?
    SQL
    Customer.find_by_sql([sql, limit])
  end
end

TopCustomers.new(10).call

Trade‑off: you own the SQL. Keep it in one place; add tests around expected columns and shapes.
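A shape test can be as small as this (Minitest sketch; it assumes your fixtures create customers with recent orders):

class TopCustomersTest < ActiveSupport::TestCase
  test "returns at most the requested rows with revenue and rank columns" do
    rows = TopCustomers.new(5).call
    assert_operator rows.size, :<=, 5
    if rows.any?
      # extra SELECT columns from find_by_sql land in the attributes hash
      assert_includes rows.first.attributes.keys, "revenue"
      assert_includes rows.first.attributes.keys, "rnk"
    end
  end
end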

4) Parameterization and safety are defaults, not chores

# WHY: binds prevent injection and keep plans reusable; Brakeman will complain if you interpolate strings
unsafe = "name LIKE '%#{params[:q]}%'"           # DON'T
safe   = ["name ILIKE ?", "%#{params[:q]}%"]    # DO

User.where(safe).limit(20)

With binds, PostgreSQL kept reusing a single cached plan; paired with the right index for the pattern (a pg_trgm index for %substring% searches, since a btree on LOWER(name) only helps equality and anchored prefixes), search stayed under 40ms p95.
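One related guardrail: % and _ inside user input are LIKE wildcards, and binds alone don't neutralize them. sanitize_sql_like escapes them so they match literally:

q = User.sanitize_sql_like(params[:q].to_s) # "50%" becomes "50\%"
User.where("name ILIKE ?", "%#{q}%").limit(20)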

5) Use to_sql and explain to keep your DBA eyes

# WHY: you can still inspect plans and regressions just like psql
relation = Order.where(created_at: 7.days.ago..Time.current).order(created_at: :desc).limit(50)
puts relation.to_sql      # log the exact SQL during review
puts relation.explain     # shows EXPLAIN plan so you can spot seq scans

Typical output includes the exact SQL and a plan tree; pair this with PgHero to alert on sequential scans after deploys.
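On Rails 7.1+, explain also forwards options to the database, so you can see actual runtimes instead of estimates:

relation.explain(:analyze)           # EXPLAIN ANALYZE
relation.explain(:analyze, :buffers) # EXPLAIN (ANALYZE, BUFFERS) on PostgreSQL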

When to use ActiveRecord vs Arel vs Raw SQL

ActiveRecord (default): everyday filters, joins, associations, pagination with Pagy, and updates. You get refactorability, testability, and guardrails from tools like StandardRB and Brakeman.

Arel (power moves): conditional expressions, grouping with HAVING, safe SQL fragments that need type awareness. Use when method chains become unreadable but you still want a Relation.

Raw SQL (surgical): CTEs, window functions, vendor features (e.g., UNLOGGED tables, ON CONFLICT upserts beyond what insert_all covers). Keep these in one class per use‑case; return plain Ruby objects or readonly AR models.

Rule of thumb: 90% ActiveRecord, 9% Arel, 1% raw SQL. The more callers a query has, the higher the value of modeling it as a scope.
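One example of where the raw tier keeps shrinking: bulk upserts. upsert_all compiles to INSERT ... ON CONFLICT on PostgreSQL, so only exotic conflict clauses still need handwritten SQL (the column names and index name here are assumptions):

Order.upsert_all(
  [
    { number: "A-1001", state_id: 2 },
    { number: "A-1002", state_id: 1 }
  ],
  unique_by: :index_orders_on_number # must reference a unique index
)

Like any bulk write, it skips validations and callbacks, so keep it for data you already trust.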

Trade‑offs in plain numbers

  • Development speed: ActiveRecord + scopes saved us ~5.5 hours per release during schema changes.
  • Runtime: Eager loading and scoped predicates reduced DB calls from 42 → 3 on a key page and p95 from 480ms → 170ms. Window‑function reports stayed raw SQL with no ORM overhead where it mattered.
  • Cognitive load: onboarding a new engineer dropped from 3 days of “where is this SQL?” to 1 day with discoverable scopes and tests.

When not to lean on ActiveRecord:

  • Cross‑database queries or analytics pipelines where SQL shape matters more than domain modeling.
  • Heavy reporting with nested CTEs and custom aggregates—better served by dedicated views/materialized views or a reporting service (see the read-model sketch after this list).
  • Queries that must match a DBA‑audited SQL contract (e.g., vendor certification) where generated SQL variability is a risk.
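For that middle case, a readonly model over a materialized view keeps reports queryable from Ruby without pretending they're domain objects (a sketch; the view name and refresh policy are assumptions):

class RevenueByMonth < ApplicationRecord
  self.table_name  = "revenue_by_month_mv"
  self.primary_key = "month" # views have no real PK; pick a stable, unique column

  def readonly?
    true
  end

  def self.refresh!
    # CONCURRENTLY needs a unique index on the view
    connection.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY revenue_by_month_mv")
  end
end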

A production mistake I won’t repeat

I once replaced this:

# BEFORE: pulls every matching id into a huge Ruby array; explodes memory on large tables
ids = LineItem.where(kind: "addon").pluck(:order_id) # returns hundreds of thousands of ids
Order.where(id: ids).update_all(has_addons: true)

With this:

# AFTER: lets the database do the set logic; memory stays flat and it's a single statement
Order.joins(:line_items)
     .where(line_items: { kind: "addon" })
     .update_all(has_addons: true)

The first version spiked a worker to 1.4 GB RSS and throttled the node; the second stayed under 480 MB and finished 62% faster because the join happened server‑side.
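If one set-based UPDATE would hold locks longer than you can tolerate, the same relation batches cleanly (a sketch; tune the batch size to your write load):

Order.joins(:line_items)
     .where(line_items: { kind: "addon" })
     .in_batches(of: 10_000)
     .update_all(has_addons: true)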

Migrations and indexes stay first‑class

Rails 7+ migrations make the right thing easy—and documented in VCS.

# WHY: a functional index matches LOWER(name) lookups, so case-insensitive search avoids seq scans
class AddLowerNameIndexToUsers < ActiveRecord::Migration[7.1]
  def change
    add_index :users, "LOWER(name)", name: :index_users_on_lower_name
  end
end

Pair that with where("LOWER(name) LIKE ?", "#{q.downcase}%") and anchored prefix searches can use the index (declare it with varchar_pattern_ops if your collation isn't C). Unanchored %substring% searches need a pg_trgm GIN index instead; no btree helps there. That combination is what kept our search under 40ms p95.
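And when users is already big in production, build the index without blocking writes (PostgreSQL-specific sketch):

class AddLowerNameIndexToUsersConcurrently < ActiveRecord::Migration[7.1]
  disable_ddl_transaction! # CREATE INDEX CONCURRENTLY cannot run inside a transaction

  def change
    add_index :users, "LOWER(name)",
              name: :index_users_on_lower_name,
              algorithm: :concurrently
  end
end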

Final Thoughts

ActiveRecord isn’t about hiding SQL. It’s about putting intent where refactors can find it. As a DBA, you still get EXPLAIN, you still get window functions (in the 1% of places that need them), and you get an application that survives schema change without spending a day chasing strings.

If you’re moving from DBA to Rails engineer, start with:

  1. Convert one raw query per feature into a scope.
  2. Add eager loading and verify query counts with Bullet.
  3. Keep a small, tested read‑model for the one report that truly needs raw SQL.

It felt wrong at first. Then the first schema change paid off the debt in a single release—and we never looked back.

Raza Hussain

Full-stack developer specializing in Ruby on Rails, React, and modern JavaScript. 15+ years upgrading and maintaining production Rails apps. Led Rails 4/5 → 7 upgrades with 40% performance gains, migrated apps from Heroku to Render cutting costs by 35%, and built systems for StatusGator, CryptoZombies, and others. Available for Rails upgrades, performance work, and cloud migrations.
