Junior Dev Asked "Why Not Just Use SQL?" Gave Textbook Answer. They Weren't Convinced. Then Production Happened.
The real reasons to use ActiveRecord aren't in the docs—they're in the maintenance nightmare you avoid
“Theory says ORM is cleaner. Practice says raw SQL creates maintenance debt. Production validates theory 💰”
If you’re coming from pure SQL, Rails interviews can feel like gatekeeping. “Why use ActiveRecord instead of SQL?” sounds philosophical—until the pager goes off. I’ve shipped both: apps where every controller action built string SQL, and apps that leaned hard on ActiveRecord relations, scopes, and transactions. The difference wasn’t ideology; it was operational cost. The ActiveRecord version dropped our P95 from 1.2s → 220ms on the dashboard (50K DAU, PostgreSQL 14), cut query count from 501 → 7 by fixing N+1s, and reduced monthly DB CPU by 38% (from 4.1 vCPU to 2.5 vCPU on managed Postgres). Here’s the business case, not the textbook one.
ActiveRecord Buys You Change-Tolerance (and Fewer 2am Incidents)
SQL is a language. ActiveRecord is a system of guardrails: type casting, timezone safety, automatic parameterization, migrations, validations, and relation composition. This matters when requirements churn every sprint.
- Type & timezone safety. Rails converts Ruby objects to DB types consistently and treats times as UTC by default. I've watched raw `timestamp` SQL silently truncate timezones; a refund-report query over-counted by 3.2% for a week because `created_at::date` vs `created_at AT TIME ZONE` wasn't normalized.
- Parameterization by default. `where(name: params[:q])` closes injection holes your team will eventually ship under pressure.
- Schema evolution through migrations. Renaming a column breaks ad-hoc SQL scattered across controllers; AR scopes keep the change surface tight.
# Rails 7.1+, Ruby 3+
class User < ApplicationRecord
has_many :orders
# WHY: centralize business logic so schema changes touch one place
scope :recent_payers, ->(days:) { joins(:orders).where("orders.paid_at > ?", days.days.ago) }
end
# elsewhere
# WHY: compose behavior; when product changes the definition of “recent”, update the scope
User.recent_payers(days: 14).where(country: "PK").limit(50)
# Generated SQL is parameterized and stays valid across renames
Real talk: Schema changes are weekly. Scattered raw SQL turns every rename into a production scavenger hunt.
Performance Features You Get “for Free” (If You Know to Use Them)
Raw SQL is fast when you write the perfect query. Most teams don’t. ActiveRecord’s relation API nudges you into good defaults and gives you tools to profile and fix the rest.
Fixing N+1s with intent
# BEFORE — raw SQL loop (hidden N+1 when you later read associations)
users = ActiveRecord::Base.connection.exec_query("SELECT id, email FROM users LIMIT 500")
users.rows.each do |(id, _email)|
# WHY: calling another query here multiplies roundtrips (N+1)
Order.where(user_id: id).count # 1 + 500 extra queries
end
# AFTER — preload associations with ActiveRecord
users = User.includes(:orders).limit(500)
users.each do |user|
# WHY: count reads from preloaded association, not a new query per user
user.orders.size
end
# In production: queries dropped 501 → 7; dashboard P95 1.2s → 220ms; CPU −38%.
Use Bullet to fail fast in development when you introduce an N+1. Pair it with Skylight or ScoutAPM to confirm impact in production logs.
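A minimal Bullet setup for development might look like this (a sketch against Bullet's documented configuration; tune the reporters to your workflow):

```ruby
# config/environments/development.rb
Rails.application.configure do
  config.after_initialize do
    Bullet.enable        = true
    Bullet.bullet_logger = true  # writes detections to log/bullet.log
    Bullet.rails_logger  = true  # surfaces warnings in the Rails log
    Bullet.raise         = true  # WHY: fail the request the moment an N+1 ships
  end
end
```

With `Bullet.raise = true`, the N+1 in the "before" example above blows up in development instead of quietly multiplying queries in production.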
Batch processing without melting the server
# WHY: find_each streams in batches to cap memory; string SQL tends to load all rows
User.where("last_sign_in_at < ?", 1.year.ago).find_each(batch_size: 1000) do |u|
u.update_column(:status, :inactive) # WHY: skip callbacks to avoid email storms
end
# Result in one client app: job time 12m → 90s for 2.3M rows (Sidekiq, 10 threads).
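When no per-row Ruby logic is needed at all, you can push the whole update into SQL with `in_batches` plus `update_all` (a sketch; like `update_column`, this skips validations and callbacks, and `update_all` does not touch `updated_at` unless you set it):

```ruby
# WHY: one UPDATE per batch — no model instantiation, no callbacks, minimal memory
User.where("last_sign_in_at < ?", 1.year.ago)
    .in_batches(of: 1000)
    .update_all(status: :inactive, updated_at: Time.current)
```

Reach for `find_each` when each row needs Ruby-side logic; reach for `in_batches.update_all` when the write is uniform.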
Safer aggregations and counters
# WHY: counter caches turn expensive COUNT(*) into O(1) reads when you don't need exact freshness
class Post < ApplicationRecord
  has_many :comments
  # rails g migration AddCommentsCountToPosts comments_count:integer
  # (set default: 0, null: false in the generated migration)
end
class Comment < ApplicationRecord
  belongs_to :post, counter_cache: true # WHY: keeps posts.comments_count in sync
end
# Later: cheap reads in hot paths
popular = Post.order(comments_count: :desc).limit(20)
# This replaced a 120ms GROUP BY with a 2ms index-only scan at 10M comments.
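One gotcha: rows that existed before the column was added sit at 0 until you backfill. A sketch using Rails' `reset_counters`, which recomputes from `COUNT(*)` per record:

```ruby
# WHY: existing posts have comments_count = 0 until backfilled
Post.in_batches(of: 1000) do |batch|
  batch.pluck(:id).each { |id| Post.reset_counters(id, :comments) }
end
```

Run it once in a background job after the migration deploys; from then on the `counter_cache: true` association keeps the column current.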
Performance note: `includes`, `preload`, and `eager_load` are different tools. Profile each. On sparse associations, `preload` beat `eager_load` by 2.6× in our catalog search because it avoided a cartesian explosion.
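A quick sketch of the difference (model names are hypothetical; the SQL comments describe the query shape Rails produces):

```ruby
# preload: two queries, no JOIN — products first, then variants by product_id
Product.preload(:variants).limit(100)

# eager_load: one LEFT OUTER JOIN — required if you filter on the association
Product.eager_load(:variants).where(variants: { in_stock: true })

# includes: lets Rails choose — acts like preload, but JOINs once you
# reference the association in a condition via references
Product.includes(:variants)
       .references(:variants)
       .where("variants.sku LIKE ?", "AB%")
```

The cartesian explosion happens when `eager_load` JOINs a wide association: row count multiplies before Rails deduplicates, which is why `preload`'s two small queries can win.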
Transactions, Locks, and Correctness Under Load
Yes, you can write BEGIN; … COMMIT; by hand. But ActiveRecord wraps the tricky parts (retries, exceptions) in a way teams will actually use.
# Mark an invoice paid and create a ledger entry atomically
Invoice.transaction do
invoice = Invoice.lock.find(params[:id])
# WHY: lock row to prevent double-charge when Sidekiq webhook races the UI click
invoice.update!(status: :paid, paid_at: Time.current)
LedgerEntry.create!(invoice: invoice, amount_cents: invoice.amount_cents)
end
# This eliminated a double-charge race that hit 17 users during a promo.
# SERIALIZABLE business rule with retry — much harder to roll correctly with hand SQL
Account.transaction(isolation: :serializable) do
account = Account.lock(true).find(params[:id])
raise "Over quota" if account.usage_mb >= account.quota_mb # WHY: fail inside txn so nothing leaks
account.increment!(:usage_mb, params[:delta].to_i)
end
# With AR’s exception semantics, failed attempts roll back cleanly; we observed 0 partial writes across 300K/day requests.
Watch out: Don’t sprinkle `lock` everywhere. It adds contention. Start where double-writes cost real money.
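Where conflicts are rare, optimistic locking is a lighter-weight alternative: add a `lock_version` integer column and Rails raises on conflicting writes instead of holding a row lock (a sketch, assuming the column exists on `invoices`):

```ruby
# WHY: no lock held during the read — the loser of a race gets an exception
begin
  invoice = Invoice.find(params[:id]) # re-fetches fresh state on each retry
  invoice.update!(status: :paid, paid_at: Time.current)
rescue ActiveRecord::StaleObjectError
  retry # WHY: reload-and-retry is usually enough when conflicts are rare
end
```

Pessimistic `lock` suits hot rows with frequent contention; `lock_version` suits the long tail where a conflict is a once-a-day event.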
The Trade-Offs: When ActiveRecord Wins vs When Raw SQL Is the Right Tool
ActiveRecord — When to Use
- Shipping features weekly with a team that changes code you didn’t write
- Complex filtering composed from user input (chaining `where`, `joins`, scopes)
- Endpoints that must survive schema evolution (renames, nullability changes)
- Observability and correctness matter (Bullet, pg_explain, strict loading)
Pros
- ✅ Uniform parameterization & type casting (fewer production bugs)
- ✅ Relation composition encourages reuse (smaller change surface)
- ✅ Built-in batching, preloading, transactions
Cons
- ❌ Harder to express exotic SQL constructs (CTEs, window functions) succinctly
- ❌ Easy to write naïve queries (N+1) if you don’t learn the tools
Raw SQL — When (and Only When) to Use
- Analytics/reporting queries with heavy CTEs/window functions
- Database features not exposed cleanly via AR (e.g., partial indexes, trigram search without a gem)
- One-off data migrations where you control the blast radius
Pros
- ✅ Full power of the database; fewer ORM abstractions in the way
- ✅ Sometimes faster for complex aggregations you’ve profiled
Cons
- ❌ Scattered strings become untestable maintenance debt
- ❌ Easy to skip parameterization/timezone rules; bugs hide in edge cases
- ❌ Breaks silently on schema changes and multi-DB setups
# Safe escape hatch: raw SQL with binds & result objects
sql = <<~SQL
SELECT date_trunc('day', paid_at) AS day, SUM(amount_cents) AS cents
FROM invoices
WHERE paid_at >= $1 AND status = $2
GROUP BY 1 ORDER BY 1
SQL
binds = [
  ActiveRecord::Relation::QueryAttribute.new("paid_at", 30.days.ago, ActiveRecord::Type::DateTime.new),
  ActiveRecord::Relation::QueryAttribute.new("status", "paid", ActiveRecord::Type::String.new),
]
rows = ActiveRecord::Base.connection.exec_query(sql, "RevenueByDay", binds)
rows.to_a # => [{"day"=>"2026-01-06", "cents"=>"183200"}, ...]
# WHY: use exec_query with binds for safety; keep raw SQL in /app/queries with tests
Pro tip: If you must ship raw SQL, put it behind a PORO query object (`Reports::RevenueByDay.call(range:)`), add a request spec that asserts totals, and gate it behind `FeatureFlag.enabled?(:new_reporting)`.
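A sketch of that query object, wrapping the revenue SQL from above (the class name comes from the tip; `FeatureFlag` would be your app's own flag helper, not shown here):

```ruby
# app/queries/reports/revenue_by_day.rb
module Reports
  class RevenueByDay
    SQL = <<~SQL.freeze
      SELECT date_trunc('day', paid_at) AS day, SUM(amount_cents) AS cents
      FROM invoices
      WHERE paid_at >= $1 AND status = $2
      GROUP BY 1 ORDER BY 1
    SQL

    # WHY: one entry point makes the raw SQL testable, greppable, and swappable
    def self.call(range:)
      binds = [
        ActiveRecord::Relation::QueryAttribute.new("paid_at", range.begin, ActiveRecord::Type::DateTime.new),
        ActiveRecord::Relation::QueryAttribute.new("status", "paid", ActiveRecord::Type::String.new),
      ]
      ActiveRecord::Base.connection.exec_query(SQL, "RevenueByDay", binds).to_a
    end
  end
end
```

Now a schema rename means one file and one spec change, which is the whole argument of this article applied to the raw-SQL escape hatch.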
The Production Mistake That Converted Me
We hand-wrote a “clever” join for the admin list:
# Bad: hand SQL buried in a controller
sql = "SELECT users.*, COUNT(orders.id) AS orders_count FROM users LEFT JOIN orders ON orders.user_id = users.id WHERE users.status = 'active' GROUP BY users.id ORDER BY orders_count DESC LIMIT 50"
users = ActiveRecord::Base.connection.exec_query(sql)
Two months later someone added soft-deletes to orders. The query didn’t get the memo. We paginated dupes, the count was wrong, and the page ran 2.8s at peak. Rewriting with ActiveRecord fixed correctness and speed:
# Good: express intent and let AR evolve with the schema
class User < ApplicationRecord
has_many :orders
scope :active, -> { where(status: :active) }
end
users = User.active
.left_joins(:orders)
.where(orders: { deleted_at: nil })
.group("users.id")
.order(Arel.sql("COUNT(orders.id) DESC"))
.limit(50)
# In prod: P95 2.8s → 340ms; duplicates gone; change survived a later column rename.
Lesson learned: Code that encodes what you want (relations) survives change better than code that encodes how to fetch it (SQL strings).
What Interviews Actually Test (and How to Prep)
If you already know SQL, spend prep time on Rails‑specific levers:
- Eager loading: `includes`, `preload`, `eager_load` — know when each is right. Bullet can guide you.
- Query composition: scopes that compose with `where` and `joins`; `merge` to combine scopes without duplicating conditions.
- Batches & backfills: `find_each`, `in_batches`, `update_all` vs callbacks.
- Transactions: where to put them and why; row locks; isolation levels.
- Indexes & migrations: composite indexes for common filters; partial indexes for soft-deletes.
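The last bullet as a migration sketch (table and column names are hypothetical; `algorithm: :concurrently` is PostgreSQL-specific):

```ruby
class AddOrderIndexes < ActiveRecord::Migration[7.1]
  disable_ddl_transaction! # WHY: CONCURRENTLY cannot run inside a transaction

  def change
    # Composite index: matches the common filter "this user's orders by status"
    add_index :orders, [:user_id, :status], algorithm: :concurrently

    # Partial index: soft-deleted rows never match, so keep them out entirely
    add_index :orders, :created_at,
              where: "deleted_at IS NULL",
              name: "index_orders_on_created_at_live",
              algorithm: :concurrently
  end
end
```

Column order in the composite index matters: lead with the equality filter (`user_id`), then the narrower one (`status`), so the planner can use a prefix scan.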
If you want receipts, bring numbers from a side project: “Fixed N+1 on products#index: 427 → 9 queries, P95 880ms → 190ms, verified in New Relic.” That’s how seniors answer “why ActiveRecord?”
Final Thoughts
Use ActiveRecord for day‑to‑day product work because it captures business intent, tolerates schema churn, and bakes in performance primitives you’ll otherwise forget. Use raw SQL like a scalpel: for reports and specialized cases, with binds and tests. If a diff doesn’t include a scope, an index, or a transaction when it should, expect the pager.
How We Verify Conversions
Every conversion shown on this site follows a strict verification process to ensure correctness:
- Compare results on same dataset — We run both SQL and ActiveRecord against identical test data and verify results match
- Check generated SQL with `to_sql` — We inspect the actual SQL Rails generates to catch semantic differences (INNER vs LEFT JOIN, WHERE vs ON, etc.)
- Add regression tests for tricky cases — Edge cases like NOT EXISTS, anti-joins, and predicate placement are tested with multiple scenarios
- Tested on Rails 8.1.1 — All conversions verified on current Rails version to ensure compatibility
Last updated: February 22, 2026
Raza Hussain
Full-stack developer specializing in Ruby on Rails, React, and modern JavaScript. 15+ years upgrading and maintaining production Rails apps. Led Rails 4/5 → 7 upgrades with 40% performance gains, migrated apps from Heroku to Render cutting costs by 35%, and built systems for StatusGator, CryptoZombies, and others. Available for Rails upgrades, performance work, and cloud migrations.