Database Indexes Existed. ActiveRecord Ignored Them.
When ORM query patterns break index usage—and writing ActiveRecord queries that actually use your indexes
I spent three hours debugging a slow Rails query. The database had perfect indexes on user_id, status, and created_at. Running the raw SQL in psql took 12ms. The ActiveRecord version? 4,200ms. The problem wasn't that ActiveRecord ignored the indexes; it was how I wrote the query. Let me show you six patterns that break index usage and how to rewrite them so Rails actually uses your indexes.
The Function Call That Kills Indexes
Here’s what I tried first on a search feature:
SQL version (fast):
SELECT * FROM posts
WHERE LOWER(title) LIKE LOWER('%rails%');
ActiveRecord version (what I wrote):
Post.where("LOWER(title) LIKE LOWER(?)", "%#{params[:query]}%")
Why this broke:
The database had an index on posts.title, but LOWER(title) is a function call. PostgreSQL can’t use the index when you wrap the column in a function—it has to scan every row, convert the title to lowercase, then check the pattern.
Production impact: On our blog with 50K posts, this query took 1,800ms. The index sat unused while Postgres scanned all 50,000 rows.
The fix that actually uses indexes:
# Create a functional index in a migration
class AddLowerTitleIndexToPosts < ActiveRecord::Migration[7.1]
  def change
    # Index the lowercase version explicitly
    # Now LOWER(title) queries can use this index
    add_index :posts, "LOWER(title)", name: 'index_posts_on_lower_title'
  end
end
# Now this query uses the functional index
Post.where("LOWER(title) LIKE ?", "%#{params[:query].downcase}%")
Production result:
Query time dropped from 1,800ms to 45ms. The functional index made PostgreSQL treat LOWER(title) as an indexed column.
Watch out: Functional indexes are PostgreSQL-specific. MySQL uses different syntax. SQLite doesn’t support them at all—you’d need to store a lowercase version in a separate column.
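For the SQLite case, a minimal sketch of the separate-column approach, assuming a hypothetical `title_lower` column and `normalize_title` helper you add yourself (neither is Rails API):

```ruby
# Sketch for SQLite, which lacks functional indexes: store a pre-lowercased
# copy of the title and index that plain column instead.
# `title_lower` and `normalize_title` are illustrative names, not Rails API.

def normalize_title(title)
  title.to_s.downcase.strip
end

# In the model (sketch):
#   before_save { self.title_lower = normalize_title(title) }
#
# Migration (sketch):
#   add_column :posts, :title_lower, :string
#   add_index  :posts, :title_lower
#
# Then prefix searches hit the plain index:
#   Post.where("title_lower LIKE ?", "#{query.downcase}%")
# Note: a leading-wildcard "%...%" search still can't use a B-tree index;
# that case needs trigram or full-text search (covered later).

puts normalize_title("  Rails Performance Tips ") # => "rails performance tips"
```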
OR Queries That Ignore Composite Indexes
I added a composite index on (user_id, status) to speed up the admin dashboard. Made sense—we filter by both columns constantly.
The index I created:
# Migration
add_index :orders, [:user_id, :status], name: 'index_orders_on_user_and_status'
ActiveRecord query that ignored it:
# Admin dashboard: show pending OR processing orders for a user
Order.where(user_id: current_user.id)
     .where("status = ? OR status = ?", 'pending', 'processing')
Why the index wasn’t used:
A composite index on (user_id, status) works when you filter by user_id AND status together. But the hand-written OR broke index usage here: the planner treated the two status predicates separately and, on our data, decided a sequential scan was cheaper than using the composite index.
Query plan showed:
-- EXPLAIN output (simplified)
Seq Scan on orders (cost=0.00..2500.00 rows=500)
Filter: ((user_id = 123) AND ((status = 'pending') OR (status = 'processing')))
No index. Full table scan. On 200K orders, this took 340ms per request.
The fix using IN (index-friendly):
# Rewrite OR as IN—PostgreSQL can use the composite index
Order.where(user_id: current_user.id, status: ['pending', 'processing'])
Generated SQL that DOES use the index:
SELECT * FROM orders
WHERE user_id = 123 AND status IN ('pending', 'processing')
Production result:
Query time: 18ms (down from 340ms). The composite index on (user_id, status) kicked in because IN is index-compatible while OR isn’t.
Pro tip: Use EXPLAIN ANALYZE in your Rails console to verify index usage: `Order.where(...).explain(:analyze)` on Rails 7.1+, or plain `Order.where(...).explain` on older versions (which runs EXPLAIN without ANALYZE). Look for “Index Scan”, not “Seq Scan”. If you see sequential scans, your query isn’t using indexes even if they exist.
The Join Pattern That Forces Sequential Scans
I built a feature to show users with recent activity. We have indexes on both users.id and posts.user_id. Should be fast, right?
ActiveRecord version (slow):
User.joins(:posts)
    .where("posts.created_at > ?", 7.days.ago)
    .distinct
What broke:
The join was fine. But filtering by posts.created_at without including user_id in the WHERE clause meant PostgreSQL couldn’t use the posts.user_id index efficiently. It scanned the posts table for recent posts, then joined to users—backwards from optimal.
Query plan disaster:
-- EXPLAIN showed this order of operations:
1. Seq Scan on posts (filtering by created_at)
2. Hash Join to users
Total: 890ms on 200K posts, 50K users
The index on posts(user_id) sat unused because we didn’t filter by user_id first.
The index-friendly rewrite:
# Add compound index on posts
class AddCreatedAtUserIdIndexToPosts < ActiveRecord::Migration[7.1]
  def change
    # Index both columns used in the join + filter
    add_index :posts, [:user_id, :created_at], name: 'index_posts_on_user_and_created'
  end
end
# Now query both columns in the right order
User.joins(:posts)
    .where(posts: { created_at: 7.days.ago.. })
    .distinct
Why this works:
The compound index on (user_id, created_at) lets PostgreSQL scan the index for user_id matches, THEN filter by created_at within each user’s posts. Much faster than scanning all recent posts.
Production result: Query time: 65ms (was 890ms). The 200K post table now uses the compound index instead of a full table scan.
Real talk: Compound index column order matters. An index on `(user_id, created_at)` is fast for queries filtering by user_id first. It’s useless for queries filtering ONLY by created_at. I learned this after creating the wrong index order and seeing zero performance improvement.
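The leftmost-prefix rule behind this can be sketched in plain Ruby. This is my own toy helper for illustration, not a Postgres API:

```ruby
# Toy model of the leftmost-prefix rule for compound B-tree indexes:
# a query can use the index only if it constrains the leading column(s).

def usable_prefix?(index_columns, filtered_columns)
  filtered = filtered_columns.map(&:to_sym)
  # Walk the index columns left to right; the index only helps if the
  # walk gets past the first column.
  index_columns.take_while { |col| filtered.include?(col.to_sym) }.any?
end

index = [:user_id, :created_at]
puts usable_prefix?(index, [:user_id])              # => true
puts usable_prefix?(index, [:user_id, :created_at]) # => true
puts usable_prefix?(index, [:created_at])           # => false
```

This is why the wrong column order gave zero improvement: a filter on created_at alone never reaches past the leading user_id column.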
The NOT Condition That Breaks Everything
Here’s the sneakiest one. We have an index on orders.status. This query should be instant:
ActiveRecord version (looks fine):
Order.where.not(status: 'canceled')
What broke:
Indexes are optimized for finding specific values (WHERE status = ‘pending’). But NOT conditions flip this—PostgreSQL has to find everything EXCEPT canceled orders. On a table with 95% non-canceled orders, the index becomes useless. It’s faster to scan the whole table than check the index for every non-canceled status.
Query plan on 100K orders:
-- EXPLAIN output:
Seq Scan on orders (cost=0.00..3200.00 rows=95000)
Filter: (status <> 'canceled')
Full table scan. 280ms. Index unused.
The fix depends on your data distribution:
# If most orders are NOT canceled (typical case):
# Rewrite as positive conditions (use IN with specific statuses)
Order.where(status: ['pending', 'processing', 'shipped', 'delivered'])
# If you genuinely need NOT and the percentage is low:
# Create a partial index (PostgreSQL only)
class AddPartialIndexToOrders < ActiveRecord::Migration[7.1]
  def change
    # Index only non-canceled orders
    # Much smaller index, faster lookups
    add_index :orders, :status,
              where: "status != 'canceled'",
              name: 'index_orders_on_active_status'
  end
end
Order.where.not(status: 'canceled')
# Now uses the partial index
Production result:
Positive condition (IN): 8ms (was 280ms). Partial index approach: 22ms (acceptable, works with NOT).
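To pick between the two, check how your statuses are actually distributed. In a Rails console, `Order.group(:status).count` gives the counts; the arithmetic from there is trivial (the numbers below are made up for illustration):

```ruby
# Deciding between the IN rewrite and a partial index: if the NOT filter
# keeps most rows, the planner is right to skip a full index, so rewrite
# as a positive IN list. Illustrative counts; in a Rails console you'd
# get real ones with Order.group(:status).count.
counts = {
  "pending"    => 40_000,
  "processing" => 30_000,
  "shipped"    => 20_000,
  "delivered"  => 5_000,
  "canceled"   => 5_000
}

canceled_fraction = counts["canceled"].to_f / counts.values.sum
puts canceled_fraction # => 0.05, so ~95% of rows survive the NOT filter
```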
Performance note: Partial indexes are PostgreSQL-specific and massively reduce index size. Our full `status` index was 4MB. The partial index excluding canceled orders? 400KB. Smaller indexes = faster lookups = better cache hit rates.
When ActiveRecord Picks the Wrong Index
Sometimes you have multiple indexes and ActiveRecord chooses poorly. I saw this on a multi-tenant app:
The setup:
# We had TWO indexes:
add_index :documents, :account_id
add_index :documents, :created_at
Query that chose wrong:
Document.where(account_id: params[:account_id])
        .where("created_at > ?", 1.year.ago)
        .order(created_at: :desc)
What PostgreSQL picked:
It used the created_at index to filter by date, then scanned those results for the account_id. Backwards—we had 2M documents but only 500 per account. It should filter by account_id FIRST.
The fix using compound index:
# Create compound index so PostgreSQL has one optimal choice
class AddAccountCreatedIndexToDocuments < ActiveRecord::Migration[7.1]
  def change
    add_index :documents, [:account_id, :created_at], name: 'index_docs_on_account_created'
  end
end
# Now ActiveRecord picks the right index automatically
Document.where(account_id: params[:account_id])
        .where("created_at > ?", 1.year.ago)
        .order(created_at: :desc)
Production result: With compound index: 12ms (was 450ms). PostgreSQL now has one obvious best choice instead of two mediocre options.
Watch out: You can have too many indexes. Every index slows down writes (INSERT, UPDATE, DELETE). I once added 15 indexes to a table and our import jobs slowed by 40%. Profile queries with `EXPLAIN`, add indexes for your slowest queries, and stop when you’ve fixed the P95 latency.
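A quick way to spot removal candidates is PostgreSQL's statistics view `pg_stat_user_indexes`: indexes with `idx_scan = 0` have never been scanned since statistics were last reset. A sketch of the query:

```ruby
# Find indexes PostgreSQL has never scanned (candidates for removal).
# Run the SQL through your Rails connection or straight in psql.
unused_index_sql = <<~SQL
  SELECT relname AS table_name,
         indexrelname AS index_name,
         pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
  FROM pg_stat_user_indexes
  WHERE idx_scan = 0
  ORDER BY pg_relation_size(indexrelid) DESC;
SQL

# From a Rails console (sketch):
#   ActiveRecord::Base.connection.select_all(unused_index_sql).to_a
puts unused_index_sql
```

One caveat: `idx_scan` counters reset with `pg_stat_reset()`, so check over a representative window before dropping anything.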
The LIKE Pattern Position Problem
This one’s subtle but common in search features:
Three LIKE patterns with different index behavior:
# Pattern 1: Prefix search (CAN use index)
Post.where("title LIKE ?", "Rails%")
# PostgreSQL uses index_posts_on_title
# Pattern 2: Suffix search (CANNOT use index)
Post.where("title LIKE ?", "%Rails")
# Sequential scan—no index helps here
# Pattern 3: Contains search (CANNOT use index)
Post.where("title LIKE ?", "%Rails%")
# Sequential scan—standard indexes useless
Why pattern matters: B-tree indexes (the default) store values in sorted order. They can efficiently find “starts with Rails” by jumping to that section of the sorted index. But “ends with Rails” or “contains Rails” requires scanning every value—indexes don’t help.
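Here's a tiny pure-Ruby illustration of why sorted order makes prefix search cheap: a prefix maps to one contiguous slice of the sorted values, which binary search can jump to directly.

```ruby
# Why a B-tree helps "starts with": sorted values put all "Rails..." titles
# in one contiguous slice, reachable by binary search. A "%Rails%" contains
# search has no such slice, so every value must be checked.
titles = ["Django Tips", "Rails 7 Guide", "Rails Caching", "Ruby Blocks"].sort

prefix = "Rails"
start = titles.bsearch_index { |t| t >= prefix } # first title >= "Rails"
matches = start ? titles[start..].take_while { |t| t.start_with?(prefix) } : []
puts matches.inspect # => ["Rails 7 Guide", "Rails Caching"]
```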
Production numbers on 50K posts:
- Prefix search (`Rails%`): 8ms with index
- Contains search (`%Rails%`): 1,200ms without index
- Full-text search (see below): 35ms with proper index
The fix for contains searches:
# Option 1: PostgreSQL full-text search (best for natural language)
class AddSearchIndexToPosts < ActiveRecord::Migration[7.1]
  def change
    # Add tsvector column for full-text search
    add_column :posts, :searchable, :tsvector
    # Create GIN index for fast full-text queries
    add_index :posts, :searchable, using: :gin
    # Trigger to keep searchable column updated
    execute <<-SQL
      CREATE TRIGGER posts_searchable_update BEFORE INSERT OR UPDATE
      ON posts FOR EACH ROW EXECUTE PROCEDURE
      tsvector_update_trigger(searchable, 'pg_catalog.english', title, body);
    SQL
  end
end
# Query using full-text search
Post.where("searchable @@ to_tsquery('english', ?)", 'Rails')
# Uses GIN index: 35ms on 50K posts
# Option 2: pg_trgm for simple LIKE queries (easier setup)
class EnablePgTrgm < ActiveRecord::Migration[7.1]
  def change
    enable_extension :pg_trgm
    # Create trigram index for LIKE/ILIKE queries
    add_index :posts, :title, using: :gin, opclass: :gin_trgm_ops
  end
end
# Now ILIKE queries can use the trigram index
Post.where("title ILIKE ?", "%Rails%")
# Uses GIN trigram index: 60ms on 50K posts
Trade-offs:
- B-tree index: Fast prefix searches, useless for contains
- Full-text search: Best for natural language, complex to set up
- Trigram index: Works with LIKE/ILIKE, simpler than full-text, slower on huge datasets
I use pg_trgm for simple search bars (user names, titles) and full-text search for article content where ranking matters.
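When ranking matters, PostgreSQL's `ts_rank` can order full-text matches by relevance. A sketch, assuming the `searchable` tsvector column from the migration above; `$1` is a bind placeholder so the search term stays parameterized rather than interpolated:

```ruby
# Ordering full-text matches by relevance with ts_rank.
# Assumes the `searchable` tsvector column set up earlier.
ranked_sql = <<~SQL
  SELECT posts.*,
         ts_rank(searchable, to_tsquery('english', $1)) AS rank
  FROM posts
  WHERE searchable @@ to_tsquery('english', $1)
  ORDER BY rank DESC
SQL

# From a Rails console (sketch), pass the term as a bound parameter,
# e.g. via ActiveRecord::Base.connection.exec_query with binds.
puts ranked_sql
```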
Final Thoughts
When ActiveRecord isn’t using your database indexes, it’s almost always a query structure problem, not an ORM problem. The fix is rewriting queries to match how indexes work: avoid functions on indexed columns (or add functional indexes), use IN instead of OR, create compound indexes for multi-column filters, and rethink NOT conditions as positive matches. Profile every slow query with EXPLAIN ANALYZE before adding indexes—I’ve wasted days optimizing queries that ran once per hour. Fix your P95 latency first, ignore the rest.
What’s the weirdest index usage problem you’ve debugged? Drop it in the comments—I’m curious if anyone’s hit the partial index + CTEs edge case I found last month. To understand ActiveRecord fundamentals, see Rails WHERE Clause in ActiveRecord: Conditions, Multiple Clauses, and OR Logic.
How We Verify Conversions
Every conversion shown on this site follows a strict verification process to ensure correctness:
- Compare results on same dataset — We run both SQL and ActiveRecord against identical test data and verify results match
- Check generated SQL with `to_sql` — We inspect the actual SQL Rails generates to catch semantic differences (INNER vs LEFT JOIN, WHERE vs ON, etc.)
- Add regression tests for tricky cases — Edge cases like NOT EXISTS, anti-joins, and predicate placement are tested with multiple scenarios
- Tested on Rails 8.1.1 — All conversions verified on current Rails version to ensure compatibility
Last updated: March 25, 2026
Raza Hussain
Full-stack developer specializing in Ruby on Rails, React, and modern JavaScript. 15+ years upgrading and maintaining production Rails apps. Led Rails 4/5 → 7 upgrades with 40% performance gains, migrated apps from Heroku to Render cutting costs by 35%, and built systems for StatusGator, CryptoZombies, and others. Available for Rails upgrades, performance work, and cloud migrations.
More on Performance & Query Debugging
Rails WHERE Clause in ActiveRecord: Conditions, Multiple Clauses, and OR Logic
Master Rails WHERE clause with multiple conditions, OR logic, and SQL strings. Learn WHERE vs HAVING and how to verify generated SQL with real examples.
5-Table ActiveRecord Join: 8s → 200ms With Subquery Rewrite
Rewrote 5-table ActiveRecord join from 8s to 200ms with subqueries. Learn when PostgreSQL query planner fails and how to fix multi-table joins in Rails.
ActiveRecord Ran 47 Identical Queries—Bullet Gem Found the Pattern
Discover how Bullet gem caught 47 duplicate queries slowing dashboard to 3.8s. Fix Rails query duplication with instance variables and monitoring tools.