
Optimizing MongoDB Indexes for Performance

By Admin · Mar 15, 2026 · Updated Apr 23, 2026

Proper indexing is the single most impactful optimization for MongoDB query performance. Without appropriate indexes, MongoDB performs collection scans — reading every document to find matches. This guide covers index types, creation strategies, analysis tools, and optimization techniques for production MongoDB deployments.

Index Fundamentals

MongoDB uses B-tree indexes (similar to relational databases). Every collection has a default _id index. Additional indexes should be created based on your query patterns.

Index Types

Single Field Index

// Index on a single field
db.orders.createIndex({ customer_id: 1 })     // Ascending
db.orders.createIndex({ created_at: -1 })     // Descending
db.users.createIndex({ email: 1 }, { unique: true })  // Unique index

Compound Index

// Multi-field index (order matters!)
db.orders.createIndex({ customer_id: 1, status: 1, created_at: -1 })

// This index supports queries on:
// { customer_id: ... }                    ✓ (prefix)
// { customer_id: ..., status: ... }       ✓ (prefix)
// { customer_id: ..., status: ..., created_at: ... }  ✓ (full)
// { status: ... }                         ✗ (not a prefix)
// { created_at: ... }                     ✗ (not a prefix)
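The prefix rule above is mechanical enough to sketch as a small helper. This is plain JavaScript, not mongosh, and the function name `supportsQuery` is illustrative — the real query planner is far more nuanced (it can, for example, still use a leading prefix of the index even when the full query is not supported):

```javascript
// Sketch of the compound-index prefix rule: a query is fully supported
// only if its fields form a contiguous prefix of the index key order.
function supportsQuery(indexFields, queryFields) {
    const wanted = new Set(queryFields);
    // Count how many leading index fields the query covers.
    let covered = 0;
    for (const field of indexFields) {
        if (wanted.has(field)) covered++;
        else break;
    }
    // Supported only when every queried field sits inside that prefix.
    return covered === wanted.size;
}

const idx = ["customer_id", "status", "created_at"];
console.log(supportsQuery(idx, ["customer_id"]));            // true  (prefix)
console.log(supportsQuery(idx, ["customer_id", "status"]));  // true  (prefix)
console.log(supportsQuery(idx, ["status"]));                 // false (not a prefix)
```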

Multikey Index (Arrays)

// Automatically created when indexing array fields
db.products.createIndex({ tags: 1 })
// Supports: db.products.find({ tags: "electronics" })

Text Index

// Full-text search index
db.articles.createIndex({ title: "text", body: "text" }, {
    weights: { title: 10, body: 1 },
    default_language: "english"
})
// Usage: db.articles.find({ $text: { $search: "mongodb performance" } })

Wildcard Index

// Index all fields in a subdocument (for flexible schemas)
db.events.createIndex({ "metadata.$**": 1 })
// Supports queries on any field within metadata

Partial Index

// Index only documents matching a filter (saves space and write overhead)
db.orders.createIndex(
    { created_at: -1 },
    { partialFilterExpression: { status: { $in: ["pending", "processing"] } } }
)
// Only indexes non-completed orders — much smaller than a full index

TTL Index

// Automatic document expiration
db.sessions.createIndex({ expires_at: 1 }, { expireAfterSeconds: 0 })
db.logs.createIndex({ created_at: 1 }, { expireAfterSeconds: 2592000 }) // 30 days
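The 30-day constant above is just seconds arithmetic; deriving it in code avoids magic numbers (plain JavaScript sketch, variable names are illustrative):

```javascript
// Derive expireAfterSeconds values from a day count instead of
// hard-coding magic numbers like 2592000.
const SECONDS_PER_DAY = 24 * 60 * 60;     // 86400
const thirtyDays = 30 * SECONDS_PER_DAY;  // matches the 2592000 used above
console.log(thirtyDays); // 2592000
```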

Analyzing Query Performance

Using explain()

// Basic explain
db.orders.find({ customer_id: 42, status: "pending" }).explain("executionStats")

// Key fields to examine:
// executionStats.totalDocsExamined — documents scanned
// executionStats.totalKeysExamined — index entries scanned
// executionStats.nReturned — documents returned
// executionStats.executionTimeMillis — query time

// Ideal: totalDocsExamined ≈ nReturned (minimal wasted reads)

// winningPlan.stage should be:
// "IXSCAN" — using an index (good)
// "COLLSCAN" — full collection scan (bad, needs index)
// "FETCH" — retrieving documents after index scan (normal)
// "PROJECTION_COVERED" — answered entirely from the index (covered query, optimal)
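As a back-of-the-envelope triage, the executionStats numbers above can be reduced to a scan-efficiency ratio. This is a plain JavaScript sketch — `classifyPlan` and its thresholds are illustrative assumptions, not a MongoDB API:

```javascript
// Rough triage of executionStats: how many documents were scanned
// per document actually returned? A ratio close to 1 is ideal.
function classifyPlan(stats) {
    const { totalDocsExamined, nReturned } = stats;
    if (nReturned === 0) return totalDocsExamined === 0 ? "efficient" : "wasteful";
    const ratio = totalDocsExamined / nReturned;
    if (ratio <= 1) return "efficient";    // covered query or exact index hit
    if (ratio <= 10) return "acceptable";  // some filtering after the index scan
    return "wasteful";                     // likely a missing or mis-ordered index
}

console.log(classifyPlan({ totalDocsExamined: 50, nReturned: 50 }));   // "efficient"
console.log(classifyPlan({ totalDocsExamined: 5000, nReturned: 50 })); // "wasteful"
```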

Covered Queries

// A covered query returns only indexed fields (no document fetch needed)
db.orders.createIndex({ customer_id: 1, total: 1, status: 1 })

// This query is covered (projection only includes indexed fields):
db.orders.find(
    { customer_id: 42 },
    { _id: 0, total: 1, status: 1 }  // Must exclude _id
).explain("executionStats")
// stage should be "PROJECTION_COVERED" — fastest possible

The ESR Rule for Compound Indexes

When designing compound indexes, follow the Equality-Sort-Range order:

  1. Equality — fields tested with exact match ({ field: value })
  2. Sort — fields used in .sort()
  3. Range — fields tested with range operators ($gt, $lt, $in)

// Query: find active orders for a customer, sorted by date, in a price range
db.orders.find({
    customer_id: 42,          // Equality
    status: "active",         // Equality
    total: { $gte: 100 }      // Range
}).sort({ created_at: -1 })   // Sort

// Optimal index following ESR:
db.orders.createIndex({
    customer_id: 1,    // E - Equality
    status: 1,         // E - Equality
    created_at: -1,    // S - Sort
    total: 1           // R - Range
})
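The ESR ordering itself can be sketched as a helper that takes fields already classified by predicate type and emits the index key order. Plain JavaScript; `esrIndexOrder` and its input shape are assumptions for illustration:

```javascript
// Build a compound-index field order following Equality-Sort-Range.
function esrIndexOrder({ equality = [], sort = [], range = [] }) {
    // Equality fields first, then sort fields, then range fields.
    return [...equality, ...sort, ...range];
}

const order = esrIndexOrder({
    equality: ["customer_id", "status"],
    sort: ["created_at"],
    range: ["total"]
});
console.log(order); // ["customer_id", "status", "created_at", "total"]
```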

Finding Missing Indexes

// Enable profiler to capture slow queries
db.setProfilingLevel(1, { slowms: 100 })

// Find queries doing collection scans
db.system.profile.find({
    "planSummary": "COLLSCAN",
    "millis": { $gt: 100 }
}).sort({ ts: -1 }).limit(20)

// Analyze index usage
db.orders.aggregate([{ $indexStats: {} }])
// Shows how often each index is used — drop unused ones

Removing Unused Indexes

// Check index usage statistics
db.orders.aggregate([{ $indexStats: {} }]).forEach(idx => {
    print(idx.name + ": " + idx.accesses.ops + " accesses since " + idx.accesses.since)
})

// Indexes with 0 accesses over weeks/months are candidates for removal
db.orders.dropIndex("unused_index_name")

// Hide an index first to test impact (MongoDB 4.4+)
db.orders.hideIndex("suspect_index_name")
// Monitor for degraded queries, then drop or unhide

Index Management Best Practices

// Since MongoDB 4.2, index builds no longer block reads and writes for their
// duration (the old background: true option is deprecated and ignored)
db.large_collection.createIndex({ field: 1 })

// Check index build progress
db.currentOp({ "command.createIndexes": { $exists: true } })

// Get index sizes
db.orders.stats().indexSizes

// Total index size for all collections
db.getCollectionNames().forEach(c => {
    var stats = db[c].stats();
    print(c + " - indexes: " + (stats.totalIndexSize / 1024 / 1024).toFixed(2) + " MB");
})

Best Practices Summary

  • Follow the ESR (Equality-Sort-Range) rule for compound index field ordering
  • Create indexes based on your actual query patterns, not on fields that seem important
  • Use explain("executionStats") to verify index usage for critical queries
  • Remove unused indexes — they consume memory, disk, and slow down writes
  • Use partial indexes for queries that only access a subset of documents (e.g., active orders)
  • Ensure indexes fit in RAM — check with db.collection.stats().totalIndexSize
  • Use hideIndex() to test removal impact before permanently dropping indexes
  • Limit indexes to 5-7 per collection to avoid excessive write overhead
  • Use covered queries when possible for maximum read performance
