How to Manually Populate Exadata Smart Flash Cache

By default, Exadata Smart Flash Cache automatically caches data as it is read from disk, which works well for most applications. However, when an application is sensitive to the latency of initial disk reads, administrators can populate the cache manually so that critical data is already in flash before the workload runs.

Why Manual Population?

  • Avoids delays caused by initial disk reads.
  • Ensures critical tables, indexes, or partitions are cached ahead of time.
  • Useful in consolidated environments where cache space is shared across multiple databases.

Step 1: Check Cache Availability

Compare allocated cache with total effective cache size:

# Check allocated cache
$ dcli -g cell_group cellcli -e list metriccurrent where name="FC_BY_ALLOCATED"
# Check total cache size
$ dcli -g cell_group cellcli -e list flashcache attributes name, effectiveCacheSize

Step 2: Confirm Data is Cached

Run queries to check optimized vs unoptimized reads:

SELECT name, value
FROM v$statname n, v$mystat s
WHERE s.statistic# = n.statistic#
AND name IN ('physical read IO requests','physical read requests optimized')
ORDER BY name;

After repeating the workload, if 'physical read IO requests' equals 'physical read requests optimized', every read was optimized (served from flash) and the data is fully cached.
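
The check above can be sketched as a before/after snapshot around a warming scan. The table name t1 is a placeholder for whatever segment you are caching:

```sql
-- Snapshot the session counters, run the warming scan, then re-check.
SELECT n.name, s.value
FROM   v$statname n, v$mystat s
WHERE  s.statistic# = n.statistic#
AND    n.name IN ('physical read IO requests', 'physical read requests optimized');

SELECT /*+ FULL(t1) */ COUNT(*) FROM t1;   -- full scan to (re)read the segment

-- Re-run the statistics query: once both counters increase by the same
-- amount across a scan, all of that scan's reads were served from flash.
```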

Step 3: Use IORM flashCacheMin (When Cache is Full)

In consolidated environments, you can reserve cache space for a specific database:

# Check current allocation
$ dcli -g cell_group cellcli -e list metriccurrent where name="DB_FC_BY_ALLOCATED"
# Increase allocation for DBCDB1 to 2 TB
$ dcli -g cell_group cellcli -e "alter iormplan dbplan=((name=DBCDB1, flashCacheMin=2T))"

Note: Increasing allocation for one database reduces cache space for others.
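
After changing the plan, it is worth confirming what each cell is actually enforcing. A quick check, using the same cell_group as above:

```shell
# Show the active IORM plan (including dbplan directives) on every cell
$ dcli -g cell_group cellcli -e list iormplan detail
```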

Step 4: Use CELL_FLASH_CACHE KEEP

Starting with Exadata System Software 24.1.0 and Oracle Database 23ai, you can force caching of specific segments:

-- Create a new table with KEEP
CREATE TABLE t1 (c1 NUMBER, c2 VARCHAR2(200)) STORAGE (CELL_FLASH_CACHE KEEP);
-- Alter existing table to KEEP
ALTER TABLE t2 STORAGE (CELL_FLASH_CACHE KEEP);
-- Reset to default after caching
ALTER TABLE t1 STORAGE (CELL_FLASH_CACHE DEFAULT);
ALTER TABLE t2 STORAGE (CELL_FLASH_CACHE DEFAULT);

KEEP gives the segment caching priority over non-KEEP objects. Always reset to DEFAULT after the initial load so that KEEP segments do not starve other workloads of cache space.
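
Note that KEEP only marks the segment for priority caching; a scan is still needed to actually load it. A minimal sketch, reusing t1 and t2 from the statements above:

```sql
-- Confirm the storage setting took effect (CELL_FLASH_CACHE column)
SELECT table_name, cell_flash_cache
FROM   user_tables
WHERE  table_name IN ('T1', 'T2');

-- Read the segments once so the cache is actually populated
SELECT /*+ FULL(t1) */ COUNT(*) FROM t1;
SELECT /*+ FULL(t2) */ COUNT(*) FROM t2;
```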

Step 5: Identify Objects for Manual Population

  • Use the AWR report section "Segments by UnOptimized Reads" to find segments with high disk (unoptimized) read counts.
  • Compare physical vs optimized reads per segment:

SELECT owner, object_name, statistic_name, value
FROM v$segment_statistics
WHERE statistic_name IN ('physical reads', 'optimized physical reads')
ORDER BY value DESC;

(v$segstat exposes the same statistics keyed by object number.)

Step 6: Other Considerations

  • Small Segments: Reads of small segments may go through the database buffer cache rather than flash. Use parallel queries to force direct path reads.
  • Indexes: Use INDEX FAST FULL SCAN to populate index blocks into cache.
  • Multiple Scans: May be required to fully populate both tables and indexes.
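
The table and index bullets above can be combined into a pair of warming queries. Here t1 and column c1 come from the earlier example; the index name i1 is hypothetical:

```sql
-- Parallel full scan: parallel queries use direct path reads, which
-- bypass the buffer cache and populate the flash cache instead.
SELECT /*+ FULL(t1) PARALLEL(t1, 4) */ COUNT(*) FROM t1;

-- Index fast full scan to populate the index's blocks into the cache
-- (assumes an index i1 exists on t1(c1))
SELECT /*+ INDEX_FFS(t1 i1) */ COUNT(c1) FROM t1;
```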

Best Practices

  • Use flashCacheMin for guaranteed space allocation in multi-database environments.
  • Use CELL_FLASH_CACHE KEEP carefully—reset to DEFAULT after caching.
  • Monitor cache usage with:
CellCLI> LIST FLASHCACHECONTENT
  • Always verify caching with session statistics before running critical workloads.
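
Since LIST FLASHCACHECONTENT can return a very large result, it helps to filter by object. One way, assuming you have looked up the segment's DATA_OBJECT_ID in DBA_OBJECTS (12345 is a placeholder):

```shell
# Cached size and hit statistics for one object across all cells
$ dcli -g cell_group cellcli -e "list flashcachecontent where objectNumber=12345 attributes dbUniqueName, objectNumber, cachedSize, hitCount, missCount"
```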

Conclusion

Manual population of Exadata Smart Flash Cache ensures critical data is cached before workloads run, reducing latency and improving performance. By combining cache monitoring, IORM allocation, and segment-level KEEP options, administrators can fine-tune cache usage for both OLTP and analytical workloads.
