Ceph Minimal Resource ceph.conf

The content below should be added to your ceph.conf file to reduce Ceph's resource footprint on low-powered machines.

The file may need to be tweaked and tested, as with any configuration, but pay particular attention to osd journal size. As with many data storage systems, Ceph keeps a journal of content that is waiting to be committed to ‘proper’ storage. The osd journal size setting is the maximum amount of data that can be held in the journal.

It should be calculated as follows:

2 * (T * filestore max sync interval)

T in this scenario is the lowest maximum throughput that is expected through the network or on the disk. For example, a standard mechanical hard disk writes at roughly 100MB/s, while a 1Gbps network has a maximum throughput of 125MB/s, so T is 100. The filestore max sync interval parameter defaults to 5 (seconds).

Therefore, 2 * (100 * 5) = 1000, giving an osd journal size of 1000MB.
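
If you want to plug your own numbers into the formula, the arithmetic is simple to script. The short Python sketch below (not part of the original config) just restates the calculation above using the example figures from this post; substitute the throughput of your own disks and network.

  # Journal size calculation from above, using the example figures in this
  # post; swap in the throughput of your own disks and network.
  disk_write_mb_s = 100            # typical mechanical disk, ~100MB/s
  network_mb_s = 125               # 1Gbps link, ~125MB/s
  filestore_max_sync_interval = 5  # Ceph default, in seconds

  # T is the lowest maximum throughput of the disk or the network.
  T = min(disk_write_mb_s, network_mb_s)

  # osd journal size is specified in megabytes.
  osd_journal_size = 2 * (T * filestore_max_sync_interval)
  print(osd_journal_size)          # 1000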

[global]
  # Disable in-memory logs
  debug_lockdep = 0/0
  debug_context = 0/0
  debug_crush = 0/0
  debug_buffer = 0/0
  debug_timer = 0/0
  debug_filer = 0/0
  debug_objecter = 0/0
  debug_rados = 0/0
  debug_rbd = 0/0
  debug_journaler = 0/0
  debug_objectcacher = 0/0
  debug_client = 0/0
  debug_osd = 0/0
  debug_optracker = 0/0
  debug_objclass = 0/0
  debug_filestore = 0/0
  debug_journal = 0/0
  debug_ms = 0/0
  debug_monc = 0/0
  debug_tp = 0/0
  debug_auth = 0/0
  debug_finisher = 0/0
  debug_heartbeatmap = 0/0
  debug_perfcounter = 0/0
  debug_asok = 0/0
  debug_throttle = 0/0
  debug_mon = 0/0
  debug_paxos = 0/0
  debug_rgw = 0/0
  osd heartbeat grace = 8

[mon]
  mon compact on start = true
  mon osd down out subtree limit = host

[osd]
  # Filesystem Optimizations
  osd mkfs type = btrfs
  osd journal size = 512

  # Performance tuning
  max open files = 327680
  osd op threads = 2
  filestore op threads = 2
  
  # Capacity Tuning
  osd backfill full ratio = 0.95
  mon osd nearfull ratio = 0.90
  mon osd full ratio = 0.95

  # Recovery tuning
  osd recovery max active = 1
  osd recovery max single start = 1
  osd max backfills = 1
  osd recovery op priority = 1

  # Optimize Filestore Merge and Split
  filestore merge threshold = 40
  filestore split multiple = 8
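
Once the settings are in place, it can be useful to confirm which values Ceph actually picks up. The Python sketch below is an illustration only; it assumes the librados Python binding (python3-rados) is installed and that ceph.conf lives in the default location, and it simply parses the file and prints a couple of the [global] options. Client handles only read the [global] and [client] sections, so daemon-only settings such as those under [osd] are better checked on the OSD itself, for example via its admin socket.

  # Illustrative sketch: parse the local ceph.conf with the librados Python
  # binding and print a few of the [global] values set above. The conffile
  # path is an assumption; adjust it for your installation.
  import rados

  # Parsing the file does not require connecting to, or authenticating with,
  # the cluster, so this works on any node with the binding installed.
  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')

  for option in ('debug_ms', 'osd_heartbeat_grace'):
      print(option, '=', cluster.conf_get(option))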

With thanks to Bryan Apperson for the config.


2 Comments

Matt

28-Apr-2016 at 11:07 pm

I appreciate the ceph articles. Recently took the ceph plunge for my home server. Putting the journals on an SSD keeps the throughput high and allows me to use cheap spinners.

Would never go back to lvm, mergefs, zfs, raid, etc……

    james.coyle

    29-Apr-2016 at 7:44 am

    It’s the future, Matt!
