A New Version Of Ceph Has Been Released, Ceph Jewel

The latest version of Ceph, codenamed Jewel (version number 10.2.0), has been released. Ceph Jewel is a long term support (LTS) release and will be retired in November 2017.

The Jewel release marks the first stable release of CephFS. Whilst some have been running CephFS for some time, this is the first release in which it is officially marked as stable and production ready.

Various other improvements have been made in the Jewel release:

  • CephFS:
    • This is the first release in which CephFS is declared stable! Several features are disabled by default, including snapshots and multiple active MDS servers.
    • The repair and disaster recovery tools are now feature-complete.
    • A new cephfs-volume-manager module is included that provides a high-level interface for creating “shares” for OpenStack Manila and similar projects.
    • There is now experimental support for multiple CephFS file systems within a single cluster (see the example commands below).
  • RGW:
    • The multisite feature has been almost completely rearchitected and rewritten to support any number of clusters/sites, bidirectional fail-over, and active/active configurations.
    • You can now access radosgw buckets via NFS (experimental).
    • The AWS4 authentication protocol is now supported.
    • There is now support for S3 request payer buckets.
    • The new multitenancy infrastructure improves compatibility with Swift, which provides a separate container namespace for each user/tenant.
    • The OpenStack Keystone v3 API is now supported. There are a range of other small Swift API features and compatibility improvements as well, including bulk delete and SLO (static large objects).
  • RBD:
    • There is new support for mirroring (asynchronous replication) of RBD images across clusters. This is implemented as a per-RBD image journal that can be streamed across a WAN to another site, and a new rbd-mirror daemon that performs the cross-cluster replication.
    • The exclusive-lock, object-map, fast-diff, and journaling features can be enabled or disabled dynamically (see the example commands below). The deep-flatten feature can be disabled dynamically but not re-enabled.
    • The RBD CLI has been rewritten to provide command-specific help and full bash completion support.
    • RBD snapshots can now be renamed.
  • RADOS:
    • BlueStore, a new OSD backend, is included as an experimental feature (a sample ceph.conf snippet is shown below). The plan is for it to become the default backend in the K or L release (Kraken or Luminous).
    • The OSD now persists scrub results and provides a librados API to query results in detail.
    • We have revised our documentation to recommend against using ext4 as the underlying filesystem for Ceph OSD daemons due to problems supporting our long object name handling.

Taken from the Ceph Jewel release notes.
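
If you want to try the experimental multiple file system support, it has to be switched on explicitly before a second file system can be created. Below is a minimal sketch using the ceph CLI; the pool names fs2_metadata and fs2_data and the file system name fs2 are placeholders for illustration:

    # Allow more than one CephFS file system per cluster (experimental in Jewel)
    ceph fs flag set enable_multiple true --yes-i-really-mean-it

    # Create dedicated metadata and data pools for the new file system
    # (pool names and placement group counts are examples only)
    ceph osd pool create fs2_metadata 64
    ceph osd pool create fs2_data 64

    # Create the second file system on those pools
    ceph fs new fs2 fs2_metadata fs2_data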
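
The dynamic RBD feature toggling, snapshot renaming and mirroring mentioned above are all driven from the rewritten rbd CLI. The sketch below assumes a pool called rbd and an image called myimage, both placeholders; the actual cross-cluster replication is carried out by the new rbd-mirror daemon, which needs to be running against each cluster and configured with a peer:

    # Toggle image features on the fly; journaling requires exclusive-lock
    rbd feature enable rbd/myimage exclusive-lock
    rbd feature enable rbd/myimage journaling

    # Rename an existing snapshot (snapshot names are placeholders)
    rbd snap rename rbd/myimage@before-upgrade rbd/myimage@pre-jewel

    # Mirror every journaling-enabled image in the pool to the peer cluster
    rbd mirror pool enable rbd pool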
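
BlueStore is gated behind Jewel's experimental feature switch, so it will not be used unless you opt in. Below is a sketch of the relevant ceph.conf settings, assuming the Jewel-era option names; treat it as a starting point for test clusters, not a production recipe:

    [osd]
    # Opt in to the experimental BlueStore backend; the alarming option name
    # is deliberate -- BlueStore is not production ready in Jewel
    enable experimental unrecoverable data corrupting features = bluestore rocksdb
    # Use BlueStore instead of the default FileStore for newly created OSDs
    osd objectstore = bluestore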

 

