Ceph consists of the following components:
RADOS: the Reliable Autonomic Distributed Object Store is an object store. RADOS takes care of distributing objects across the whole storage cluster and replicating them for fault tolerance. It is built from 3 major components (a client-side sketch follows the component list below):
- Object Storage Daemon (OSD): the storage daemon, i.e. the RADOS service that holds your data. You must have this daemon running on each server of your cluster. Each OSD can have an associated hard disk. For performance it is usually better to pool your hard disks with RAID arrays, LVM or btrfs pooling; that way one server runs one daemon. By default, three pools are created: data, metadata and rbd.
- Meta-Data Server (MDS): this is where the metadata is stored. MDSs build a POSIX file system on top of objects for Ceph clients. If you are not using the Ceph File System, you do not need a metadata server.
- Monitor (MON): this lightweight daemon handles all communication with external applications and clients, and provides consensus for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point it at the address of a MON server. It checks the state and the consistency of the data. In an ideal setup you run at least 3 ceph-mon daemons on separate servers. Quorum decisions are made by majority vote (with N monitors, floor(N/2)+1 must agree), which is why an odd number of monitors is expressly needed.
- Ceph OSD (Object Storage Daemons): stores data objects and handles replication, recovery and so on. It is recommended to have one OSD per physical disk.
- Ceph MON (Monitors): maintains the overall health of the cluster by keeping the cluster map state, including the Monitor map, OSD map, Placement Group (PG) map and CRUSH map. Monitors receive state information from other components to maintain the maps and circulate them to other Monitor and OSD nodes.
- MDSs: a Ceph Metadata Server (MDS) stores metadata on behalf of the Ceph Filesystem (Ceph Block Devices and Ceph Object Storage do not use MDS). Ceph Metadata Servers make it feasible for POSIX file system users to run basic commands like ls and find without placing an enormous burden on the Ceph Storage Cluster.
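To make the client side concrete, here is a minimal sketch of writing and reading one object directly against RADOS through the python-rados bindings. It assumes a reachable cluster configured in /etc/ceph/ceph.conf and a pool named data (one of the default pools mentioned above); the object name and its contents are illustrative.

 import rados
 
 # Connect to the cluster; MON addresses are read from ceph.conf.
 cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
 cluster.connect()
 try:
     # Open an I/O context on the 'data' pool and store one object.
     ioctx = cluster.open_ioctx('data')
     try:
         ioctx.write_full('hello-object', b'stored and replicated by RADOS')
         print(ioctx.read('hello-object'))
     finally:
         ioctx.close()
 finally:
     cluster.shutdown()

RADOS itself decides, via CRUSH, which OSDs end up holding the replicas; the client never addresses an OSD directly.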
Transport services:
- Ceph RGW (Object Gateway / RADOS Gateway): a REST API interface compatible with Amazon S3 and OpenStack Swift.
- Ceph RBD (RADOS Block Device): block devices for virtual machines, with snapshotting, provisioning and compression (see the sketch after this list).
- CephFS (File System): distributed POSIX NAS storage. Mounting is done via FUSE.
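As a companion to the RBD item above, here is a minimal sketch of creating a block device image with the python-rbd bindings, assuming the same cluster configuration as the RADOS example and the default rbd pool; the image and snapshot names are illustrative.

 import rados
 import rbd
 
 cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
 cluster.connect()
 try:
     ioctx = cluster.open_ioctx('rbd')
     try:
         # Create a 1 GiB image; RBD stripes it across RADOS objects.
         rbd.RBD().create(ioctx, 'vm-disk-0', 1024 ** 3)
         image = rbd.Image(ioctx, 'vm-disk-0')
         try:
             image.create_snap('initial')  # point-in-time snapshot
             print(image.size())
         finally:
             image.close()
     finally:
         ioctx.close()
 finally:
     cluster.shutdown()

A virtualization host would normally attach such an image through qemu/libvirt rather than this API, but the underlying RADOS objects are the same.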
For a production system it is recommended to use five physical or virtual servers: one server for data (OSD), one server for metadata (MDS), two monitor servers, and an admin server (the first client).
For RGW and RBD only the OSD and MON daemons are needed, since metadata (MDS) is required only for CephFS (running multiple metadata servers is still under development and experimental).

[[Pilt:Ceph-topo.jpg]]
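Since every client bootstraps through the monitors, one way to verify such a layout is to query a MON for the cluster status. Below is a minimal sketch using the mon_command() call of python-rados; the JSON command mirrors what the ceph status CLI sends, and the exact keys in the reply vary by Ceph version.

 import json
 import rados
 
 cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
 cluster.connect()
 try:
     # Same JSON command the 'ceph status' CLI sends to a monitor.
     ret, outbuf, errs = cluster.mon_command(
         json.dumps({'prefix': 'status', 'format': 'json'}), b'')
     status = json.loads(outbuf)
     print(status['quorum_names'])  # monitors currently in quorum
 finally:
     cluster.shutdown()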
Links

Building a test cluster: http://ceph.com/docs/master/start/quick-ceph-deploy/

Building another cluster: http://www.server-world.info/en/note?os=CentOS_6&p=ceph

Performance tests: https://software.intel.com/en-us/blogs/2013/10/25/measure-ceph-rbd-performance-in-a-quantitative-way-part-i

Nice diagrams, but in a foreign language: http://wiki.zionetrix.net/informatique:systeme:ha:ceph

Another possible combination: http://www.openclouddesign.org/articles/vdi-storage/ceph-highly-scalable-open-source-distributed-file-system