Garage: Open-Source Distributed Object Storage
518 points by n3t 11 months ago | 141 comments
- makkesk8 11 months agoWe moved over to garage after running minio in production with ~2PB, after about 2 years of headache. Minio does not deal with small files very well, understandably so, since it doesn't keep a separate index of the files other than straight on disk. While SSDs can mask this issue to some extent, spinning rust, not so much. And speaking of replication, this just works... Minio's approach, even with synchronous mode turned on, tends to fall behind, and again, small files will pretty much break it altogether.
We saw about 20-30x performance gain overall after moving to garage for our specific use case.
- sandGorgon 11 months agoQuick question for advice - we have been evaluating minio for an in-house deployed storage for ML data. This is financial data which has to comply with a crap ton of regulations.
So we wanted lots of compliance features - like access logs, access approvals, short-lived (time-bound) accesses, etc etc.
how would you compare garage vs minio on that front?
- withinboredom 11 months agoYou will probably put a proxy in front of it, so do your audit logging there (nginx ingress mirror mode works pretty well for that)
- mdaniel 11 months agoAs a competing theory, since both Minio and Garage are open source, if it were my stack I'd patch them to log with the granularity one wished since in my mental model the system of record will always have more information than a simple HTTP proxy in front of them
Plus, in the spirit of open source, it's very likely that if one person has this need then others have this need, too, and thus the whole ecosystem grows versus everyone having one more point of failure in the HTTP traversal
- zimbatm 11 months agoThat's very cool; I didn't expect Garage to scale that well while being so young.
Are there other details you are willing/allowed to share, like the number of objects in the store and the number of servers you are balancing them on?
- j-pb 11 months agoWhat I'm really missing in this space is something like this for content addressed blob storage.
I feel like a lot of complexity and performance overhead could be reduced if you only store immutable blobs under their hash (e.g Blake3). Combined with a soft delete this would make all operations idempotent, blobs trivially cacheable, and all state a CRDT/monotonically mergeable/coordination free.
There is stuff like IPFS in the large, but I want this for local deployments as a S3 replacement, when the metadata is stored elsewhere like git or a database.
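The core of such a store is tiny; a minimal sketch (hashlib's BLAKE2 standing in for BLAKE3, which is third-party in Python):

    import hashlib

    class BlobStore:
        # Content-addressed store: the key IS the hash of the content, so
        # puts are idempotent, blobs are immutable and trivially cacheable,
        # and two replicas merge by plain set union -- no coordination.
        def __init__(self):
            self.blobs: dict[str, bytes] = {}
            self.deleted: set[str] = set()   # soft delete keeps state monotonic

        def put(self, data: bytes) -> str:
            key = hashlib.blake2b(data, digest_size=32).hexdigest()
            self.blobs[key] = data           # re-put of same bytes is a no-op
            return key

        def get(self, key: str):
            return None if key in self.deleted else self.blobs.get(key)

        def merge(self, other: "BlobStore") -> None:
            self.blobs.update(other.blobs)   # grow-only sets: union = merge
            self.deleted |= other.deleted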
- amluto 11 months agoI would settle for first-class support for object hashes. Let an object have metadata, available in the inventory, that gives zero or more hashes of the data. SHA256, some Blake family hash, and at least one decent tree hash should be supported. There should be a way to ask the store to add a hash to an existing object, and it should work on multipart objects.
IOW I would settle for content verification even without content addressing.
S3 has an extremely half-hearted implementation of this for “integrity”.
- ianopolous 11 months agoThat's how we use S3 in Peergos (built on IPFS). You can get S3 to verify the sha256 of a block on write and reject the write if it doesn't match. This means many mutually untrusting users can all write to the same bucket at the same time with no possibility for conflict. We talk about this more here:
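One way to get that behavior from stock S3, sketched with boto3 via the newer checksum headers (Peergos may use the sigv4 x-amz-content-sha256 mechanism instead; bucket and key here are made up):

    import base64, hashlib
    import boto3

    s3 = boto3.client("s3")     # bucket and key below are made up
    data = b"immutable block"
    checksum = base64.b64encode(hashlib.sha256(data).digest()).decode()

    # S3 recomputes the SHA-256 server-side and rejects the PUT on mismatch.
    s3.put_object(Bucket="demo", Key=hashlib.sha256(data).hexdigest(),
                  Body=data, ChecksumSHA256=checksum)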
- the_duke 11 months agoGarage splits the data into chunks for deduplication, so it basically already does content-addressed storage under the hood.
They probably don't expose it publicly though.
- j-pb 11 months agoYeah, and as far as I understood they use the key hash to address the overall object descriptor. So in theory using the hash of the file instead of the hash of the key should be a simple-ish change.
Tbh I'm not sure content-aware chunking isn't a siren's call:
- It sounds great on paper, but once you start storing encrypted blobs (which you have to do if you want e2e encryption) or compressed blobs (e.g. images), it won't work anymore.
- Ideally you would store things as blobs fine-grained enough that blob-level deduplication would suffice.
- Storing a blob across your cluster has additional compute, lookup, bookkeeping, and communication overhead, resulting in worse latency. Storing an object as a contiguous unit makes the cache/storage hierarchies happy and allows for optimisations like using `sendfile`.
- Storing blobs as a unit makes computational storage easier to implement, where instead of reading the blob and processing it, you would send a small WASM program to the storage server (or drive? https://semiconductor.samsung.com/us/ssd/smart-ssd/) and only receive the computation result back.
- od0 11 months agoTake a look at https://github.com/n0-computer/iroh
Open source project written in Rust that uses BLAKE3 (and QUIC, which you mentioned in another comment)
- j-pb 11 months agoIt certainly has a lot of overlap and is a very interesting project, but like most projects in this space, I feel like it's already doing too much. I think that might be because many of these systems also try to be user facing?
E.g. it tries to solve the "mutability problem" (having human readable identifiers point to changing blobs); there are blobs and collections and documents; there is a whole resolver system with their ticket stuff
All of these things are interesting problems, that I'd definitely like to see solved some day, but I'd be more than happy with an "S3 for blobs" :D.
- khimaros 11 months agoyou might be interested in https://github.com/perkeep/perkeep
- skinkestek 11 months agoPerkeep has (at least until last I checked it) the very interesting property of being completely impossible for me to make heads or tails of while also looking extremely interesting and useful.
So in the hope of triggering someone to give me the missing link (maybe even a hyperlink) for me to understand it, here is the situation:
I'm a SW dev who has also done a lot of sysadmin work. Yes, I have managed to install it. And that is about it. There seem to be so many features there, but I really really don't understand how I am supposed to use the product, or the documentation for that matter.
I could start an import of Twitter or something else and it kind of shows up. Same with anything else: photos etc.
It clearly does something, but it was impossible to understand what I am supposed to do next, both from the UI and also from the docs.
- breakingcups 11 months agoPerkeep is such a cool, interesting concept, but it seems like it's on life-support.
If I'm not mistaken, it used to be funded by creator Brad Fitz, who could afford to hire a full-time developer on his Google salary, but that time has sadly passed.
It suffers from having so many cool use-cases that it struggles to find a balance in presentation.
- mdaniel 11 months agoI was curious to see if I could help, and I wondered if you saw their mailing list? It seems to have some folks complaining about things they wish it did, which strangely enough is often a good indication of what it currently does
There's also "Show Perkeep"-ish posts like this one <https://groups.google.com/g/perkeep/c/mHoUUcBz2Yw> where the user made their own Pocket implementation complete with original page snapshotting
The thing that most stood out to me was the number of folks who wanted to use Perkeep to manage its own content AND serve as the metadata system of record for external content (think: an existing MP3 library owned by an inflexible media player such as iTunes). So between that and your "import Twitter" comment, it seems one of its current hurdles is that the use case one might have for a system like this needs to be "all in", otherwise it becomes the same problem as a removable USB drive for storing stuff: "oh, damn, is that on my computer or on the external drive?"
- tgulacsi 11 months agoBesides personal photo storage, I use the storage part for a file store at work (basically, indexing is off), with a simplifying wrapper for upload/download: github.com/tgulacsi/camproxy
With the adaptive block hashing (varying block sizes), it beats gzip for compression.
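The rough idea behind that adaptive chunking, as a toy sketch (the rolling hash and constants here are illustrative, not Perkeep's actual algorithm):

    def chunks(data: bytes, window: int = 48, mask: int = (1 << 13) - 1):
        # Split where a rolling hash of recent bytes matches a bit pattern,
        # so boundaries follow content rather than fixed offsets (~8 KiB
        # average chunks with a 13-bit mask). Identical runs of bytes then
        # dedupe even when data is inserted or shifted.
        h, start = 0, 0
        for i, b in enumerate(data):
            h = ((h << 1) + b) & 0xFFFFFFFF        # toy rolling hash
            if i - start + 1 >= window and (h & mask) == mask:
                yield data[start:i + 1]
                h, start = 0, i + 1
        if start < len(data):
            yield data[start:]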
- lockyc 11 months agoI agree 100%
- didntcheck 11 months agoOr some even older prior art (which I recall a Perkeep dev citing as an influence in a conference talk)
- j-pb 11 months agoYeah, there are plenty of dead and abandoned projects in this space. Maybe the concept is worthless without a tool for metadata management? Also I should probably have specified that by "missing" I mean "there is nothing well maintained and production grade" ^^'
- j-pb 11 months agoYeah, I've been following it on and off since it was Camlistore. Maybe it tried to do too much at once and didn't focus on just the blob part enough, but I feel like it never really reached a coherent state and story.
- BageDevimo 11 months agoHave you seen https://github.com/willbryant/verm?
- j-pb 11 months agoYeah, the subdirectories and mime-type seemed like an unnecessary complication. Also looks pretty dead.
- jiggawatts 11 months agoSomething related that I've been thinking about is that there aren't many popular data storage systems out there that use HTTP/3 and/or gRPC for the lower latency. I don't just mean object storage, but database servers too.
Recently I benchmarked the latency to some popular RPC, cache, and DB platforms and was shocked at how high the latency was. Everyone still talks about 1 ms as the latency floor, when it should be the ceiling.
- j-pb 11 months agoYeah, QUIC would probably be a good protocol for such a system. Roundtrips are also expensive; ideally your client library would cache as much data as the local disk can hold.
- singinwhale 11 months agoSounds a little like Kademlia, the DHT implementation that BitTorrent uses.
It's a distributed hash table where the value mapped to a hash is immutable after it is STOREd (at least in the implementations that I know)
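For reference, the metric at Kademlia's core is tiny (a sketch; real implementations maintain k-buckets of peers ordered by this distance):

    def xor_distance(a: bytes, b: bytes) -> int:
        # Kademlia's metric: the distance between two node/key IDs is their
        # XOR, read as an integer. Lookups repeatedly query the peers whose
        # IDs minimize this distance to the target key.
        return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")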
- j-pb 11 months agoKademlia could certainly be part of a solution to this, but it's a long road from the algorithm to a binary that you can start on a bunch of machines to get the service, e.g. something like SeaweedFS. BitTorrent might actually be the closest thing we have to this, but it sits at the opposite end of the latency/distribution spectrum.
- rakoo 11 months agoBut you don't really handle blobs in real life: they can't really be handled, they don't have memorable names (by design). So you need an abstraction layer on top. You can use zfs, which will deduplicate similar blobs. You can use restic for backups, which will also deduplicate similar parts of a file in an idempotent way. And you can use git, which will deduplicate files based on their hash.
- compressedgas 11 months agoYou might also be interested in Tahoe-LAFS https://www.tahoe-lafs.org/
- j-pb 11 months agoI get a
> Trac detected an internal error:
> IOError: [Errno 28] No space left on device
So it looks like it is pretty dead like most projects in this space?
- diggan 11 months agoBecause the website seems to have a temporary issue, the project must be dead?
Tahoe-LAFS seems alive and continues development, although it seems to not have seen as many updates in 2024 as previous years: https://github.com/tahoe-lafs/tahoe-lafs/graphs/contributors
- snthpy 11 months agoHave a look at LakeFS (https://docs.lakefs.io/understand/architecture.html).
Files are stored by hash on S3; metadata is stored in a Postgres database. I run it locally and access it just like an S3 store.
- ramses0 11 months agoCheck out SeaweedFS too; it makes some interesting tradeoffs, but I hear you on wanting some of the properties you're looking for.
- tempest_ 11 months agoI am using seaweed for a project right now. Some things to consider with seaweed.
- It works pretty well, at least up to the 15B objects I am using it for. Running on 2 machines with about 300TB (500TB raw) storage on each.
- The documentation, specifically with regards to operations like how to backup things, or different failure modes of the components can be sparse.
- One example of the above: I spun up a second filer instance (which is supposed to sync automatically), which caused the master server to emit an error while it was syncing. The only way to know it was working was watching the new filer's storage slowly grow.
- Seaweed has a bus factor of essentially one, though the dev is pretty responsive and seems to accept PRs at a steady rate.
- SOLAR_FIELDS 11 months agoI use seaweed as well. It has some warts as well as some feature incompleteness, but I think the simplicity of the project itself is a pretty nice feature. It's mostly grokkable pretty quickly, since it's only one dev and the codebase is pretty small.
- rkunnamp 11 months agoAn IPFS-like "coordination free" local S3 replacement! Yes. That is badly needed.
- lima 11 months agoThe RADOS K/V store is pretty close. Ceph is built on top of it but you can also use it as a standalone database.
- yencabulator 11 months agoNothing is content-addressed in RADOS. It's just a key-value store with more powerful operations than get/put, and it's more in the strong-consensus camp than the parent's request for coordination-free things.
(Disclaimer: ex-Ceph employee.)
- SOLAR_FIELDS 11 months agoCan you point me towards resources that help me understand the trade offs being implied here? I feel like there is a ton of knowledge behind your statement that flies right past me because I don’t know the background behind why the things you are saying are important.
- computerfan494 11 months agoI have used Garage for a long time. It's great, but the AWS sigv4 protocol for accessing it is just frustrating. Why can't I just send my API key as a header? I don't need the full AWS SDK to get and put files, and AWS sigv4 adds a ton of extra complexity to my projects. I don't care about the "security benefits" of AWS sigv4. I hope the authors consider a different authentication scheme so I can recommend Garage more readily.
- dopylitty 11 months agoI read that curl recently added sigv4 for what that’s worth[0]
- zipping1549 11 months agoOf course curl has it
- 6LLvveMx2koXfwn 11 months agoImplementing v4 on the server side also requires the service to keep the token in plain text. If it's a persistent password, rather than an ephemeral key, that opens up a whole host of security issues around password storage. And on the flip side, requiring the client to hit an endpoint to receive a session-based token is even more crippling from a performance perspective.
- ianopolous 11 months agoYou can implement S3 V4 signatures in a few hundred lines of code.
https://github.com/Peergos/Peergos/blob/master/src/peergos/s...
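The heart of it is just a chain of HMACs; a minimal sketch (building the canonical request and string-to-sign is omitted):

    import hashlib, hmac

    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    def sigv4_signature(secret_key: str, date: str, region: str,
                        service: str, string_to_sign: str) -> str:
        # Derive the signing key (4 HMACs), then sign the string-to-sign:
        # the "5 hmac-sha256's" mentioned in the reply below.
        k_date = _hmac(("AWS4" + secret_key).encode(), date)  # e.g. "20240115"
        k_region = _hmac(k_date, region)
        k_service = _hmac(k_region, service)                  # e.g. "s3"
        k_signing = _hmac(k_service, "aws4_request")
        return hmac.new(k_signing, string_to_sign.encode(),
                        hashlib.sha256).hexdigest()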
- computerfan494 11 months agoI have done this for my purposes, but it's slow and unnecessary bloat I wish I didn't have to have.
- ianopolous 11 months ago5 hmac-sha256's per signature are slow?
- surfingdino 11 months agoIt makes sense to tap into the existing ecosystem of AWS S3-compatible clients.
- otabdeveloper4 11 months agoPlain HTTP (as in curl without any extra headers) is already an S3-compatible client.
If this 'Garage' doesn't support the plain HTTP use case then it isn't S3 compatible.
- nulld3v 11 months agoOnly if you are not doing auth right? If you need to auth then you need to send a request with headers.
- neon_me 11 months agoCheck something like PicoS3 or https://github.com/sentienhq/ultralight-s3
There are a few "very minimal" sigv4 implementations...
- klysm 11 months agoSending your api key in the header is equivalent to basic auth.
- computerfan494 11 months agoYep, and that's fine with me. I don't have a problem with basic auth.
- vineyardmike 11 months agoThis is not intended for commercial services. Realistically, this software was made for people who keep servers in their basement. The security profile of LAN users is very different from that of public AWS.
- anonzzzies 11 months agoThe site says it was (initially) made for and used by a commercial French hoster.
- iscoelho 11 months agoYou know FOSS software runs most of the internet right? (and, if you'll believe it, AWS internally)
I would find it completely unsurprising to see Garage used in some capacity by a Fortune 500 by the end of the year (not that they'd publicly say it).
- TechDebtDevin 11 months agoSeaweedFS is great as well.
- n_ary 11 months agoTried this for my own homelab; either I misconfigured it or it consumes 2x the stored data volume in (working) memory, linearly. So, for example, if I put in 1GB of data, seaweed would immediately consume 2GB of memory, constantly!
Edit: memory = RAM
- TechDebtDevin 11 months agoThat is odd. It likely has something to do with the index caching and how many replication volumes you configured. By default it indexes all file metadata in RAM (I think) but that wouldn't justify that type of memory usage. I've always used mostly default configurations in Docker Swarm, similar to this:
https://github.com/cycneuramus/seaweedfs-docker-swarm/blob/m...
- crest 11 months agoAre you claiming that SeaweedFS requires twice as much RAM as the sum of the sizes of the stored objects?
- n_ary 11 months agoCorrect. I experimented by varying the data volume; memory use was linearly correlated, at 2x the data volume.
- evanjrowley 11 months agoLooks awesome. Been looking for some flexible self-hosted WebDAV solutions and SeaweedFS would be an interesting choice.
- genewitch 11 months agoDepending on what you need it for, Nextcloud has WebDAV (clients can interact with it, and Windows can mount your home folder directly; I just tried it out a couple days ago). I've never used WebDAV before, so I'm unsure of what other use cases there are, but the Nextcloud implementation (whatever it may be) was friction-free - everything just worked.
- fijiaarone 11 months agoI don’t understand why everyone wants to replicate AWS APIs for things that are not AWS.
S3 is a horrible interface with a terrible lack of features. It's just file storage without any of the benefits of a file system - no metadata, no directory structures, no ability to search, sort, or filter.
Combine that with high-latency network file access and an overly verbose API. You literally have a bucket for storing files, when you used to have a toolbox with drawers, folders, and labels.
Replicating a real file system is not that hard, and when you lose the original reason for using a bucket - because you were stuck in the swamp with nothing else to carry your files in - why keep using it when you're out of the mud?
- vineyardmike 11 months agoDoes your file system have search? Mine doesn't. Instead I have software that implements search on top of it. Does it support filtering? Mine uses software on top again - which works just as well on top of an S3 API.
Does your remote file server magically avoid network latency? Mine doesn’t.
In case you didn’t know, inside the bucket you can use a full path for S3 files. So you can have directories or folders or whatever.
Some benefits of this system (KV style access) is to support concurrent usage better. Not every system needs it, but if you’re using an object store you might.
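For example, with boto3 (endpoint, bucket, and key names here are made up), prefix-plus-delimiter listing gives exactly that directory view:

    import boto3

    # Endpoint, bucket, and keys are made up; any S3-compatible store works.
    s3 = boto3.client("s3", endpoint_url="http://localhost:3900")
    s3.put_object(Bucket="demo", Key="users/alice/photos/cat.jpg", Body=b"...")

    # Prefix + Delimiter emulates browsing a directory:
    resp = s3.list_objects_v2(Bucket="demo", Prefix="users/alice/", Delimiter="/")
    for p in resp.get("CommonPrefixes", []):   # the "subdirectories"
        print(p["Prefix"])                     # -> users/alice/photos/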
- psychoslave 11 months agoBeOS's file system at least has this
- dddw 11 months agoSo Haiku has it?
- acdha 11 months ago> Replicating a real file system is not that hard
What personal experience do you have in this area? In particular, how have you handled greater than single-server scale, storage-level corruption, network partitions, and atomicity under concurrent access?
- nh2 11 months agoI use CephFS.
Blob storage is easier than a POSIX file system:
A POSIX FS has server-client state: the concept of opened files and directories and their states, locks, and the ability for multiple writers to write to the same file while still providing POSIX guarantees.
All of those need to correctly handle failure of both the client and the server.
CephFS implements that with a Metadata Server that has lots of logic and needs plenty of RAM.
A distributed file system like CephFS is more convenient than S3 in multiple ways, and I agree it's preferable for most use cases. But it's undoubtedly more complex to build.
- crabbone 11 months agoIt's a legitimate question and I'm glad you asked! (I'm not the author of Garage and have no affiliation).
Filesystems impose a lot of constraints on data-consistency that make things go slow. In particular, when it comes to mutating directory structure. There's also another set of consistency constraints when it comes to dealing with file's contents. Object stores relax or remove these constraints, which allows them to "go faster". You should, however, carefully consider if the constraints are really unnecessary for your case. The typical use-case for object stores is something like storing volume snapshots, VM images, layers of layered filesystems etc. They would perform poorly if you wanted to use them to store the files of your programming project, for example.
- favadi 11 months ago> S3 is a horrible interface with a terrible lack of features.
Because it turns out that most applications don't require that many features when it comes to persistent storage.
- duskwuff 11 months ago> I don’t understand why everyone wants to replicate AWS APIs for things that are not AWS.
It's mostly just S3, really. You don't see anywhere near as many "clones" of other AWS services like EC2, for instance.
And there's a ton of value on being able to develop against a S3 clone like Garage or Minio and deploy against S3 - or being able to retarget an existing application which expected S3 to one of those clones.
- Scaevolus 11 months agoS3 exposes effectively all the metadata that POSIX APIs do, in addition to all the custom metadata headers you can add.
Implementing a filesystem versus an object store involves severe tradeoffs in scalability and complexity that are rarely worth it for users that just want a giant bucket to dump things in.
The API doesn't matter that much, but everything already supports S3, so why not save time on client libraries and implement it? It's not like some alternative PUT/GET/DELETE API will be much simpler-- though naturally LIST could be implemented myriad ways.
- nh2 11 months agoThere are many POSIX APIs that S3 does not cover - for example directories, and thus efficient renames and atomic moves of sub-hierarchies.
- senderista 11 months agoS3 Express supports efficient directory-level operations.
- didntcheck 11 months agoYou wouldn't want your "interactive" user filesystem on S3, no, but as the storage backend for a server application it makes sense. In those cases you very often are just storing everything in a single flat folder with all the associated metadata in your application's DB instead
By reducing the API surface (to essentially just GET, PUT, DELETE), it increases the flexibility of the backend. It's almost trivial to do a union mount with object storage, where half the files go to one server and half go to another (based on a hash of the name). This can and is done with POSIX filesystems too, but it requires more work to fully satisfy the semantics. One of the biggest complications is having to support file modification and mmap. With S3 you can instead only modify a file by fully replacing it with PUT. Which again might be unacceptable for a desktop OS filesystem, but many server applications already satisfy this constraint by default
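A sketch of that hash-based routing (endpoints are hypothetical):

    import hashlib

    BACKENDS = ["http://store-a:3900", "http://store-b:3900"]  # hypothetical

    def backend_for(key: str) -> str:
        # A stable hash of the object name picks the backend. With no
        # rename or mmap semantics to preserve, this is all a "union
        # mount" of object stores needs.
        digest = hashlib.sha256(key.encode()).digest()
        return BACKENDS[digest[0] % len(BACKENDS)]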
- klysm 11 months ago> Replicating a real file system is not that hard
Ummmm what? Replicating a file system is insanely hard
- Nathanba 11 months agoIt's because many other cloud services support sending data to S3; that's pretty much it.
- TheColorYellow 11 months agoBecause at this point it's a well known API. I bet people want to recreate AWS without the Amazon part, and so this is for them.
Which, to your point, makes no sense because as you rightly point out, people use S3 because of the Amazon services and ecosystem it is integrated with - not at all because it is "good tech"
- acdha 11 months agoS3 was the second AWS service, behind SQS, and saw rapid adoption which cannot be explained by integration with services introduced later.
- vlovich123 11 months agoStorage is generally sticky but I wouldn’t be so quick to dismiss that reason because it might explain why anything would fail to displace it; a bunch of software is written against S3 and the entire ecosystem around it is quite rich. It doesn’t explain the initial popularity but does explain stickiness. Initial popularity was because it was the first good REST API to do cloud storage AND the price was super reasonable.
- otabdeveloper4 11 months agoS3 is just HTTP. There isn't really an ecosystem for S3, unless you just mean all the existing http clients.
- neon_me 11 months agoWhat's the motivation behind a project like this one?
We got ceph, minio, seaweedfs ... and a dozen others. I am genuinely curious: what is the goal here?
- rakoo 11 months agoI can only answer for Garage and not others. Garage is the result of the desired organization of the collective behind it: Deuxfleurs. The model is that of people willing to establish a horizontal governance, with none being forced to do anything because it all works by consensus. The idea is to have an infrastructure serving the collective, not a self-hosted thing that everyone has to maintain, not something in a data center because that has clear ecological impacts, but something in-between. Something that can be hosted on second-hand machines, at home, but taking the low reliability of machines/electricity/residential internet into account. Some kind of cluster, but not the kind you find in the cloud where machines are supposed to be kind of always on, linked with high-bandwidth, low-latency network: quite the opposite actually.
deuxfleurs thought long and hard about the kind of infra this would translate to. The base came fast enough: some kind of storage, based on a standard (even de-facto only is good because it means it is proven), that would tolerate some nodes go down. The decision of doing a Dynamo-like thing to be accessed through S3 with eventual consistency made sense
So Garage is not "simply" an S3 storage system: it is a system to store blobs in an unreliable but still trusted consumer-grade network of passable machines.
- koito17 11 months agoMinio assumes each node has identical hardware. Garage is designed for use-cases like self-hosting, where nodes are not expected to have identical hardware.
- otabdeveloper4 11 months agoMinio doesn't, it has bucket replication and it works okay.
- WhereIsTheTruth 11 months agoperformance, therefore cheaper
- iscoelho 11 months agoNot just about cost! Improved performance/latency can make workloads that previously required a local SSD/NVMe actually able to run on distributed storage or an object store.
It cannot be overstated how slow Ceph/Minio/etc. can be compared to local NVMe. There is plenty of room for improvement.
- Daviey 11 months agoLast time I looked at Garage it only supported paired storage replication, such that if I had a 10GB disk in location A and 1TB disks in locations B and C, it would only support "RAID1-esque" mirroring, so my storage would be limited to 10GB.
- leansensei 11 months agoThat's a deliberate design decision.
- sunshine-o 11 months agoI really appreciate the low memory usage of Garage compared to Minio.
The only thing I am missing is the ability to automatically replicate some buckets to AWS S3 for backup.
- storagenerd 11 months agoCheck out this one - https://github.com/NVIDIA/aistore
It is an object storage system and more.
- dtag00 11 months agoA bit of an off-topic question: I would like to programmatically generate S3 credentials that allow only read access or r/w access to only a certain set of prefixes. Imagine something like "Dropbox": You have a set of users, each user has his own prefix, but users also want to be able to share certain prefixes with other users. (Users are managed externally in a Postgres DB - MinIO currently does not know about them.)
I found this really difficult to achieve with MinIO, since it appears to require an AssumeRole request, which is barely documented, and I did not find a Typescript example. Additionally, there's a weird set of restrictions in place for MinIO (and also AWS), e.g. the size of policies is limited, which effectively limits the number of prefixes a user can share. I found that hard to work around.
Can anyone suggest a way to do this? Can garage do this? Am I just approaching this from the wrong side?
Thanks
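Not a full answer, but the usual shape of this is an inline session policy passed along with the STS AssumeRole request; both AWS and MinIO size-limit that policy (2048 characters on AWS), which is the prefix-count ceiling mentioned above. A sketch of such a policy (bucket and prefix names are placeholders):

    import json

    def prefix_policy(bucket: str, prefixes: list[str]) -> str:
        # Inline session policy granting r/w only under the given prefixes.
        # AWS caps this document at 2048 characters, which is what limits
        # how many prefixes a single credential can be scoped to.
        return json.dumps({
            "Version": "2012-10-17",
            "Statement": [
                {   # object-level access under each shared prefix
                    "Effect": "Allow",
                    "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                    "Resource": [f"arn:aws:s3:::{bucket}/{p}*" for p in prefixes],
                },
                {   # listing, restricted to the same prefixes
                    "Effect": "Allow",
                    "Action": ["s3:ListBucket"],
                    "Resource": [f"arn:aws:s3:::{bucket}"],
                    "Condition": {
                        "StringLike": {"s3:prefix": [f"{p}*" for p in prefixes]}
                    },
                },
            ],
        })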
- arcanemachiner 11 months agoGitHub mirror: https://github.com/deuxfleurs-org/garage
- icy 11 months agoI've been running this on K3s at home (for my website and file server) and it's been very well behaved: https://git.icyphox.sh/infra/tree/master/apps/garage
I find it interesting that they chose CRDTs over Raft for distributed consensus.
- iscoelho 11 months agofrom an operations point of view, I am surprised anyone likes Raft. I have yet to see any application implement Raft in a way that does not spectacularly fail in production and require manual intervention to resolve.
CRDTs do not have the same failure scenarios and favor uptime over consistency.
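To make the tradeoff concrete: a last-writer-wins register, the simplest CRDT building block, merges with no coordination at all (a sketch, not Garage's actual code):

    from dataclasses import dataclass

    @dataclass
    class LWWRegister:
        # Last-writer-wins register, one of the simplest CRDTs. Replicas
        # merge in any order, any number of times, and still converge --
        # no leader election, so no "cluster lost quorum" failure mode.
        timestamp: float
        node_id: str      # tie-breaker makes the merge deterministic
        value: bytes

        def merge(self, other: "LWWRegister") -> "LWWRegister":
            # max() over (timestamp, node_id) is commutative, associative,
            # and idempotent -- the properties that define a CRDT merge.
            return max(self, other, key=lambda r: (r.timestamp, r.node_id))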
- moffkalast 11 months agoFinally one can launch startups from their own Garage again.
- MoodyMoon 11 months agoApache Ozone is an alternative object store that runs on top of Hadoop. Maybe someone who has experience running it in a production environment can comment on it.
- CyberDildonics 11 months agoWhat is the difference between a "distributed object storage" and a file system?
- vineyardmike 11 months agoIt’s an S3 api compatible object store that supports distributed storage across different servers.
Object store = store blobs of bytes. Usually by bucket + key accessible over HTTP. No POSIX expectation.
Distributed = works spread across multiple servers in different locations.
- CyberDildonics 11 months ago> store blobs of bytes
Files
> by bucket
Directories
> key accessible
File names
> over HTTP
Web server
- crest 11 months agoFiles are normally stored hierarchically (e.g. you can atomically move directories) and updated in place. Objects are normally considered to exist in a flat namespace and are written/replaced atomically. Object storage requires less expensive (in a distributed system) metadata operations. This means it's both easier and faster to scale out object storage.
- crabbone 11 months agoThere are a few.
From the perspective of consistency guarantees, object storage gives fewer of them (this is seen as allowing implementations to be faster than typical file systems). For example, since there isn't a concept of directories in an object store, the implementation doesn't need to deal with the problems that arise while copying or moving directories with files open in those directories.
There are some non-storage functions that are performed only by filesystems, not object storage. For example, suid bits.
It's also much more common to use object stores for larger chunks of data such as whole disk snapshots, VM images etc., while filesystems aim for the middle size (small being RDBMSs), such as text files you'd open in a text editor. Subsequently, they are optimized for these objectives: filesystems care a lot about what happens when random small incremental and possibly overlapping updates happen to the same file, while object stores care most about the performance of sequential reads and writes.
This excludes the notion of "distributed" as both can be distributed (and in different ways). I suppose you meant to ask about the difference between "distributed object storage" and "distributed filesystem".
- surfingdino 11 months agoThere's also OpenStack Swift.
- giulivo 11 months agoI believe OpenStack Swift in particular is known to work well in some large organizations [1]; NVIDIA is one of those and has also invested in its maintenance [2].
1. https://www.youtube.com/watch?v=H1DunJM1zoc 2. https://platform.swiftstack.com/docs/
- seaghost 11 months agoI want something very simple to run locally that has s3 compatibility just for the dev work and testing. Any recommendations?
- zX41ZdbW 11 months agoMinio is fairly easy to setup locally or in CI.
We use it for CI in ClickHouse, for example: https://github.com/ClickHouse/ClickHouse/blob/master/docker/...
- thecleaner 11 months agoIs this formally verified by any chance? I feel like there's space where formal designs could be expressed in TLA+, such that it's easier for the community to keep track of the design.
- halfa 11 months agoThere is a formal proof for some parts of Garage's layout system, see https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/...
- comvidyarthi 11 months agoIs this open source?
- anonzzzies 11 months agoNLNet sponsored a lot of nice things.
- lifty 11 months agoThe EU, but yeah. NLNet are the ones that judged the applications and disbursed the funds.
- bluepuma77 11 months agoCan it be easily deployed with old-school Docker Swarm?
- TristanBall 11 months agoI don't think I have ever personally felt older than having someone describe anything docker related as "old-school"
- bluepuma77 11 months agoThat was not my intention!
Docker is young and fashionable; every Windows script kiddie uses it nowadays!
And then comes to the Docker forum complaining about strange issues, not realizing Docker Desktop is a different product: it uses a Linux VM to run the Docker engine, which was built for Linux ;-)
I explicitly wrote "old-school Docker Swarm", as it has been missing love for years and everyone with 2 IT FTEs seems to be moving to k8s.
- CoolCold 11 months agoI was insulted as well - luckily just a mental insult, given my age and the context of what's old and what's not :)