
Red Hat Acquiring Cloud Storage Company Gluster

Soulskill posted more than 2 years ago | from the investing-in-intangibles dept.


Julie188 writes "One of the more interesting aspects of Red Hat's acquisition of virtual storage vendor Gluster on Tuesday is how it drags Red Hat into bed with its cloud competitor OpenStack. Red Hat made waves over the summer in the open source community when one of its executives threw punches at OpenStack's community, saying the community amounted to not much more than a bunch of press releases. In July, Gluster contributed its Connector for OpenStack. It enables features such as live migration of VMs, instant boot of VMs, and movement of VMs between clouds on a GlusterFS environment. While Fedora has already said that its upcoming Fedora 16 would support OpenStack, Fedora is a community distro and not beholden to Red Hat. However, Red Hat today promised that it would continue to support and maintain Gluster's contribution to OpenStack. It didn't, however, to promise to quit the smack talk."


34 comments

In other news. (-1)

Anonymous Coward | more than 2 years ago | (#37607724)

Not a shit was given today about anything.

Re:In other news. (0)

Anonymous Coward | more than 2 years ago | (#37607756)

Not a shit was given today about anything.

No shit. So this guy is an employee of RH and has a personal disdain for OpenStack. So fucking what? I haven't seen evidence that everyone else at RH agrees with him. I haven't even seen evidence that this is official company policy ("all employees must conform or be fired"), or that this is more than one man's feelings. Hey world, did you know that different people involved in any large industry might have different opinions about different projects? Wow, this is real breaking news. This is earth-shattering. This is amazing and unprecedented in every way.

Re:In other news. (-1, Offtopic)

cultiv8 (1660093) | more than 2 years ago | (#37608188)

I recently decided to stop posting comments and articles to /. Some type of change happened after cmdrtaco left, others claim this happened years ago. You can see it with this comment:

It didn't, however, to promise to quit the smack talk.

There are more examples I could pull out, many on the current homepage, but I'm lazy. The point is I no longer feel this is news for nerds and stuff that matters; it's more like a DRUDGE Report for geeks with a smattering of PR and marketing hype. Are you deliberately looking for geeky-type news and slanting it for a libertarian audience? Is /. destined to become another news outlet where articles and comments are posted by PR firms?

Have you noticed how the number of comments has decreased? How the number of articles per day has increased? Granted, my account is a 7-digit one, but still, something is different. I still visit daily, but I don't plan to post. I'm sure this will be modded as troll, which I'm definitely not trying to be. This is my Dear John letter.

Anyways /. it's been fun.

Re:In other news. (-1)

Anonymous Coward | more than 2 years ago | (#37608214)

You need to get off your fucktardedly high horse.

Re:In other news. (-1)

Anonymous Coward | more than 2 years ago | (#37608296)

I recently decided to stop posting comments and articles to /. Some type of change happened after cmdrtaco left, others claim this happened years ago. You can see it with this comment:

You're posting now, retard.

Anyways /. it's been fun.

Don't let the door hit your ass on the way out, punk.

Re:In other news. (-1)

Anonymous Coward | more than 2 years ago | (#37608448)

So quit. I did, with my 4-digit id and all. Randomized my registered email, scrambled my password. I occasionally read when I'm bored is all.

Re:In other news. (0)

mister_playboy (1474163) | more than 2 years ago | (#37608926)

Bye, n00b. :)

Re:In other news. (-1)

Anonymous Coward | more than 2 years ago | (#37609188)

Anybody with any sense stopped taking Slashdot seriously about 7 years ago. The only reason to glance in and post since is because it's like visiting the retard zoo.

Awesome (2)

Crothers (1288120) | more than 2 years ago | (#37607760)

This is great news; Red Hat will keep it open source. I'm glad Oracle didn't get their hands on it and commercialize it like they did MySQL (the commercial plugins in 5.5.16 are what I'm referencing). I much prefer Red Hat's approach.

Re:Awesome (2)

Sadsfae (242195) | more than 2 years ago | (#37610876)

This is great news; Red Hat will keep it open source. I'm glad Oracle didn't get their hands on it and commercialize it like they did MySQL (the commercial plugins in 5.5.16 are what I'm referencing).

I much prefer Red Hat's approach.

I couldn't agree more, they have a track record for doing the right thing.

The best part (-1)

Anonymous Coward | more than 2 years ago | (#37607768)

Best part of acquisition: Gluster fsck

Re:The best part (3, Informative)

bobinabottle (819829) | more than 2 years ago | (#37608618)

Best part of acquisition: Gluster fsck

Unfortunately not, it would seem, according to this. [gluster.com]

As your volume size grows beyond 32TBs, fsck (filesystem check) downtime becomes a huge problem. GlusterFS has no fsck. It heals itself transparently with very little impact on performance.
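For context on the self-healing the quote describes, replica repair in GlusterFS is inspected and driven from the `gluster` CLI rather than an offline fsck. A hedged sketch (the volume name `gv0` is hypothetical, and the `heal` subcommand only exists in later GlusterFS releases, roughly 3.3 onward):

```shell
# Illustrative only: inspect and trigger self-heal on a replicated
# GlusterFS volume. "gv0" is a placeholder volume name.
gluster volume heal gv0 info    # list files still pending heal
gluster volume heal gv0         # kick off a heal of pending entries
gluster volume status gv0       # verify the self-heal daemon is running
```

The point of the quoted marketing text is that this happens online and in the background, so there is no fsck-style downtime window as the volume grows.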

Re:The best part (0)

Anonymous Coward | more than 2 years ago | (#37616402)

Best part of acquisition: Gluster fsck

Unfortunately not, it would seem, according to this. [gluster.com]

As your volume size grows beyond 32TBs, fsck (filesystem check) downtime becomes a huge problem. GlusterFS has no fsck. It heals itself transparently with very little impact on performance.

It was meant as a joke: Gluster rhymes with cluster, and "fsck" is often used by Slashdotters as a stand-in for "f*ck".

Pot calling kettle (-1)

Anonymous Coward | more than 2 years ago | (#37607826)

RHEV Manager is an ActiveX control that runs in Internet Explorer only! A Linux-based virtualization manager? Red Hat doesn't even have press releases about it. I don't know OpenStack, but I'd rather have nothing more than feelings than require my customers to buy my competitor's OS, and use a very specific, ugly feature of that OS, just to claim I had something.

Re:Pot calling kettle (0)

Anonymous Coward | more than 2 years ago | (#37607946)

RHEV Manager is an ActiveX control that runs in Internet Explorer only! A Linux-based virtualization manager? Red Hat doesn't even have press releases about it. I don't know OpenStack, but I'd rather have nothing more than feelings than require my customers to buy my competitor's OS, and use a very specific, ugly feature of that OS, just to claim I had something.

Ummm. Not Mr. Current, are you?

RHEV 3 is Java-based and runs on Linux/Windows/Solaris, etc.

Re:Pot calling kettle (2)

atomic-penguin (100835) | more than 2 years ago | (#37608126)

So OpenStack is a hypervisor-independent private cloud API. Its corporate backers include Rackspace, NASA, and Dell. There is a similar competing product called CloudStack, by Citrix. The Citrix CloudStack team has integrated a number of OpenStack components into their own product, and has contributed code back to OpenStack as well.

As far as I know, RHEV does not compete with either of those products head-on. RHEV is for managing KVM, and maybe Xen, hypervisors; it is primarily a management frontend for Red Hat's supported hypervisors, while CloudStack and OpenStack are Amazon-like private cloud APIs that support a number of different vendors' hypervisors.

Re:Pot calling kettle (1)

eric_herm (1231134) | more than 2 years ago | (#37616814)

There is also Deltacloud (Aeolus, etc.). Deltacloud aims to manage "clouds" with different backends, like libvirt for Xen, KVM, LXC, VMware, etc.

Re:Pot calling kettle (1)

Anonymous Coward | more than 2 years ago | (#37609182)

Not for long... http://ovirt.org/. Kick-off workshop in November.

Re:Pot calling kettle (0)

Anonymous Coward | more than 2 years ago | (#37609938)

Kick-off workshop in November.

How exciting.

But wait! OpenStack already exists and is being used in live, people-are-using-it cloud platforms. Guess Red Hat can save themselves the trouble and cancel that kick-off workshop. Phew, glad we caught that one in time!

Summary not asinine enough (1)

Anonymous Coward | more than 2 years ago | (#37608162)

Could you try a little harder to gin up some phony controversy around Fedora?

Pankaj Saraf (-1)

Anonymous Coward | more than 2 years ago | (#37608200)

Nice Post. Thanks for sharing this information.

Less insane support? (2, Interesting)

Anonymous Coward | more than 2 years ago | (#37608334)

Maybe it will become part of the RHEL distro now, instead of the insane support contracts they had, at $800/node per year for five email support calls. For an FS that works better on more nodes... we quickly went running when they told us the costs. That kind of support pricing doesn't work well on a cluster.

Re:Less insane support? (1)

gbr (31010) | more than 2 years ago | (#37610468)

We were quoted on two Gluster servers, replicated. The answer was 'no support on Ubuntu', we'd have to switch to their ISO install, and $8500/yr for support.

Re:Less insane support? (1)

GPLHost-Thomas (1330431) | more than 2 years ago | (#37610696)

Truth is, Red Hat only seems to be pissed off that OpenStack is an Ubuntu product. Currently, as far as I know, there's only a SUSE RPM repository. Fedora folks might want to attempt a port, but that won't ever be the upstream distribution of choice. At least three core developers in OpenStack are former Canonical employees, and they still have write access to the Ubuntu repositories. OpenStack has a real open source spirit, with even open source governance. There are lots of companies that already do support for it. We'd be glad to see Red Hat getting involved in OpenStack, but it's not by repeating bullshit they heard a year ago that it's going to happen. Yes, one year ago, OpenStack was only a bunch of press releases. But since Cactus (OpenStack 2011.2), then Diablo (OpenStack 2011.3), things have changed quite a lot. So much that it's hard to keep track and understand all the new features.

By the way, I feel quite alone working on the Debian port (from Ubuntu) of OpenStack. I wouldn't be against some help here; volunteers would be more than welcome. Please just register with the pkg-openstack project on alioth.debian.org if you want to join and contribute. Mainly, the work is testing the rebuild and making sure that what works in Ubuntu also works in Debian, plus the rewrite of init scripts (since Ubuntu uses upstart and Debian uses insserv) and the management/packaging of all dependencies (like python-novaclient, python-webob, python-eventlet, euca2ools, etc.).
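The upstart-to-insserv rewrite mentioned above amounts to turning each upstart job into an LSB-style sysvinit script. A minimal hedged sketch of what one such script looks like (the service name `nova-compute`, daemon path, and pidfile are illustrative, not the actual packaged values):

```shell
#!/bin/sh
### BEGIN INIT INFO
# Provides:          nova-compute
# Required-Start:    $network $remote_fs
# Required-Stop:     $network $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: OpenStack Nova compute node (illustrative sketch)
### END INIT INFO

# Paths below are placeholders for whatever the real package installs.
DAEMON=/usr/bin/nova-compute
PIDFILE=/var/run/nova/nova-compute.pid

case "$1" in
  start)
    start-stop-daemon --start --background --make-pidfile \
      --pidfile "$PIDFILE" --exec "$DAEMON"
    ;;
  stop)
    start-stop-daemon --stop --pidfile "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}" >&2
    exit 1
    ;;
esac
```

The `### BEGIN INIT INFO` block is what insserv reads to order services at boot, which upstart jobs express instead with `start on`/`stop on` stanzas; that translation is the bulk of the porting work.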

Re:Less insane support? (0)

Anonymous Coward | more than 2 years ago | (#37616264)

Meh, every time I've ever used GlusterFS I've never even considered support, but then I've always used it as an alternative to DRBD for small amounts of data. The one time I did try to use it for "real" data (2TB of user home directories) it ended in disaster, so maybe there's something in that $8500/yr that might have helped, there?

Excuse Me. Driver? (0)

Anonymous Coward | more than 2 years ago | (#37611156)

Excuse me. Driver? I know you're busy driving this speeding bus towards the cliff, but could you slow down a little bit and explain to me where we are going?

Seriously, where are we going with this cloud-based computing and storage spaghetti? I know that Google uses massively scaled and distributed systems to great success in the cloud, but most of us aren't building search engines. In fact, in business, most of us aren't building anything to do with the web.

So, how does spreading out the processing, and now the storage, all over creation benefit the average enterprise that is trying to make a unique subset of proprietary and typically legacy applications faster and more available? All the virtualization and distribution that I have seen to date effectively robs us of performance, stealing CPU cycles and increasing disk I/O latency. Stand up a virtualized server on all-new, super-high-performance clusters and cloudy SANs next to a three-year-old dedicated server, and the performance is slightly less than the old installation. How is this progress?

Spreading out the storage not only slows it down but makes true resiliency insanely expensive for less-than-massive data centers, and I won't even bother with the compliance headaches that willy-nilly distribution brings to the table. I'd like a for-realzy answer to this question, not some ethereal hand-waving or declarations that everything must be rewritten in RubyJava.Net and shipped to the cloud or I'll look like a fool.

Re:Excuse Me. Driver? (1)

Courageous (228506) | more than 2 years ago | (#37611862)

I can appreciate the resistance on the vague "cloud" subject, but the criticism of virtualization is strange. You're talking about virtualization robbing the enterprise of CPU cycles when, in today's world of servers starting at 8 cores and going up, the average CPU utilization is something like 2% or less. So it's the bare-metal servers that are robbing the enterprise, by using budget to buy 98% of something that they don't need (or seldom need). This is disregarding the major boon of virtualization to end users: deployment time decreased, often by a factor of 100. Yeah, that number is real, and sometimes it's more like a thousand times or better. I'm not making this shit up.

C//
 

Re:Excuse Me. Driver? (0)

Anonymous Coward | more than 2 years ago | (#37615982)

First, let me be clear that I am after performance first. 100% utilization of the processor for cost purposes is secondary, and many times diametrically opposed to the primary performance goal. Virtualization is intoxicating. As you said, the ability to turn up a new server in minutes rather than hours, with no wait time for hardware acquisition, etc., is awesome. I'll agree on that count.

But, when it comes to getting a transaction inputted, processed and turned around, getting real work done as fast as possible, virtualization adds a cost. Virtualization adds latency. Virtualization adds an excuse that vendors readily leverage to explain away the slowness of their latest bloated iteration in .JavaNet.

My point is that, as with anything, virtualization and distributed storage have their place. Using the right tool for the right job, etc. But there is an irrational stampede towards virtualization and cloudiness that does not make sense. Neither virtualization nor cloud is the "one ring to rule them all" solution that everyone seems to be proclaiming.

In all honesty, I bought into the all-in mentality as well. But reality is beating me down. Performance is important, and virtualization and cloudiness rob performance.

Re:Excuse Me. Driver? (1)

Courageous (228506) | more than 2 years ago | (#37616786)

Well, this is all true. You have to know your workload.

There are workloads where performance minus 15% is still twice the performance the workload needs. Each year going by, the number of workloads for which that is true grows. A lot. It's not that hard to make a database do 30,000 IOPS in a VMware environment, presupposing the right network and storage to support that. 30,000 IOPS covers a hella-lot of workloads (the vast majority of all corporate workloads), but certainly not all workloads, and let's be honest: you're not running Citibank's transaction enterprise on VMware yet.

But you know? There are programmed-trading companies who care a VERY GREAT DEAL about latency running virtualized workloads on InfiniBand. Not many, but they are trying. See where this is going? I look at this whole big picture from the perspective of trend analysis. The trend is: virtualization eats all (standard) workloads. Eventually.

I can see that writing on the wall very clearly.

C//

Re:Excuse Me. Driver? (1)

NoseyNick (19946) | more than 2 years ago | (#37624466)

You're definitely not making it up.
Our physical hardware deployment time, from ordering? probably measured in months. A VM? Minutes.
Virtualisation these days robs you of less than 1% CPU and not much RAM, 50 VMs take a lot less hardware/space/power/cooling than 50 physical hosts, and in fact caching advantages mean they'll usually perform a lot better than 50 physical hosts too.

Re:Excuse Me. Driver? (1)

Courageous (228506) | more than 2 years ago | (#37625696)

Yes; you can start doing things like using fusionio as a storage cache aggregation point; that would be prohibitively expensive to do on 50 physical hosts, but if you do it on just one (or two) virtualization hosts, it hardly costs anything. Cached read IOPS can jump into the 100,000 range.

Likewise with IO fabric. When 40gig Ethernet fabric comes out, we will be able to upgrade a few fat hosts affordably enough, but that's nonsense talk for the 50 physical hosts use case.

We're already 100% 10GE to all our ESX hosts.

GlusterFS? What an unfortunate name. (0)

Anonymous Coward | more than 2 years ago | (#37613774)

For some reason, I read that as Cluster F*ck... Twice.
