
PostgreSQL 9.1 Released

Soulskill posted more than 2 years ago | from the new-and-shiny dept.

Databases 148

With his first posted submission, sega_sai writes "The new version of the open source database PostgreSQL has been released. This version provides many important and interesting features, such as synchronous replication, serializable snapshot isolation, support for per-column collations, K-nearest-neighbor indexing, foreign data wrappers, support for SELinux permission controls, and many others. The complete list of changes is available here."


no comments?! (0)

Anonymous Coward | more than 2 years ago | (#37379682)

I'm kinda shocked. This used to be a nice neighborhood.

Re:no comments?! (0)

Anonymous Coward | more than 2 years ago | (#37379724)

GPL vs. BSD!
Open Source vs. Free Software!

No need to thank me.

Why does there always have to be an update?? (-1, Flamebait)

xTantrum (919048) | more than 2 years ago | (#37379976)

For once I'd like to see software released with no changes, just a README file that says "Why bother, it works fine doesn't it?" So sick of developers adding verbose, usually needless crap to good enough software.

Hate to say it (1)

bigsexyjoe (581721) | more than 2 years ago | (#37380602)

I work with Postgres. I'm "for" Postgres. I think it's great. But you know what? I barely understand what most of these features are. And of the ones I understand, I have no plans to use any.

Hate to be a troll, but databases are boring. And Postgres is boring. The database people I know are boring. The guy with the most Postgres knowledge I know is someone I respect as a worker, but also the most passionless person I have ever met. No one wants an exciting data storage product. They want it to be "mature" and "reliable" and "predictable." I bet that the better Postgres gets, the more boring it'll get. And these exciting "noSQL" databases will all die.

Re:Hate to say it (3, Insightful)

X0563511 (793323) | more than 2 years ago | (#37380618)

What's your point?

Are you going to tell me word processors are boring, spreadsheets put you to sleep, and calculators suck the life out of the party?

Re:Hate to say it (1)

bigsexyjoe (581721) | more than 2 years ago | (#37380986)

My point is that he shouldn't be surprised by the lack of comments. It just isn't the type of announcement people get excited about.

Re:Hate to say it (1)

roman_mir (125474) | more than 2 years ago | (#37381204)

It's just not true.

I really do enjoy using PostgreSQL, probably much more than I should, and much more than I ever enjoyed using Oracle or DB2 or MSSQL (or Progress 4GL, aaaaaaa, kill me) in the years I had to use them.

Re:Hate to say it (1)

holdme (2454486) | more than 2 years ago | (#37381214)

Are you sure you are talking about a database and not about a blow up doll you named postgres?

Re:Hate to say it (0)

Anonymous Coward | more than 2 years ago | (#37381218)

Clearly rubbish. You can write "BOOBIES" upside down on a calculator.

vs Oracle? (4, Interesting)

pak9rabid (1011935) | more than 2 years ago | (#37379684)

So...how does PostgreSQL compete with Oracle nowadays as far as features go (specifically, spatial, and data-guard-like replication)? Anybody here tried making the switch?

Re:vs Oracle? (1)

iamvego (785090) | more than 2 years ago | (#37379754)

Well, it's always been able to do basic things like LIMIT and multi-row insert statements, and it correctly treats empty strings as not being null... which Oracle can't do for some dumb reason.

Re:vs Oracle? (2)

Talderas (1212466) | more than 2 years ago | (#37379832)

That dumb reason is backwards compatibility for their customers due to Oracle being older than the SQL standard.

Re:vs Oracle? (1)

h4rr4r (612664) | more than 2 years ago | (#37379896)

So why not make the older behavior optional, rather than continuing this broken functionality?

Re:vs Oracle? (2)

Talderas (1212466) | more than 2 years ago | (#37380130)

Under the SQL standard a NULL string is different from a zero length string (""). It's subtle but it's a difference.

Oracle does not differentiate between the two. A zero-length string behaves identically to a NULL. If a customer wrote an application where ("") equated to a NULL then fixing Oracle to differentiate between the two would break those applications.

If they make it an optional feature Oracle needs to make it an opt-out feature in order to maintain backward compatibility while still allowing their customers the ability to update.

In other words, it's easier to ignore the standard on this one thing than to screw with things for their customers.
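For anyone who hasn't been bitten by this, a minimal sketch of the difference (hypothetical table t):

```sql
-- Standard SQL / PostgreSQL: the empty string is a value, NULL is "unknown"
CREATE TABLE t (col varchar(10));
INSERT INTO t VALUES (''), (NULL);

SELECT count(*) FROM t WHERE col = '';     -- 1 in PostgreSQL
SELECT count(*) FROM t WHERE col IS NULL;  -- 1 in PostgreSQL

-- Oracle stores '' in a VARCHAR2 column as NULL, so there both rows
-- are NULL: the first query matches nothing, the second matches both.
```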

Re:vs Oracle? (5, Insightful)

h4rr4r (612664) | more than 2 years ago | (#37380178)

It is easier, it is still wrong. It should be opt-in to prevent people from using it for new installations.

It is not a subtle fucking difference. It is a huge big honking difference. Either you don't know the information which is null or you know it to be "".

Re:vs Oracle? (1)

Outtascope (972222) | more than 2 years ago | (#37381020)

It's subtle in that people coming to Oracle from a standards-compliant database usually have this one bite them in the ass big time without any idea that it was coming. I hate this behavior, and the use of NVLs all over the damn place. Outside of cost, it is one of my biggest beefs with moving to Oracle. It significantly increased the complexity of moving a large application from MySQL to Oracle. Every single query had to be re-written looking for places where this would get you. It is just plain dumb behavior.

Re:vs Oracle? (2)

Bengie (1121981) | more than 2 years ago | (#37380644)

It's about as different as a null pointer vs a pointer pointing to an empty string object.

Re:vs Oracle? (1)

fuzzytv (2108482) | more than 2 years ago | (#37381078)

Because then it would be a bit easier to port the applications to other databases?

Plus this particular "feature" is used in so many places in the current code base (and that's a huge amount of PL/SQL code) that it's almost impossible to fix. Plus it's actually a bit funnier, because the exact behaviour depends on whether you use CHAR or VARCHAR2 and whether you're in SQL or PL/SQL. And it's not the only funny feature in Oracle.

Sometimes I have nightmares about the reasons that led the developers to implement it this way.

Re:vs Oracle? (2)

amorsen (7485) | more than 2 years ago | (#37379996)

Backwards compatibility does not prevent Oracle from supporting FETCH FIRST. See Limiting result sets [arvin.dk]. Row value constructors wouldn't be a problem to handle either, especially since Oracle already has the functionality just with a silly syntax.

Re:vs Oracle? (2)

dkleinsc (563838) | more than 2 years ago | (#37380248)

With regards to LIMIT, Oracle does have a couple of equivalents:
1. WHERE rownum < end

2. select * from (select S1.*, ROWNUM rn FROM ( real query ) S1 WHERE ROWNUM <= end) WHERE rn >= start

Obviously, the second syntax is a bit painful, but it works, and it has the important behavior that it doesn't calculate any of the rows that aren't returned (as Postgres says it does for LIMIT...OFFSET in 8.1, see the docs [postgresql.org]). This is important when you're selecting items 4990-5000 of a 10,000 row result set.
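For comparison, the PostgreSQL side of the same paging, sketched against a hypothetical items table (and, as noted above, OFFSET still computes the skipped rows):

```sql
-- rows 4991-5000 of a sorted result set
SELECT *
FROM items
ORDER BY id
LIMIT 10 OFFSET 4990;
```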

I've used both Postgres and Oracle - they're both pretty good at their jobs; both have their quirks, upsides, and downsides.

Re:vs Oracle? (1)

Bengie (1121981) | more than 2 years ago | (#37380744)

You complain about minor things. I have to import data from Oracle servers and for some reason, Oracle loves to dump out rows with different column counts.

I have to deal with a lot of customers with a lot of different versions of Oracle, and they all seem to produce CSV files that I could not import because the header says 14 columns with their names, but some rows have 5 columns and some rows have 20 columns. Pulled those numbers out of my ass, but that's what happens.. A LOT.. F'n annoying as hell.

Even running select statements against the DB gives me issues like that.. WTF?!

I don't admin Oracle boxes, I just request exports from Oracle DBs from customers and this is what happens quite often when customers try to send us data.

Always fun wasting a week of back-and-forth with a customer just trying to get consistent row schemas.

Re:vs Oracle? (2)

fuzzytv (2108482) | more than 2 years ago | (#37381146)

I doubt that's an Oracle issue; my guess is they're using a custom-developed tool to export the data and it's buggy. I'm dealing with a lot of data exported from Oracle (CSV, columnar, ...) and I've never had this problem. External tables actually made exporting even easier.

So while I'm a PostgreSQL fan, let's not blame Oracle for the mistakes of others.

Re:vs Oracle? (1)

rasherbuyer (225625) | more than 2 years ago | (#37381166)

Let me get this straight. You're saying that:

select col1,col2,col3
from table;

won't always return 3 columns? What are you smoking?

Re:vs Oracle? (1)

jellomizer (103300) | more than 2 years ago | (#37379766)

I expect that most wouldn't, because they have already spent all the money on an Oracle license.
However, my experience is that most shops (including Oracle shops) don't use anywhere close to all those cool features they provide.

They just want a SELECT * FROM TABLE WHERE VALUE='TEXT'

Re:vs Oracle? (0)

Anonymous Coward | more than 2 years ago | (#37379858)

Actually, several years ago I helped one company switch from Oracle to PostgreSQL. They saved a bunch of money because they would have needed upgrades and additional licensing, while they dumped their older, slower systems. At the same time they also switched from Solaris to Linux.

Re:vs Oracle? (0)

Anonymous Coward | more than 2 years ago | (#37379888)

Frankly, if you have already bought Oracle, SQL Server, or DB2, you are not likely to switch to PostgreSQL. But if you are using SQLite, MySQL or similar and need to scale up, then PostgreSQL may be a good enough solution rather than going up to Oracle or the other big SQL servers.

Then again Oracle wants users of MySQL to switch to Oracle DB ;)

Re:vs Oracle? (1)

trcollinson (1331857) | more than 2 years ago | (#37380002)

That makes a lot of really big assumptions. For example, in the case of my company which may switch away from Oracle, we have ongoing licensing costs which means we haven't "bought" Oracle, we are "buying" Oracle, and continuing to do so over and over again, every year.

Also, as another person mentioned, we use only a small percentage of the actual features that Oracle provides. For us, and I am assuming a lot of others who are paying up the ying yang for licenses, switching to a PostgreSQL solution makes a lot of sense. Really all we want and need is a stable and cost-effective environment. (Now, I must say we are looking at enterprise PostgreSQL support, which isn't cheap, and far from free. But still a significant savings over Oracle's licensing fees.)

Not Sqlite (2)

wandazulu (265281) | more than 2 years ago | (#37380020)

Just one small nitpick...sqlite is really meant as an embedded database into an application, it's not a full-fledged database like any of the others mentioned (it doesn't have networking, for example). I suppose you could be scaling up from an embedded sqlite db, but that suggests your application has gotten so big that an external database is necessary.

It's also one of the backing store options for Apple's Core Data framework.

Re:vs Oracle? (1)

greg1104 (461138) | more than 2 years ago | (#37380508)

There's a sunk cost fallacy here. I regularly convert companies from Oracle to PostgreSQL, financing the project out of savings in the recurring annual maintenance/support costs that Oracle applies. Just because you've already spent a lot of money on a commercial database, that doesn't mean you can't cost justify it based on the recurring overhead.

Re:vs Oracle? (1)

fuzzytv (2108482) | more than 2 years ago | (#37381296)

Most people don't realize that commercial software is usually licensed, not sold. That's why they don't see the consequences (and it's not just about costs).

Re:vs Oracle? (1)

Cutting_Crew (708624) | more than 2 years ago | (#37380552)

If you are coming to the end of your yearly license with Oracle, why would you not switch to Postgres and save some money? Sure, you have to spend time and money switching the tables over to a different database and testing that, but I am guessing that will be cheaper than paying for another yearly license for Oracle, and another year... etc etc.

Re:vs Oracle? (1)

X0563511 (793323) | more than 2 years ago | (#37380594)

If you're using MySQL and need to scale further for some reason, then just use MySQL Cluster. There's no need to change entirely.

Re:vs Oracle? (1)

fuzzytv (2108482) | more than 2 years ago | (#37381168)

Frankly, if you have bought Oracle, you're more than aware about their licensing fees. And how ridiculous that gets once you need to use VM, or when you need more CPUs etc. My experience is that the businesses that were already hit by an Oracle sale are looking for other solutions - and PostgreSQL is very popular among them.

Re:vs Oracle? (0)

Anonymous Coward | more than 2 years ago | (#37379936)

It varies from individual feature to feature.

I'm not particularly familiar with data guard and its relative merits compared to Postgres' hot standby but in terms of spatial I'd have said Postgres is far ahead of Locator, about on a par with Spatial.

One area where I significantly prefer Postgres is that you are not limited to PL/SQL and Java for writing SPs. For instance, the ability to write in R is a godsend for me compared to struggling with the pretty poor data mining features Oracle has built in.

Re:vs Oracle? (5, Informative)

discord5 (798235) | more than 2 years ago | (#37380046)

So...how does PostgreSQL compete with Oracle nowadays as far as features go (specifically, spatial, and data-guard-like replication)?

I can't speak for Oracle, but if you're interested in spatial stuff you should have a look at PostGIS [refractions.net]. We've recently been using it to store tons (magnitude of several million) of points and polygons, and we're very happy with it. We've got about a hundred simultaneous users connecting to the WFS in peak hours, and it bears the load pretty well if you properly index your tables. I can't speak much for updates, since our database updates in bursts (we import new data every X weeks). I can't go too much into detail about the type of data other than that it's polygons, points, and mostly distance calculations and intersections.
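For a flavor of what that looks like, a minimal sketch (hypothetical poi table; typed geometry columns are newer PostGIS syntax, older releases use AddGeometryColumn instead):

```sql
CREATE TABLE poi (
    id   serial PRIMARY KEY,
    name text,
    geom geometry(Point, 4326)   -- WGS84 points
);
-- a GiST index is what makes spatial predicates fast
CREATE INDEX poi_geom_idx ON poi USING gist (geom);

-- points within 1000 m of a location (the geography cast gives meters)
SELECT name
FROM poi
WHERE ST_DWithin(geom::geography,
                 ST_MakePoint(4.35, 50.85)::geography,
                 1000);
```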

We briefly looked at Oracle Spatial for a while, looked at the pricetag and the project budget and made the decision to try the PostgreSQL+PostGIS combination and see how far it'd get us. We were pleasantly surprised. I had some experience with PostgreSQL before in the 7.X releases in a previous lifetime but in the end wasn't all that pleased with it, especially on busy servers. Nowadays, I'm running 9.0 and I'm pretty much content about it. Replication wise we've got a PITR setup up and running which is more than enough for our purposes. It's pretty well documented, but be sure to test everything, etc etc etc... It doesn't quite hold your hand when you're setting it up, so double check everything.

I'm sure that there will be people on here that have more extensive experience with PostgreSQL (and Oracle) to fill you in on the juicy details, but in general I'm pretty pleased with it so far. It scratches my particular itch, and does so without all too much headaches.

Re:vs Oracle? (1)

pak9rabid (1011935) | more than 2 years ago | (#37380224)

Thank you! Your response was very helpful. I've been eye-balling PostGIS and was wondering: a.) whether it's something somebody would bring up if it wasn't specifically mentioned in a question regarding spatial, and b.) how it performs. Based on your response, it sounds like it's pretty usable.

Re:vs Oracle? (1)

Anonymous Coward | more than 2 years ago | (#37380892)

We've been using it at my org for a lot of spatial data as well, though we're still stuck on MS SQL for production.

One of the big advantages it appears to have over other spatial databases is that there are a lot of spatial apps out there that will nicely integrate with PGSQL, like QGIS, Grass, MapServer. There are plenty of tool suites as well, like FWTools (for Windows) or GDAL/OGR/GEOS for *nix.

In contrast, though MS has had spatial support for a couple of years now, it's treated like a third-rate citizen - no official tool for importing shapefiles, no support in Integration Services, Entity Framework, RIA Services, Silverlight, etc etc. They do have an attractive map component for Reporting Services, but it's buggy at best.

Re:vs Oracle? (1)

fuzzytv (2108482) | more than 2 years ago | (#37381316)

I don't think it would be mentioned - actually it's a separate package built on top of PostgreSQL (thanks to the ability to write custom data types etc).

Re:vs Oracle? (1)

Unequivocal (155957) | more than 2 years ago | (#37381114)

++ on this comment. My experience is similar. Postgres and PostGIS are very reliable, very fast, and scale well... if you set them up right. There isn't as extensive a commercial support network for them as for Oracle (duh), but there are commercial options, and the free online communities are amazingly open, supportive and helpful. The GiST indexes, which enable (I believe) a lot of spatial operations to occur in a timely manner, are really impressive, and learning how they work is a nice little CS continuing-education course in itself.

Re:vs Oracle? (5, Interesting)

greg1104 (461138) | more than 2 years ago | (#37380480)

Lots of companies are converting from Oracle Spatial to PostgreSQL plus PostGIS because it's faster and has better compliance to GIS standards. The text of the talk isn't available, but the FAA Airports GIS and PostgreSQL [postgresqlconference.org] presentation was a typical report I was in the audience for. The FAA's first conversion happened very quickly: just export their data in a standard format, import into PostgreSQL, and tweak some queries. The result worked so much better that they've standardized on PostgreSQL for spatial applications at the FAA now. Internal projects needing a spatial database have to justify why they want the budget for Oracle Spatial, and it's default deny unless you have a really good reason.

The addition of synchronous replication to 9.1 has made it a pretty even match for Oracle's Data Guard now. The main bonus is that you can control the integrity level you want at the transaction level. So you can have a database with a mix of important data (only considered safe when on two nodes) and fast, best attempt eventual consistency data, all in one place. Nothing else can replace Oracle at the top end while still having a lean enough mode to be competitive with NoSQL database [pgcon.org] when integrity isn't the top priority.
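Roughly what that per-transaction control looks like via 9.1's synchronous_commit setting (table names are hypothetical):

```sql
-- important data: commit returns only after the sync standby has it
BEGIN;
SET LOCAL synchronous_commit = on;
INSERT INTO payments (id, amount) VALUES (1, 100.00);
COMMIT;

-- best-effort data: don't wait for the standby (or even the local WAL flush)
BEGIN;
SET LOCAL synchronous_commit = off;
INSERT INTO clickstream (url) VALUES ('/home');
COMMIT;
```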

We convert Oracle installs to PostgreSQL all the time at my day job [2ndquadrant.com]. The main obstacles I keep seeing that don't have simple solutions are 1) using a lot of PL/SQL, 2) differences in query handling, such as OUTER JOIN behavior or reliance on optimizer hints, and 3) can't limit the resources used by individual users easily in PostgreSQL yet. I actually have a design outline for how to solve (3)...would only cost a fraction of a typical Oracle license to sponsor that feature. EnterpriseDB's version of Oracle comes with PL/SQL compatibility, but only in a commercial product that lags behind the open-source releases--and buying from them just switches which vendor you're locked into.

Re:vs Oracle? (1)

Unequivocal (155957) | more than 2 years ago | (#37381128)

Good post. Errata -- I think you meant to write "EnterpriseDB's version of [[Postgres]] comes with PL/SQL compatibility, but only in a commercial product that lags behind the open-source releases--and buying from them just switches which vendor you're locked into."

Re:vs Oracle? (1)

0xA (71424) | more than 2 years ago | (#37380896)

Using PostGIS and replication here; works very nicely. Our biggest problem was that we have billions of rows that can't be locked for a huge high-traffic mobile app. Using a round-robin load-balanced pool of Postgres servers in hot-spare mode as read-only DBs solved a lot of issues.

Re:vs Oracle? (2)

fuzzytv (2108482) | more than 2 years ago | (#37381248)

The streaming replication is generally equal to Oracle DataGuard (physical standby). The hot_standby actually gives you about the same as Active DataGuard, i.e. the ability to run read-only queries on the standby for free (you have to pay for that with Oracle). With Oracle you'll get a management console to handle all this, with PostgreSQL you have to set it up manually (5-minute task), but there are several tools that may help (e.g. repmgr).
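For reference, that 5-minute manual setup amounts to a few configuration lines (hostnames and the standby name are placeholders):

```
# primary: postgresql.conf
wal_level = hot_standby
max_wal_senders = 3
synchronous_standby_names = 'standby1'   # leave empty for async replication

# standby: postgresql.conf
hot_standby = on                         # allow read-only queries

# standby: recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=repl application_name=standby1'
```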

Spatial... although it's not a built-in feature, there's PostGIS (www.postgis.org), a great package for managing geospatial data.

There are companies that are migrating from Oracle, but they don't want to go public for good reasons. I know there were some case studies about how Sony replaced Oracle with EnterpriseDB - although it's mostly a marketing mumbo jumbo.

Replication Drawback (1)

Anonymous Coward | more than 2 years ago | (#37379890)

If you wish to use their replication implementation to increase performance, you will probably have to look elsewhere, I'm afraid. In the event that your primary server fails, you are required to promote one of the existing slaves to be the new primary server and all other slaves will require a fresh data dump from the new master. Maybe in another year (when 9.2 gets released) it will be ready for the masses.

Re:Replication Drawback (2)

greg1104 (461138) | more than 2 years ago | (#37380598)

It's not trivial to figure out, but we've been deploying PostgreSQL 9.0 without the problem you describe (must do a fresh dump from the master) for a while now. The repmgr [2ndquadrant.com] software we've released takes care of all the promotion trivia. Worst-case, unusual situations can require you use a tool like rsync to make an out of date standby node into a copy of the new master. That's not the expected case though.

Re:Replication Drawback (1)

fuzzytv (2108482) | more than 2 years ago | (#37381372)

Yup, and things will get a bit more interesting thanks to the cascading replication.

Re:Replication Drawback (1)

Lennie (16154) | more than 2 years ago | (#37380882)

A possible alternative is to use pgpool II, from the webpage:

- Automated failover. If one of the two PostgreSQL goes down, pgpool-II will automatically let remaining node take over and keep on providing database service to applications.
- Query dispatching and load balancing. In streaming replication mode, applications need to carefully chose queries to be sent to standby node. Pgpool-II checks the query and automatically chose primary or standby node in sending queries. So applications need not to worry about it.
- Online recovery. Recover failed node without stopping pgpool-II and PostgreSQL.

Obviously there are also commercial providers of PostgreSQL which have added their own features.

Re:Replication Drawback (1)

fuzzytv (2108482) | more than 2 years ago | (#37381350)

That is not true. If you promote the slave that's ahead of all the other slaves, then the other slaves can just reconnect to the new master. Tools like repmgr can handle this for you.

And no one actually says you have to do a completely fresh base backup. Ever heard about rsync?

The only decently sane SQL database (0)

Anonymous Coward | more than 2 years ago | (#37379908)

I've tried a few SQL databases, and my favorite of them all is Postgres. With Postgres I don't get nearly as many syntax/logic surprises as with SQL Server.. and I don't get nearly as many performance surprises as with MySQL.

I mean, for some reason Microsoft couldn't give us LIMIT/OFFSET clauses for the trivial operation of implementing paging, leaving us instead to do sub-queries. What the hell? That's like making a programming language with arrays but leaving out the indexing operator.

Re:The only decently sane SQL database (1)

amorsen (7485) | more than 2 years ago | (#37380048)

I hate MSSQL as much as anyone, but it does (in later versions at least) support cursors and ROW_NUMBER. It is a bit silly to not support FETCH FIRST in 2011, but hey, it's doing better than Oracle.

Re:The only decently sane SQL database (1)

djdanlib (732853) | more than 2 years ago | (#37380098)

Actually, MSSQL11 will support FETCH FIRST.

Check it out (you'll have to scroll a couple pages down): http://www.codeproject.com/KB/database/Denali_Tsql_Part_2.aspx#3.3 [codeproject.com]

Re:The only decently sane SQL database (1)

djdanlib (732853) | more than 2 years ago | (#37380118)

Oops, I forgot to also mention that it will support LIMIT/OFFSET as well, which is noted in the same link. Sorry for double-post.

not excited (4, Interesting)

roman_mir (125474) | more than 2 years ago | (#37380004)

I am not excited about any of these changes unfortunately, they are somewhat specialized, though having synchronous replication and serializable transaction isolation sounds more useful than other stuff.

But there are real things that are missing. Most obvious is distributing a single SQL request across parallel processes or threads to speed up query execution on multi-core systems (which are all systems today). The other is the planner's attempts to estimate execution cost failing in various cases, like the really sad cases of completely mishandling the mergejoin estimates, which then forces people to set enable_mergejoin to false. It's a sledgehammer approach, but otherwise things that could execute in a few milliseconds can take tens of seconds or even minutes instead.

There are so many ways to improve performance and really kick it up, and instead there are more features added. I think database performance is now more important for PostgreSQL than features (unless this means introducing parallelization of single SQL requests.)

Otherwise it's a good database, it already provides tons of features. The one weird thing that I find though, is that for replication or hot stand by or just for creating a dynamic backup, the segments that are written to the disk are always of fixed size.

You can modify the size, which is 16MB by default, but only when you configure the source code before compiling it: configure --with-wal-segsize=1 sets the segments to 1MB, which lets the second drive last that much longer if all you are doing is using a second drive to keep a dynamic backup. (That asynchronous backup method is, by the way, the problem they are solving with "synchronous replication": either these segments fill up and then get written to disk, or you wait until the time set by checkpoint_timeout expires for a segment to be written.) I imagine handling fixed-size segments is easier than generating segments sized to exactly the amount of data produced in a time period, but it's a waste of disk.

The other big thing that I would love to have in a database is ability to scale the database to multiple machines, so have a logical database span multiple disks on multiple machines, have multiple postgres processes running against those multiple disks, but have it all as one scalable database in a way that's transparent to the application. That would be some sort of a breakthrough (SAN or not).

Re:not excited (2)

simcop2387 (703011) | more than 2 years ago | (#37380262)

The other big thing that I would love to have in a database is ability to scale the database to multiple machines, so have a logical database span multiple disks on multiple machines, have multiple postgres processes running against those multiple disks, but have it all as one scalable database in a way that's transparent to the application. That would be some sort of a breakthrough (SAN or not).

The big reason you don't find that (and why it would be a tremendous breakthrough) is that it is currently believed to be impossible. Have a look at the CAP Theorem. http://en.wikipedia.org/wiki/CAP_theorem [wikipedia.org]

Re:not excited (0)

Anonymous Coward | more than 2 years ago | (#37380376)

Try Cassandra: http://cassandra.apache.org/

Re:not excited (3, Interesting)

fuzzytv (2108482) | more than 2 years ago | (#37381468)

Cassandra is just one of many NoSQL databases, but yes - NoSQL can be a way to work around the CAP theorem in some cases.

But in many cases it's not a solution. If the data are relational, if you need full ACID, etc. then ditching "consistency" is not a choice. There are projects to build PostgreSQL clustering solutions, that may resemble RAC a bit, although none of them uses shared disk (so each instance needs a separate disk). Let's mention PGCluster, PGCluster II or Postgres-XC (aiming to build write-scalable cluster, something Cassandra does in the NoSQL world). Sure, all this has to follow the CAP theorem.

Re:not excited (1)

vlm (69642) | more than 2 years ago | (#37380432)

The big reason you don't find that and it would be a tremendous breakthrough, is that it is currently believed to be actually impossible to get that. Have a look at the CAP Theorem. http://en.wikipedia.org/wiki/CAP_theorem [wikipedia.org]

Most CAP arguments seem to rely on some combination of not understanding the concepts behind token ring networks, not understanding distributed hash tables, not tolerating latency, and/or trying to scale to a very small number (like 2) instead of a large prime number of majority voting servers. Or it assumes ENIAC level MTBF rates of individual voting devices and vote counters, etc.

Don't get me wrong, CAP is a good theoretical argument, and educational to think about; it just doesn't apply to many real-world cases. Kind of like saying theoretically unbreakable encryption cannot exist so we shouldn't even try crypto... however, crypto schemes that take multiple universe lifetimes to crack are nonetheless useful, despite "perfection" being impossible.

Re:not excited (5, Interesting)

fuzzytv (2108482) | more than 2 years ago | (#37381544)

Reliability has probably improved since ENIAC, but the question is still "when is it going to fail", not if. Because it is going to fail - it may be a drive, CPU, PSU, a network switch, an AC unit, the whole AWS data center... something is going to fail.

The beauty of the CAP theorem as I see it is that it says "You can't get all three at the same time, face it." If you don't need strong consistency (and with most apps you don't), then ditch it and it'll be much easier and cheaper to build and scale the system. I'd say once you realize this inner beauty, it clears your mind - something like a Zen of distributed computing.

should be excited - what you want already exists (2)

ron_ivi (607351) | more than 2 years ago | (#37380980)

The other big thing that I would love to have in a database is ability to scale the database to multiple machines, so have a logical database span multiple disks on multiple machines, have multiple postgres processes running against those multiple disk

This exists for Postgres in the form of Yale's HadoopDB project: http://db.cs.yale.edu/hadoopdb/hadoopdb.html [yale.edu] http://radar.oreilly.com/2009/07/hadoopdb-an-open-source-parallel-database.html [oreilly.com]

HadoopDB consists of Postgres on each node (the database layer), Hadoop/MapReduce as a communication layer that coordinates the multiple nodes each running Postgres, and Hive as the translation layer. The result is a shared-nothing parallel database that business analysts can interact with using a SQL-like language. [Technical details can be found in the paper linked above.]

as well as for commercial forks of Postgres such as EMC's GreenPlum.

Re:not excited (3, Informative)

Anonymous Coward | more than 2 years ago | (#37380288)

I am not excited about any of these changes, unfortunately; they are somewhat specialized, though synchronous replication and serializable transaction isolation sound more useful than the other stuff.

Synchronous replication is a must-have for many. In many cases this single missing feature was preventing adoption of PostgreSQL, as many applications require synchronous replication support.

Per column collation is another feature which is commonly required in complex applications; especially if they are multilingual capable.

Extensions support is very important and will benefit anyone who uses any contrib or third-party extension for PostgreSQL. Unless your database work starts and stops at the most basic of features, this is something you'll likely use, if not today, then tomorrow.

Serializable isolation level support, for many, is a critical feature. I agree most people likely won't benefit, but it's an important feature to round out support for all database applications and loads. It means PostgreSQL is now an option for workloads it previously wasn't.

Unlogged tables are extremely important, super cool, and very powerful. Many applications previously required time-consuming kludges to work around this missing feature. For many applications this is a make-or-break feature from a performance perspective.

For many geo-location applications, and even beyond that, nearest-k is an extremely important feature which not only simplifies code, but also reduces development and testing effort while providing a nice performance boost.
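For anyone who hasn't met the feature: KNN indexing lets the database answer "the k rows closest to this point" straight out of an index. The question it answers is the brute-force computation below (a plain-Python sketch with invented sample data), except without scanning and sorting every row:

```python
def k_nearest(points, target, k):
    # Brute force: compute every distance, sort, keep the k closest.
    # A KNN index returns the same rows without touching every point.
    tx, ty = target
    return sorted(points, key=lambda p: (p[0] - tx) ** 2 + (p[1] - ty) ** 2)[:k]

stores = [(0, 0), (3, 4), (1, 1), (10, 10)]
print(k_nearest(stores, (0, 0), 2))  # -> [(0, 0), (1, 1)]
```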

And of course, the SE Linux stuff is very important for customers (you'd be surprised how many there are, including the NSA) who consider this the most important feature of the release.

But there are real things that are missing. The most obvious is distributing one SQL request across parallel processes or threads to speed up query execution on multi-core systems (and all systems are multi-core today).

You want EnterpriseDB. It already has this. As for the rest of your rant, pragmatically, it just doesn't happen very often. Assuming you're not just trolling, what did the PostgreSQL performance guys have to say about your issue? Generally these types of issues are considered planner bugs (unlike with other SQL vendors) and the developers will gladly create a fix if appropriate. But 99% of the time, this is just one of those things people love to lie and troll about.

There are so many ways to improve performance and really kick it up, and instead more features are added. I think database performance is now more important for PostgreSQL than features (unless a feature means introducing parallelization of single SQL requests).

Except that PostgreSQL is already one of the fastest databases available for 90% of likely workloads. Performance, in and of itself, typically isn't the primary focus right now simply because, for the vast majority of users, it already spanks, or is at least on par with, most every other option.

You can modify the size, which is 16MB by default, but only when you configure the source code before compiling it: configure --with-wal-segsize=1 sets the segments to 1MB, which lets the second drive last that much longer if all you are doing is using it to keep a dynamic backup. With that asynchronous backup method (the problem they are solving with "synchronous replication"), either a segment fills up and is then written to disk, or you wait until archive_timeout expires for a partial segment to be written out. I imagine handling fixed-size segments is easier than generating segments exactly as large as the amount of data produced in a time period, but it's a waste of disk.

Pragmatically, it's just not an issue. Which is why it's a compile-time option and not a runtime option.

Re:not excited (2)

roman_mir (125474) | more than 2 years ago | (#37380420)

people love to lie and troll about.

- more people saying that I am trolling, yet again. Yet again, incorrectly, I don't troll ever. [postgresql.org]

Re:not excited (0)

Anonymous Coward | more than 2 years ago | (#37380922)

Defensive much? I didn't say you were trolling. I gave you the benefit of the doubt that you were not trolling, and explained why it was reasonable to say this is one of those things people constantly troll about. Your OP was somewhat splitting hairs and creating issues where generally there are none, so while it was possible you were trolling, you still received the benefit of the doubt. Which says a lot, considering some of the content in your OP. This in turn makes your latest response pretty fishy. It's needlessly defensive and evasive. I'll also add, I was fully aware of the bug report before you posted a link to it.

Now then, having said that, most people simply don't trigger the bug. And since you didn't really answer the questions posed, is it now safe to assume you read a bug report and are now trolling with that? That's the standard anti-PostgreSQL troll tactic these days. As I originally stated, the vast majority of people simply don't suffer from that bug. Did you actually suffer from the bug, or did you read a bug report and declare PostgreSQL broken in spite of the fact that few users actually experience an issue from the bug, let alone trigger it?

And to further clarify, yes, whining about things which are not issues for the general user base, or really, not issues at all, absolutely does have an air of trolling. Especially when it's presented in such a way as to suggest that it IS an issue, for people who may read your post and not know any better. At best, it's disingenuous. Regardless, it's certainly not likely to endear you to the better informed in the audience.

Re:not excited (1)

vlm (69642) | more than 2 years ago | (#37380298)

The other big thing that I would love to have in a database is ability to scale the database to multiple machines, so have a logical database span multiple disks on multiple machines, have multiple postgres processes running against those multiple disks, but have it all as one scalable database in a way that's transparent to the application. That would be some sort of a breakthrough (SAN or not).

You need a middle-ware machine that understands enough SQL to send the correct request to the most optimized box, and a fallback slushware box to handle anything you didn't figure out how to manually optimize on the speedy indexed boxes.

I realize this is a postgresql article but at one time I had a herd of little mysql boxes (data was replicated outside mysql) and each had different custom indexes set up that matched certain queries. So depending on which set of disk drive filling indexes fit the best, my middle box shoved them out to the right box. Using horrible regexes. Icky, but it worked. Obviously this is simpler with lots of strange reads, not many joins, and few inserts.

A general generic extendable solution would probably take some kind of AI, or have a horrific overhead cost. Or both.
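The regex routing described above might look something like this minimal sketch (hypothetical patterns and host names; real middleware would also have to handle failover and connection pooling):

```python
import re

# Each backend box carries custom indexes matching certain query shapes;
# anything unrecognized falls through to the unoptimized "slushware" box.
ROUTES = [
    (re.compile(r"\bWHERE\s+customer_id\s*=", re.I), "db-custidx"),
    (re.compile(r"\bWHERE\s+created_at\s+BETWEEN\b", re.I), "db-dateidx"),
]
FALLBACK = "db-slushware"

def route(sql):
    for pattern, host in ROUTES:
        if pattern.search(sql):
            return host
    return FALLBACK

print(route("SELECT * FROM orders WHERE customer_id = 42"))  # -> db-custidx
print(route("SELECT count(*) FROM orders"))                  # -> db-slushware
```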

Re:not excited (2)

roman_mir (125474) | more than 2 years ago | (#37380474)

I address all of these shortcomings that I am writing about here within the application.

Of course it's much simpler for me to do from the application perspective, because I know what the business logic is, so I break SQLs into pieces that can run in parallel, then I execute them over multiple connections against the database (a thread or process per connection), and then I merge the data as it comes back. This speeds up execution dramatically, far beyond what a single serial SQL statement can do.

As to adding more machines to the cluster - again, at the application level I have to split the data logically into separate instances, and the application knows where to go for different segments of data, so this is not transparent to the application of course; it must know where the different data is.
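The fan-out/merge pattern being described can be sketched like this (plain Python; `run_partition_query` is a made-up stand-in for running one piece of the split SQL over its own PostgreSQL connection, e.g. via psycopg2):

```python
from concurrent.futures import ThreadPoolExecutor

def run_partition_query(store_id):
    # Stand-in for e.g. "SELECT sum(total) FROM sales WHERE store_id = %s",
    # executed on a dedicated connection; here we just fake a partial sum.
    return sum(range(store_id * 10, store_id * 10 + 10))

def parallel_aggregate(store_ids, workers=4):
    # One logical query, split by store, run concurrently, merged in the app.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(run_partition_query, store_ids))

print(parallel_aggregate([0, 1, 2, 3]))  # -> 780
```

The merge step is trivial for sums; joins or ordered results need more application code, which is exactly the complexity the poster is accepting in exchange for the speedup.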

Re:not excited (-1)

Anonymous Coward | more than 2 years ago | (#37380550)

lol wtf are you talking. You clearly work in mom and pops shops. Derp derp database. lol, just lol.

Re:not excited (0)

h4rr4r (612664) | more than 2 years ago | (#37380590)

What a rational, well argued, and sophisticated comment that adds immense value to the conversation at hand.

Re:not excited (0)

Anonymous Coward | more than 2 years ago | (#37380730)

distributing of one SQL request into parallel processes or threads

This is very useful for some specific use cases like data warehouse applications, but in the more common OLTP case, where there are many simultaneous simple queries, this isn't a big deal because the multiple queries already use the multiple CPUs/cores. Always nice to see more functionality, but this may be hard to implement with their multi-process architecture.

really sad cases of completely mis-handling of the mergejoin estimates, which then forces people to set enable_mergejoin to false

Have you tried increasing your statistics target? Give the planner better data and it'll give you a better plan. Also, you may get better results by fiddling with the planner cost constants instead of turning merge joins off entirely.

The other big thing that I would love to have in a database is ability to scale the database to multiple machines, so have a logical database span multiple disks on multiple machines, have multiple postgres processes running against those multiple disks, but have it all as one scalable database in a way that's transparent to the application.

Again, great for stuff like data warehouse applications, but not a huge issue for OLTP, where normal replication solutions are good enough. It may be better to leave stuff like this to the specialized data warehouse systems.

Re:not excited (1)

greg1104 (461138) | more than 2 years ago | (#37380874)

PostgreSQL changes are driven by what people want badly enough that they are willing to invest resources (development manpower, time, etc.) into the change. So your suggestion that performance is more important than these features can't be true: if performance were really the driver for most people, there would be more performance changes instead of all these contributed features. Extending PostgreSQL and getting the code committed is a lot of work to do right. There are very few people developing features here for fun; most of the new features are developed to solve very real business needs. PostgreSQL 9.2 looks like it will have a number of performance improvements, however; the pendulum seems to have swung back to where those are more needed again. (The last release to focus mainly on performance was 8.3.)

Parallel query and multi-node operation would all be nice. Progress continues toward those goals, while still shipping a new, stable version each year.

The fact that the WAL segments are fixed at 16MB doesn't have the impact you're describing anymore. The streaming replication introduced last year in PostgreSQL 9.0 allows copying partial WAL files over. And there is no wasted space just because a checkpoint ran when only part of a WAL file was used; checkpoints don't force a switch to another WAL file. If you turn on the archive_timeout feature, that does have the problem you describe, but it's been considered obsolete by most people now that the 9.0 streaming feature is available.

Re:not excited (1)

roman_mir (125474) | more than 2 years ago | (#37380958)

Hey, I am just writing on my experiences, obviously people don't add stuff without reason.

As to WAL segments - I am not using this feature for streaming replication, just to have an immediate backup to a separate disk; no hot standby, nothing like that. Imagine a bunch of stores, each one with a small server for management. Data gets transferred to the central servers but also gets backed up to a separate drive that's used only for backup, nothing else. No hot-standby processes waiting for the main process to die or anything. So yes, archive_timeout is used to force a dump of the data to the second disk. You may consider it 'obsolete', but it has a valid use case, and it's not for streaming to a separate database, just to be able to replay the data back in case the main disk fails.
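For reference, the file-shipping setup described here boils down to a postgresql.conf fragment along these lines (the backup path is invented; this is the pre-9.0-streaming style, which is exactly why it needs archive_timeout):

```
wal_level = archive                            # enough WAL detail for archiving
archive_mode = on
archive_command = 'cp %p /backupdisk/wal/%f'   # hypothetical second-disk path
archive_timeout = 300                          # force a segment switch every 5 min
```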

Re:not excited (1)

Lennie (16154) | more than 2 years ago | (#37380926)

I'm kind of surprised that you mention parallel processing. In most tests I've seen, PostgreSQL does better at using more cores than MySQL does.

Re:not excited (1)

roman_mir (125474) | more than 2 years ago | (#37380988)

As I said [slashdot.org] - a single SQL statement is not broken into smaller independent pieces to be executed by separate processors/threads. Separate connections/SQL requests run in separate processes, but one SQL request is not turned into many parallel executions.

Universal Relational Database of Probability (0)

hantarto (2421914) | more than 2 years ago | (#37380026)

I am think what we need are large database that essentially like table of all possible information stored in infinite configuration space. That way all possible information for any purpose is stored in one place that we all can share. We can pick and choose what sets of information we want to be 'real' in our place on probability axis, and find it instantly. Cure for aids-cancer already in there for sure! This is for the good of humanity and make IT guy job so much more easy haha.

I am hope somebody should work on this one soon. I really would like to see it haha.

Re:Universal Relational Database of Probability (1)

Bodhammer (559311) | more than 2 years ago | (#37380128)

The odds of this happening are NULL...

Re:Universal Relational Database of Probability (0)

hantarto (2421914) | more than 2 years ago | (#37380186)

I am think it not happen with that sort of attitude, my friend! Haha. We need positive thinker if ever we are going to make real progress for the good of mankind, not negative poo-poo people.

I have start writting script in pythong that am going to start populating large database with all possible informations. I will need to keep adding hard drive as I go haha, but eventually it will get there. Please send me your donation if you believe in this project!

Re:Universal Relational Database of Probability (1)

h4rr4r (612664) | more than 2 years ago | (#37380228)

Already we can tell you are doomed to failure. You are writing a script while ignoring the fact that this functionality already exists. Just cat /dev/random into your db.

Re:Universal Relational Database of Probability (0)

hantarto (2421914) | more than 2 years ago | (#37380246)

Haha my friend you just increase my productivity! I love you so much! URDP project underway haha!

Re:Universal Relational Database of Probability (0)

Anonymous Coward | more than 2 years ago | (#37380696)

I've thought about this problem, and a NoSQL type structure would actually work better. There would simply be too many columns for a relational model to work well.

Re:Universal Relational Database of Probability (1)

hantarto (2421914) | more than 2 years ago | (#37380784)

I are already ahead of you on that one, friend haha. I am try writting for SimpleDB in Erlang now. I hope to execute very soon yes. Very soon we have massively columnar database stretching toward infinity haha!

Thank you all for being so wonderful, slarshdot!

Installer improved? (0)

Anonymous Coward | more than 2 years ago | (#37380356)

How about an installation that doesn't suck?

Re:Installer improved? (1)

h4rr4r (612664) | more than 2 years ago | (#37380490)

What installer?
They have rpms and debs, what more could you want?

Re:Installer improved? (0)

Anonymous Coward | more than 2 years ago | (#37380534)

Sounds amazing. Unfortunately, I am referring to a Windows installer.

Re:Installer improved? (1, Flamebait)

h4rr4r (612664) | more than 2 years ago | (#37380640)

Don't run Postgres on Windows. That would be mindbogglingly stupid.

Re:Installer improved? (0)

Anonymous Coward | more than 2 years ago | (#37381454)

Come on... What about us guys that want to get off of MS-Access.

Re:Installer improved? (1)

fuzzytv (2108482) | more than 2 years ago | (#37381608)

Things are improving, although a bit slowly. Plus there are Windows installers available at enterprisedb.com.

Re:Installer improved? (0)

Anonymous Coward | more than 2 years ago | (#37380650)

Why would someone install a database server on Windows?

Re:Installer improved? (1)

fuzzytv (2108482) | more than 2 years ago | (#37381584)

Because they already have a Windows server and they don't want to buy another machine?

Re:Installer improved? (0)

Anonymous Coward | more than 2 years ago | (#37381258)

Windows is good for games and internet browsing. Serious applications like database servers should never be run on Windows. It's only for consumer level applications that handle data you don't mind losing.

If you really need to run a database server, try Redhat or a Debian distribution. Postgres is a top notch choice in either case.

Re:Installer improved? (1)

roman_mir (125474) | more than 2 years ago | (#37380616)

I prefer building from source with my script:

apt-get install -y libreadline5-dev
apt-get install -y zlib1g-dev
apt-get install -y zlibc
ln -s /usr/bin/make /usr/bin/gmake
 
cd /usr/local/pgsql
tar xzf postgresql-9.0.4.tar.gz
mkdir /usr/local/pgsql/postgresql-9.0.4/build_dir
cd /usr/local/pgsql/postgresql-9.0.4/build_dir
/usr/local/pgsql/postgresql-9.0.4/configure --with-wal-segsize=1
gmake world
groupadd postgres
useradd postgres -g postgres
chown -R postgres /usr/local/pgsql/postgresql-9.0.4/build_dir
su postgres -c "gmake check"
gmake install-world
export POSTGRESQL_HOME=/usr/local/pgsql
export PATH=$PATH:$POSTGRESQL_HOME/bin
 
## need some 600MB shared
echo >> /etc/sysctl.conf
echo "kernel.shmmax=600000000" >> /etc/sysctl.conf
echo >> /etc/sysctl.conf
sysctl -p
mkdir /data
chown postgres /data
su postgres -c "/usr/local/pgsql/bin/initdb -D /data"
su postgres -c "mkdir /data/base/SPACE"
su postgres -c "/usr/local/pgsql/bin/pg_ctl -D /data -l logfile start"
 
# in postgresql.conf
shared_buffers = 500MB
checkpoint_segments = 10
track_counts = on
autovacuum_vacuum_scale_factor = 0.00002 # many small updates
autovacuum_analyze_scale_factor = 0.00001
bytea_output='escape' # still problems with other types
 
## restart db
 
/usr/local/pgsql/bin/psql -d dbname -U user < model.ddl
 
#place startup script to /etc/init.d/postgres
chmod 755 /etc/init.d/postgres
ln -s /etc/init.d/postgres /etc/rc1.d/K02postgresql
ln -s /etc/init.d/postgres /etc/rc2.d/S98postgresql
ln -s /etc/init.d/postgres /etc/rc3.d/S98postgresql
ln -s /etc/init.d/postgres /etc/rc4.d/S98postgresql
ln -s /etc/init.d/postgres /etc/rc5.d/S98postgresql

Re:Installer improved? (1)

h4rr4r (612664) | more than 2 years ago | (#37380760)

2 things:
1. You need more RAM.
2. You forgot shmall, which you are probably not hitting with so little RAM.

Re:Installer improved? (1)

roman_mir (125474) | more than 2 years ago | (#37380796)

That's part of the install procedure I have for the little boxes that go into separate stores in a chain. They have maybe 2GB of RAM, and shmall is not set at all.

For the central servers the settings are different altogether; those have 48-96GB of RAM.

But... (1)

rduke15 (721841) | more than 2 years ago | (#37380434)

But I run Debian Stable on all my servers.

Insensitive clod!

Re:But... (1)

vlm (69642) | more than 2 years ago | (#37380452)

But I run Debian Stable on all my servers.

Insensitive clod!

Keep an eye on backports.debian.org
