The drumbeat of improvements keeps coming from the PostgreSQL community, which hopes to make that database an enterprise-class alternative to the MySQL and 11g databases controlled by Oracle, the three DB2 databases sold by IBM, and the SQL Server database from Microsoft. With today's delivery of the PostgreSQL 9.1 community …
I'm sorry but you lost me here....
Quote "Under normal circumstances, such an operation would require someone to use a database client to access the database table, select the data, download it to their client, delete that data, then upload it manually into the second table."
Seriously? Does the author even understand the products that he's talking about?
How about this (excuse the lack of error handling, but you get the picture):
INSERT INTO TableB (ColA, ColB) SELECT ColA, ColB FROM TableA WHERE ColC = ?;
DELETE FROM TableA WHERE ColC = ?;
No data transfer to the client required... The only network traffic required here is sending the statements down the wire and receiving the appropriate success/fail code and optionally the number of rows affected.
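And for atomicity you'd wrap the pair in a transaction, so a failure between the two statements can't leave a row in both tables or in neither. A minimal sketch, using the same hypothetical TableA/TableB from above:

```sql
BEGIN;

-- Copy the matching rows into the destination table
INSERT INTO TableB (ColA, ColB)
SELECT ColA, ColB FROM TableA WHERE ColC = ?;

-- Remove them from the source table
DELETE FROM TableA WHERE ColC = ?;

-- Both changes become visible together, or neither does on ROLLBACK
COMMIT;
```

Still no data transferred to the client, and the whole move is all-or-nothing.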
Not just me then, thought I was going mad when I read that!
I think the author hasn't moved beyond the pgAdmin stage yet :-)
The example El Reg gave is a bad one and, yes, it can be done with two statements in a transaction. But what the feature does provide is a logical chain of queries, each taking an input and providing an output to the next, like you would with bash functions. So if you wanted to delete records from a table, then aggregate the data and update another table based on that information, you *could* update a table based on the aggregated data you plan to delete, then go ahead and delete it, but it's more logical to delete it first and then start using that data. This way you've only had to define your conditions once.
Actually, there's an example of something more complex than this on the PostgreSQL wiki: http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.1#Writeable_Common_Table_Expressions
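For what it's worth, the 9.1 writable-CTE form collapses the table-to-table move into a single statement: the DELETE's RETURNING clause feeds the INSERT directly. A sketch, reusing the hypothetical TableA/TableB from the earlier comment:

```sql
-- PostgreSQL 9.1+: data-modifying CTE
WITH moved AS (
    DELETE FROM TableA
    WHERE ColC = ?
    RETURNING ColA, ColB
)
INSERT INTO TableB (ColA, ColB)
SELECT ColA, ColB FROM moved;
```

One statement, one condition, and it's atomic without an explicit transaction block.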
I take your point; these CTEs based on data-modification statements that return data *could* be useful. However, I wasn't criticising CTEs.
What I was objecting to was the author's statement that "the only other way to perform this task is...", where he then went on to describe possibly the most stupid way of transferring data between tables short of cut'n'pasting data into Excel prior to "uploading" it back to the database.
Very impressive software IMO
I really came into contact with PSQL during the process of replacing my Solaris (office) server with Win2k3 servers. One of the problems back then: the SQL backend. I didn't want any Oracle-based solutions anymore, so the choice was simple (sort of).
After I installed it I was expecting something like I'd seen with MySQL /years/ ago; namely an installed product and basically an approach like: "OK, this is what it looks like on Linux, so this is what you get on Windows. Have fun!". In other words: diving deep into the documentation to check up on editing the config files (postgresql.conf, pg_hba.conf, etc.), and all from the command line, of course; a bit like using Apache on Windows. Now, using the CLI isn't a con or anything, far from it, but it would have meant a slightly longer time to get my stuff going.
"When on Windows, do like Windows does", the PSQL guys must have thought, or realized. And so I had the option to install pgAdmin, simply during installation of the product, no special options or features or hunting for solutions. Nah; you just install it. And not "just" a program ("quick hack which works on Windows, and when it doesn't it's probably your fault") but something which is very usable too!
PSQL for dummies?
Perhaps. But it sure has helped this 'dummy' get up to speed with PSQL at my own pace. Of course I checked the important stuff first (the host settings in pg_hba.conf, for example), but once I was convinced everything was securely set up, I relied on pgAdmin at first to set up some roles for logging on, set up databases, and, in the first weeks, make some manual backups.
And so I could easily put PSQL to work even though I wasn't familiar with all the details yet; I looked into those during the weeks after installation and eventually set up a (command-line based) automated backup by means of a cronjob, err, "scheduled task". (Yes, you even get pg_dump and pg_dumpall, just as you'd expect.)
Right now PSQL is powering my office CRM and both company mailservers (also using an automated replicated database), and even though my hardware is dated (these critters probably wouldn't be able to run Win2k8 server, or perhaps only barely), their performance certainly isn't!
I'm also very impressed with what I can squeeze out of my servers. Context: hundreds (sometimes peaking at a thousand) emails per day, a few (low-profile) websites using an SQL backend, a Java-based CRM used by at least one person during the day (often 2 or 3), and (I'm proud of this one) sometimes a remote connection from my workstation (ODBC) to retrieve customer info from the CRM database, which is then used within Office 2010 (by means of VBA macros).
It's no enterprise, but it sure manages to do an impressive job, day in and day out. I've been highly impressed with PSQL since the day I started using it, and I fully realize that I haven't even /scratched/ the surface yet (stored procedures, anyone?).
Still, if there's one team which can tackle the enterprise, it's them! IMVHO, of course.
I came across PSQL some years ago when I was presenting a topic on Adaptive Linux Firewalls. I needed a database which could store large amounts of log messages being generated by IPTABLES.
Likes postgres. More people need to know about it.
"The drumbeat of improvements keep coming from the PostgreSQL community that hopes to make that database an enterprise-class alternative to ... MySQL"
Other way round, surely. MySQL wishes it could be as good as postgres.
MySQL's default MyISAM storage engine still doesn't support transactions and is not ACID compliant; you have to graft some other engine, such as InnoDB, onto it. This short video explains...
They can't all be wrong...can they?
Let's not forget that most of the open-source-based massively parallel processing (MPP) database offerings that have appeared over the last 7-8 years chose to build on Postgres - think Netezza, Greenplum, Aster, Dataupia. They can't all be wrong, can they?
I think the biggest table I've built using Postgres was 125 billion rows running on a 75 node MPP system.
mySQL? I don't think so!
PostgreSQL keeps leading the way
Contrary to the implications of this article, PostgreSQL already is an enterprise-class database solution. In fact, in many respects it continues to lead the way, certainly when compared with MySQL.