I've been on a project where we were forced to migrate in the opposite direction: from PostgreSQL to Oracle, because the client was already paying for Oracle licenses and really, really wanted us to use Oracle to justify the expense.
It was actually a pretty big setback. We were using PostGIS to support spatial queries (a key requirement), and Oracle Spatial was just not at the same level (both in performance and features). The development experience with Oracle was also awful. The licensing for Oracle was highly granular, down to the feature level. More than once I'd identify a feature that provided a solution to an issue through online research only to be prevented from using it due to the customer not having the requisite license for it.
And the support was useless. Oracle was so complex (by design) we resorted to contacting support a couple of times - they would send out an "engineer" who could turn any technical troubleshooting session into a sales presentation for some Oracle product or feature that would "solve" whatever the issue was.
I will never work on a project involving Oracle again (barring obscene amounts of money to assuage my frustration, of course).
I've spent a fair amount of time at the Sofitel in Redwood Shores, and I'd often chat with Oracle "sales engineers" at the bar. A simple "so, what do you work on" question would inevitably generate an hour's worth of them explaining some convoluted acronym-heavy product with a pushy sales model, and I'd eventually be like "ok... so it's a database?".
I really don't like Oracle from a DBA perspective, but it's still often far ahead of PostgreSQL when it comes to query performance. In Postgres, the query structure can make huge differences in terms of performance, and it can take a lot of tuning to find the right query to optimize performance (especially when subqueries are involved). Oracle (and SQL Server) are usually pretty good at optimizing the query exactly the right way, reducing development time by quite a bit.
I think it's the company culture overall. I had the same experience with Oracle Cloud: I saw low prices and decided to try it... Their Kubernetes engine was purely horrible, not even alpha quality compared to Google's offering. My bare-metal installation was much more mature.
A company I was working with also did this. Their excuse? The Oracle licenses were very cheap because the CIO is a genius negotiator. Never mind that PostgreSQL is free and a much better product. Either they are really stupid or they are getting something from Oracle under the table.
Oracle used to be far and away the best database out there. Now I wouldn't use it even if you paid me. It's shocking how little Oracle invested in developing their products and services over the years. They are a distant second, if not merely an "also-ran", in everything that they do. The company largely exists as an experiment in just how far you can go with a vendor lock-in strategy. Sadly, that experiment is proving to be a remarkably successful one.
The article seems to misunderstand what table inheritance is in PostgreSQL.
CREATE TABLE new_table AS TABLE existing_table;
Doesn't create any PostgreSQL inheritance relationship between the parent and child tables. It merely makes a new non-inherited table with a copy of the data whereas with true table inheritance you're working with the same data (there's some visibility rules to consider between parent and child, but that's different than a copy).
I'm also uncomfortable with too simply stating that you should think of this like OOP inheritance; while I agree that in some respects there's a passing similarity, it is its own beast and needs to be understood outside of the OOP paradigm to be useful. Many of the Object Relational aspects of PostgreSQL are very powerful, but cannot be understood in OOP terms.
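For contrast, a minimal sketch of the difference, with invented table names (not from the article):

```sql
CREATE TABLE cities (
    name       text,
    population integer
);

-- CREATE TABLE ... AS TABLE merely copies the rows; there is no ongoing
-- relationship between the two tables afterwards:
CREATE TABLE cities_copy AS TABLE cities;

-- True inheritance: the child gets the parent's columns, may add its own,
-- and its rows remain visible through the parent.
CREATE TABLE capitals (
    country text
) INHERITS (cities);

INSERT INTO capitals (name, population, country)
VALUES ('Budapest', 1750000, 'Hungary');

SELECT name FROM cities;        -- includes 'Budapest' via inheritance
SELECT name FROM ONLY cities;   -- ONLY excludes child tables: no rows here
```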
Also, another part of the article talks about the ramifications of not having Oracle "packages". While it's not completely the same concept and there are different sets of trade-offs, one option is using PostgreSQL schemas for this sort of logical namespace organization. Both Oracle and PostgreSQL have the concept of different schemas, but Oracle has a much more rigid idea about schema usage (tied to database users) and PostgreSQL has a much more fluid one. As a former Oracle guy, I can see how that organizational tool might not be front of mind when coming to PostgreSQL, but I've used PostgreSQL schemas for this sort of organizational purpose with good success.
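To sketch the idea (all object names here are invented for illustration), schemas can stand in for the namespacing half of Oracle packages:

```sql
-- Group related objects under schemas instead of Oracle-style packages:
CREATE SCHEMA billing;
CREATE SCHEMA reporting;

CREATE TABLE billing.invoices (id bigint PRIMARY KEY, total numeric);

CREATE FUNCTION billing.invoice_total(p_id bigint) RETURNS numeric
LANGUAGE sql AS $$
    SELECT total FROM billing.invoices WHERE id = p_id;
$$;

-- Qualified call, much like package.procedure in Oracle:
SELECT billing.invoice_total(42);

-- Or put the schema on the search path and call it unqualified:
SET search_path TO billing, public;
SELECT invoice_total(42);
```

Unlike Oracle packages there is no package state or initialization block, but the grouping and name-qualification aspects carry over directly.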
I'm a happy Postgres user and recently did some work with a government agency using Oracle - the thing that shocked me most about Oracle was the lack of transactional DDL operations which was something I'd just taken for granted in the Postgres world.
For one, it does things that the competition is not capable of or just inferior -- and despite the bias on this site, the major consumers of RDBMSes are not price sensitive scrappy startups.
There is no comparison between the HA offerings in Oracle and something like Postgres, which are comparatively toys. Replication doesn't equate to HA and the "nobody got fired for buying xxx.." actually has some justification. Why would I risk my reputation on a flimsy solution just to save a few bucks in a large corporation?
I have not been an Oracle fan in the past, especially because of their complicated (and expensive) licensing, but late last year we moved to their hosted autonomous database. The on demand pricing model makes it quite economical, and the performance is amazing.
However, the killer feature for me is that it includes Application Express (APEX), a complete web application development framework, as well as Oracle REST Data Services (ORDS). With built-in application development and deployment, it is the only complete, full-stack data management platform I am aware of (at the enterprise level).
YMMV, but it has been incredible for us, both to support our data science initiatives and for rapidly deploying applications. I couldn't imagine going back to anything else.
Oracle and SAP have products that touch niche areas of businesses. Oil company with complex shipping and receiving looking for accounting software? Global metal foundry who needs to track raw materials to finished goods and forecast everything? SAP and Oracle can sell your VP overly complex products for almost anything.
For the DB, corporate executives types feel much more comfortable choosing Oracle or IBM. It usually bites them in the ass down the road due to licensing or support costs.
I STRONGLY dislike Oracle the company, and the Oracle DB is quite complex, however Oracle RAC (HA) beats the pants off anything else out there as far as performant, reliable HA. Oracle DB also has been the leader performance wise for complex queries and large datasets. All of that is very important in my niche (enterprise grade eCommerce).
It's mostly used because the people who hold the purse strings in large organisations don't have a clue about databases, but the salesmen for Oracle wear fancy suits, take them out for expensive dinners and treat them like chums so they get the deal.
It's a super convenient operation but it has one big drawback which might be unexpected: `if exists` first acquires the relevant lock then checks.
This means an "ALTER TABLE table_name DROP COLUMN IF EXISTS column_name" will first acquire an ACCESS EXCLUSIVE lock, then check whether the column exists.
Since DDL is transactional, the lock will not be released until the transaction is committed or rolled back, so even if the column doesn't exist it will block all concurrent operations on the table.
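One common way to limit the blast radius, assuming the migration can simply be retried on failure, is to cap how long DDL may wait on the lock:

```sql
-- If the ACCESS EXCLUSIVE lock can't be acquired within 2 seconds, the
-- statement errors out instead of queueing up and blocking all other
-- traffic on the table behind it; the migration can then be retried.
SET lock_timeout = '2s';
ALTER TABLE table_name DROP COLUMN IF EXISTS column_name;
RESET lock_timeout;
```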
I don't have a lot of experience with Oracle, but I can tell you that the Postgres optimizer is shit compared to Db2 zOS or Db2 LUW.
When I worked in a large bank we tried to migrate core system from Db2 zOS to Postgres and it went nowhere.
I was an in-house developer working with Postgres consultants and they were amazed by Db2's performance in OLTP scenarios.
So if your organization is already spending cash on Oracle, Db2 or MSSQL, use them for superior performance. Migrating off them is a costly and risky process.
If you're working at a startup, there is absolutely no reason to choose anything but Postgres if you need relational.
I've never understood why RDBMSes don't offer a low-level protocol where, rather than a SQL statement, you can just send over the exact query-plan AST you want the DB to prepare in some binary encoding. Then you could pre-compile your hot OLTP queries and heavy OLAP reports offline against your existing DB schema, using the same sort of techniques that got Lucene its optimized Levenshtein-automata JVM bytecode. You could even tweak the resulting plan, op by op, before letting it go to the DB, as if you were doing final ASM tweaks on a game in the 90s.
In fact, my company wants to save on the Oracle expenses. That is the main reason for migrating.
We'll see how it works out.
You are not the first one to criticize the Postgres optimizer. Is it really that bad?
Some good points in article.
There are some things that may need more attention.
1) UPDATE from another table is not as safe as it should be (in case of multiple matching values, the final value will be sort of random)
2) Schemas as namespace separators (grouping tables inside a database)
3) The outer join syntax in Oracle is actually more vulnerable (in case of an error in the multicolumn syntax it falls back to a normal join). So it's not better or easier - it was just created before the standard JOIN existed.
4) Crucial differences in how buffer vs. filesystem caching works
5) Lack of plan stability - no out-of-the-box solution in a standard installation
6) Batch operation (in)efficiency
7) Pros/cons in undo/rollback handling
[likely some more that I can't think of right now]
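To illustrate point 1, a contrived sketch (table names invented): when several source rows match one target row, Postgres's UPDATE ... FROM silently applies just one of them, and which one is not defined.

```sql
CREATE TABLE accounts (id int PRIMARY KEY, balance int DEFAULT 0);
CREATE TABLE updates  (account_id int, new_balance int);

INSERT INTO accounts VALUES (1, 0);
INSERT INTO updates  VALUES (1, 100), (1, 200);  -- two rows match account 1

-- No error is raised: balance ends up as 100 OR 200, effectively at random.
UPDATE accounts a
SET    balance = u.new_balance
FROM   updates u
WHERE  u.account_id = a.id;

-- Safer: deduplicate the source first so each target row has exactly one
-- matching source row.
UPDATE accounts a
SET    balance = u.new_balance
FROM   (SELECT account_id, max(new_balance) AS new_balance
        FROM updates GROUP BY account_id) u
WHERE  u.account_id = a.id;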
For some reason I find a query that uses (+) way easier to read than verbose outer joins. Probably because it's near the field and you see immediately "hey, this can be null".
Yes, it makes the query harder to migrate to other DBMSs and to collaborate on with non-Oracle people.
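For readers who haven't seen it, the two spellings side by side, using the classic emp/dept demo tables; the (+) marks the side whose columns may come back NULL:

```sql
-- Oracle-only (+) notation: dept is the optional side.
SELECT e.ename, d.dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno (+);

-- Equivalent ANSI syntax, accepted by both Oracle and PostgreSQL:
SELECT e.ename, d.dname
FROM   emp e
LEFT JOIN dept d ON e.deptno = d.deptno;
```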
Biggest difference for me is DDLs are transactional in Postgres, but not on Oracle.
That means migration scripts for software on Postgres can just have all DDLs (alter, create, drop, grant, etc.) and DMLs (insert, update, etc.) mixed in whatever order they need to be, and if any particular line of the migration script fails - the whole thing is rolled back as if nothing happened. Then you fix the problem and run the migration again. Easy.
In comparison, writing migration scripts on Oracle is a nightmare - DDLs aren't transactional (THEY COMMIT ON EACH LINE...), so you have to separate them from DMLs and ensure that only the scripts that haven't passed yet are re-run later. I've worked in 3 different companies that used Oracle, and there were 3 different approaches to that problem, and all 3 of them sucked :)
In one company we had several big customers each with 1 production db, and software was written on separate branches for each customer, and helpdesk staff was dealing with migrations - programmers just asked helpdesk to add a column and worked on the test db for that customer. It was a lot of unnecessary work to port changes and bugfixes between branches, but at least we knew exactly what is on each db and could fix problems by ourselves. There was no migration to speak of, just manual changes on dbs and documenting them in svn (it was before git was popular).
In another company there was one development branch and several customers, and there were migration scripts written by all developers when they made changes, which were merged into the db's development branch by one guy whose whole job was to merge these scripts and check that the migration worked. It slowed down development (because when you finished your task on your local db you had to make migration script(s) and send them to be verified). And even "that guy" sometimes made mistakes, and then if you fetched db scripts in the morning you couldn't work until stuff was fixed (or you had to recreate the Oracle db from scratch, which took several hours).
That was before docker BTW, now they probably use docker so that can be less of a problem.
In the third company we had one customer but with hundreds of installations, and we had one development branch with frequent releases. Developers maintained migration scripts between release, major and minor versions. There was no "that guy" - we had smoke tests instead, and it sometimes took more time to write that migration script(s) than to change the code.
So you want to add 3 columns to 3 tables and fill them? And it has to be done in order because of dependencies? Write no less than 6 migration scripts (alter table 1, update table 1, alter table 2, ...). Add them with proper names and some boilerplate to the migration scripts for minor versions (3.4.5 -> 3.4.6). But that's not all! We also have migration scripts for major versions (3.4.0 -> 3.5.0), so you also need to add them there. You have to check the migration separately because these scripts often use shortcuts to run faster. So your scripts might break despite working for minor version migration.
Then there's the scripts for release version migration (3.0.0->4.0.0). Add your scripts there as well, and test once again.
Oh, and testing these scripts on test data doesn't mean they will work - each installation of db changes slightly over time - people add stuff from ui. There are rules what they can change and what they cannot, but if you don't think about it you might break something with your migration scripts on production despite it working on test data.
When that happens you have to write migration fixes which need to detect that problem and fix it on data you don't have direct access to :)
It was a nightmare.
Meanwhile Postgres just does the right thing: write 1 migration script with everything in it; if it works, it works, and if not - it rolls back. Nobody thinks twice about it.
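A minimal sketch of what that single-transaction migration looks like on Postgres (object names invented):

```sql
BEGIN;

-- DDL and DML freely mixed; any failure below rolls back everything,
-- including the ALTERs and the GRANT.
ALTER TABLE orders ADD COLUMN shipped_at timestamptz;
UPDATE orders SET shipped_at = now() WHERE status = 'shipped';

ALTER TABLE invoices ADD COLUMN order_ref bigint;
UPDATE invoices i
SET    order_ref = o.id
FROM   orders o
WHERE  o.invoice_id = i.id;

GRANT SELECT ON orders, invoices TO reporting_role;

COMMIT;  -- any error above leaves the schema exactly as it was
```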
> Biggest difference for me is DDLs are transactional in Postgres, but not on Oracle.
The answer I received more than once from Oracle evangelists regarding transactional DDL: it's useless and if you need it, you are not testing your scripts properly
Thanks for your inputs.
Yes, you are totally right about the DDLs, you cannot mix them with DML.
Seeing all the comments, it looks like a good decision to migrate. :)
Whenever I see a comment making a comparison between Oracle and Postgres, I can't help but wonder why it isn't compared to EnterpriseDB, which is sort of like the unofficially official Postgres for enterprise products.
I would bet the next "Oracle" (as in, large IT company) of the decade 2020-2030 will be based on PostgreSQL. Can't see an obvious candidate, yet. Perhaps someone has seen some interesting companies heading in this direction?
How come nobody has implemented an Oracle compatibility mode for PostgreSQL?
Or in general, why don't databases support each other's SQL dialect? It can't be that much work, at least if one is content with only supporting the majority of applications, and it seems pretty essential for popularizing a specific database.
Looking at the article, supporting Oracle syntax seems trivial in all cases except for adding full MERGE support.
forinti | 5 years ago
If your client was paying Oracle already, he should take steps to move away from it, not set himself up for paying licences forever.
I cringe at having to call Oracle Support. It takes forever and they make you send a ton of files before someone even looks at it.
sbuttgereit | 5 years ago
For inheritance, it's better to read about this from the documentation: https://www.postgresql.org/docs/12/tutorial-inheritance.html
pjmlp | 5 years ago
- RAC and distributed transactions across a database cluster
- Integration with APIs
- A much better experience in Java and .NET drivers, including SQL custom data types.
dx034 | 5 years ago
Small nitpick: The excluded table contains the values proposed for insertion, not the values already present in the table (as described in [1]).
[1] https://www.postgresql.org/docs/current/sql-insert.html
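A small sketch of that distinction (table name invented): `excluded` holds the row proposed for insertion, while the table's own qualified name refers to the existing row.

```sql
CREATE TABLE counters (name text PRIMARY KEY, hits int);

INSERT INTO counters VALUES ('home', 1);

-- On conflict, counters.hits is the stored value (1) and excluded.hits
-- is the proposed value (5); this adds them rather than overwriting.
INSERT INTO counters VALUES ('home', 5)
ON CONFLICT (name)
DO UPDATE SET hits = counters.hits + excluded.hits;

SELECT hits FROM counters WHERE name = 'home';  -- 6
```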
drdec | 5 years ago
For the love of all you hold sacred, please don't do this.
philliphaydon | 5 years ago
For example, in the partitioning, he states:
> SELECT * FROM sales_p_america;
But doesn't mention that if you select based on a region, it will use only the partition table.
> SELECT * FROM sales WHERE sales_region IN ('USA','CANADA');
While I believe if you do the equivalent in Oracle it won't use the partition table?
---
The section on table inheritance isn't right either.
https://www.postgresql.org/docs/12/tutorial-inheritance.html
What he demonstrated was just a way of making additional tables based on existing ones. While inheritance works sort of like partitioning except the child tables can contain additional data. Selecting from the parent will display all data from the child.
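A sketch of the pruning being described, using the article's sales/sales_region names (the partition bounds here are invented):

```sql
CREATE TABLE sales (
    sale_id      bigint,
    sales_region text
) PARTITION BY LIST (sales_region);

CREATE TABLE sales_p_america PARTITION OF sales
    FOR VALUES IN ('USA', 'CANADA');
CREATE TABLE sales_p_europe PARTITION OF sales
    FOR VALUES IN ('HUNGARY', 'GERMANY');

-- The plan should show only sales_p_america being scanned, because the
-- planner prunes partitions whose bounds can't match the WHERE clause:
EXPLAIN SELECT * FROM sales WHERE sales_region IN ('USA', 'CANADA');
```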
jonahbenton | 5 years ago
https://www.enterprisedb.com/
aserafini | 5 years ago
Perhaps it was retaliation for Larry’s comments about Amazon in this interview https://m.youtube.com/watch?v=xrzMYL901AQ
ibejoeb | 5 years ago
If you're using any advanced features, migrating to anything else is going to be a risk-ridden project. Same goes for any other product, of course.
davidgerard | 5 years ago
PG does 99% of what Oracle does. If you have one of the 1% cases, that will be a remarkable circumstance.
Always test a move from Oracle to PG, see how it performs.
tandr | 5 years ago
With that said... it makes me wonder how many (if any) companies are moving in the opposite direction, such that Oracle survives and is doing so nicely.