top | item 5514050

hp50g|13 years ago

It's not a drop-in replacement. You have to write some code/configuration (but not much). It's definitely suitable for your use case - we use it for the same thing.

The feature sets are comparable. We tend to avoid platform-specific features, as they are a migration risk.

Ha - our SQL Server license fee is around £60k per machine, one-off, per major release. We're not dropping that again for 2012. No way. Not when we have 8 machines :)

chrislomax|13 years ago

We have really dug ourselves into a hole by using feature-specific code and relying on certain features in our structure.

We are slowly digging ourselves out of the hole with dependency injection in our code, from an Entity point of view, and we want to go a similar route with our DB.
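
The DI route being described might look something like the sketch below: application code depends on an abstract repository interface rather than a concrete database, so swapping SQL Server for PostgreSQL only means swapping one implementation. All names here (`UserRepository`, `SignupService`, etc.) are illustrative, not the commenter's actual code.

```python
from abc import ABC, abstractmethod


class UserRepository(ABC):
    """Abstract persistence boundary: callers depend on this interface,
    never on a specific database engine."""

    @abstractmethod
    def get(self, user_id): ...

    @abstractmethod
    def add(self, user): ...


class InMemoryUserRepository(UserRepository):
    """Test double; a SqlServerUserRepository or PostgresUserRepository
    would implement the same interface for production."""

    def __init__(self):
        self._rows = {}

    def get(self, user_id):
        return self._rows.get(user_id)

    def add(self, user):
        self._rows[user["id"]] = user


class SignupService:
    """Business logic: the repository is injected, so this class never
    knows (or cares) which database sits behind it."""

    def __init__(self, repo: UserRepository):
        self._repo = repo

    def register(self, user_id, name):
        self._repo.add({"id": user_id, "name": name})
        return self._repo.get(user_id)


service = SignupService(InMemoryUserRepository())
print(service.register(1, "ada"))  # {'id': 1, 'name': 'ada'}
```

The point of the pattern is that a database migration becomes a change to one constructor argument at the composition root, instead of edits scattered across every call site.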

We are nowhere near your scale, but we have spent £18k on SQL Server licenses in the past 2 years. It's a lot of cash, and to be fair I don't think we see the full benefit. Coupled with the abundance of other Microsoft licenses we pay for, more than half our server costs are licenses.

I appreciate your comment; it's taken me a step in the right direction towards sorting our DBs out.

Ovid|13 years ago

Out of curiosity, how comprehensive is your test suite? At one company I worked for, we actually found PostgreSQL outperforming Oracle, and because our test suite was so comprehensive, the lead dev connected to PostgreSQL and got 80% of the suite passing in one evening. A strong test suite lets you instantly find out where your app breaks down.

Side note for those who don't believe PostgreSQL can outperform Oracle: we needed a custom data type that we were aggregating over. The projected table size was over a billion rows (last I heard it had passed 4 billion rows). As I recall, in Oracle you were limited to writing user-defined data types in SQL or Java (now you can use C). PostgreSQL, being open source, allowed the lead to implement the custom data type in C. The query we needed took several minutes to run in Oracle due to the need to constantly serialize/deserialize the data over the aggregation. The PostgreSQL version returned in a few seconds.

We even hired a well-known Oracle performance consultant who did wonderful things to all of our queries ... except for this one custom data type which left him stumped.

AlisdairO|13 years ago

The code change (and risk) can potentially be quite substantial if you're relying on default SQL Server behaviour (i.e. not MVCC/snapshot isolation). Code that relies on blocking when it hits rows locked by another process gets a bit of a nasty surprise during that kind of switch :-).
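
The behavioural difference can be sketched with a toy Python model (a simulation with threads and locks, not real database code): under lock-based isolation a reader blocks until the writer's transaction commits, while under MVCC/snapshot isolation the reader returns immediately with the last *committed* value.

```python
import threading
import time


class LockingStore:
    """Toy model of lock-based read-committed (SQL Server's default):
    a reader must wait for the writer's transaction to finish."""

    def __init__(self, value=0):
        self._lock = threading.Lock()
        self._value = value

    def write_txn(self, new_value, hold_seconds):
        with self._lock:               # exclusive lock held for the whole txn
            self._value = new_value
            time.sleep(hold_seconds)   # simulate work before commit

    def read(self):
        with self._lock:               # blocks while the writer holds the lock
            return self._value


class SnapshotStore:
    """Toy model of MVCC/snapshot isolation (PostgreSQL-style):
    readers see the last committed version and never block on writers."""

    def __init__(self, value=0):
        self._committed = value
        self._mutex = threading.Lock()  # only guards the brief commit step

    def write_txn(self, new_value, hold_seconds):
        pending = new_value
        time.sleep(hold_seconds)        # uncommitted work stays invisible
        with self._mutex:
            self._committed = pending   # commit is atomic and quick

    def read(self):
        with self._mutex:
            return self._committed      # returns immediately


def timed_read_during_write(store):
    """Start a slow write transaction, then time a concurrent read."""
    writer = threading.Thread(target=store.write_txn, args=(42, 0.3))
    writer.start()
    time.sleep(0.05)                    # let the writer begin its txn
    t0 = time.monotonic()
    value = store.read()
    elapsed = time.monotonic() - t0
    writer.join()
    return value, elapsed


v, waited = timed_read_during_write(LockingStore())
print(f"locking:  read {v} after waiting ~{waited:.2f}s")  # waits for the writer
v, waited = timed_read_during_write(SnapshotStore())
print(f"snapshot: read {v} after waiting ~{waited:.2f}s")  # old value, no wait
```

Code written against the first model may silently depend on that wait for correctness (e.g. as a crude queue), which is exactly the nasty surprise a switch to snapshot semantics exposes.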

hp50g|13 years ago

Always have deterministic known behavior :)

We shot ourselves in the foot when we first implemented NHibernate by setting our transaction boundary in the wrong place. This caused all sorts of portability problems.
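
A common fix for that mistake is one transaction per logical unit of work, so everything in the unit commits or rolls back together regardless of the underlying database's locking model. A minimal Python sketch (the `Session` class is a toy stand-in for an ORM session like NHibernate's `ISession`; it is illustrative, not the commenter's code):

```python
from contextlib import contextmanager


class Session:
    """Toy ORM session: records statements, then commits or rolls back."""

    def __init__(self):
        self.statements = []
        self.committed = False

    def save(self, obj):
        self.statements.append(obj)

    def commit(self):
        self.committed = True

    def rollback(self):
        self.statements.clear()


@contextmanager
def unit_of_work():
    """Transaction boundary = one unit of work: every statement inside
    the block commits atomically, or rolls back on any exception."""
    session = Session()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise


with unit_of_work() as s:
    s.save("order")
    s.save("order_line")
print(s.committed, s.statements)  # True ['order', 'order_line']
```

Putting the boundary anywhere else (per statement, or spanning whole requests) is where engine-specific locking and isolation quirks start to leak into application behaviour.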