At the 34:30 mark he goes on to say that there are some buggy implementations of mmap. Is that in any way related to how Linux handles pages marked as free?
I recently created a database engine (exosql [1]), query only, no storage. It uses Postgres-like foreign data wrappers to get all its data.
It's not suitable for big datasets, as it stores everything in memory (or maybe it is?), but as a learning experience it has been amazing to think through and build a real database: planner, executor, choosing algorithms, implementing features such as lateral joins, and so on.
I will definitely listen very carefully to these talks.
A similar learning experience for me was when I was exploring Apache Calcite. That again is only query, and no storage. It has a concept of 'adapters' which, I assume, is similar to the postgres-like foreign data wrappers you mention.
I started watching this. The slides are unreadable, but the camera is perfectly still and the slides persist across several "key frames" where the compression algorithm decides to replace one set of compression artifacts with another.
For example, try to read the first keyword under "Translates into:":
The keyword is unreadable at first, but as you keep looking at it over 50 keyframes it becomes readable to me.
Since the camera is in a fixed position, it should be possible to combine the data from those artifacts into a single superresolution video with very mild assumptions (i.e. the assumption that the image stays the same until at least a 5% change, or something like that). There's not even anyone moving in front of it.
-> Can someone who actually knows this stuff apply a superresolution interlacing filter to this video and post the superresolution version somewhere?
I hope this is not too much work, and I am sure we would all appreciate the results since the slides are not human-readable before applying some kind of superresolution!
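For anyone curious what such a filter would exploit, here is a toy sketch of the temporal-averaging idea in pure Python (synthetic 1-D "frames"; a real superresolution pipeline also needs frame alignment and sharpening, which this deliberately skips):

```python
import random

def average_frames(frames):
    """Average aligned frames pixel-by-pixel to suppress per-frame noise."""
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

# Simulate a static "slide" seen through noisy compression: each frame is
# the true signal plus independent noise, as with shifting DCT artifacts.
random.seed(42)
truth = [10.0, 200.0, 30.0, 180.0]
frames = [[p + random.uniform(-20, 20) for p in truth] for _ in range(50)]

single_error = max(abs(a - b) for a, b in zip(frames[0], truth))
stacked = average_frames(frames)
stacked_error = max(abs(a - b) for a, b in zip(stacked, truth))

# Averaging 50 aligned frames shrinks the noise dramatically,
# which is why the text becomes readable "over 50 keyframes".
assert stacked_error < single_error
```

The assumption named in the parent (the scene is static until some threshold of change) is exactly what lets you treat every keyframe as another noisy sample of the same slide.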
There was a similar issue with detecting text at an angle when people were trying to decipher whether the First Lady was wearing a jacket with a dog-whistle insensitive message on it. The original source image was at an angle, so determining its authenticity was challenging until other images emerged.
So I was trying to figure out why a query was slow the other day... it was a nasty query with like 14 joins... I used EXPLAIN and saw that it was a mess... now in my case I was able to switch to outer joins and nest related joins and got it fast... but I had some interesting thoughts.
In SQL, indexes are implicit... they are used if available, but it's easy for a large query to fall back to a scan when it shouldn't... what if there were a different query language with explicit index syntax? I think you'd get a lot more predictable performance.
1. A query expresses the result it produces, not the method used to obtain it. Semantics vs. implementation. It may be a pain to write, but it will be easier to read later.
2. A DBA can add/drop indexes on the fly to tune the performance of a live system without making any changes to the application code, while being 100% certain they are not changing the semantics of what's going on.
As others noted, if you must, you can use query hints to force a particular index to be used for a particular operation. MSSQL also lets you pin down a query plan you like for a given query so that it doesn't drift later due to environment changes.
I agree it is sometimes a pain to force SQL to use the index you wanted it to use.
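Some engines do expose exactly this kind of explicit-index syntax. A minimal sketch using SQLite's real `INDEXED BY` clause via Python's `sqlite3` (the table and index names are mine, invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, age INTEGER)")
conn.execute("CREATE INDEX idx_users_email ON users(email)")
conn.executemany("INSERT INTO users (email, age) VALUES (?, ?)",
                 [(f"u{i}@example.com", i % 80) for i in range(1000)])

# INDEXED BY is a requirement, not a suggestion: if the planner cannot
# satisfy the query through the named index, SQLite refuses to run it
# ("no query solution") instead of silently falling back to a scan.
rows = conn.execute(
    "SELECT id FROM users INDEXED BY idx_users_email WHERE email = ?",
    ("u42@example.com",)).fetchall()
assert len(rows) == 1
```

That fail-loudly behaviour is the "predictable performance" property the parent is asking for: the query either uses the index or errors at prepare time.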
MySQL has, or had, a way of forcing index use, and while it was very occasionally a life-saver, it was much less useful than you might think: often when a DB engine falls back to a sequential scan, it's for a reason (e.g. the query planner might have found that the indexes don't cover the columns you need, and that the indexed lookups into the table would be costly enough that a sequential scan would end up being faster).
It was useful occasionally "back in the day" when MySQL's query planner was really bad, but today it will mostly appear useful if there's something subtly wrong with your config. I don't use MySQL much any more, but on Postgres one typical mistake is that the costs configured for the query planner don't match your hardware (e.g. if your seek cost is configured too high relative to sequential reads, sequential scans start to look really good even when you need only a small portion of the data; the principle is the same on MySQL, but I don't know whether you have the same control over the query planner costs).
I would suggest that there's potentially something you need to look at with your database schema - a couple dozen joins shouldn't be causing any problems you have to think about.
Part of this is because of the way a well-normalized database is organized. Most databases have a few large tables and many smaller tables. So in the general case, most of your joins will be against smaller tables. Joins with larger tables are usually very fast, as long as the fields you join on are indexed (and you're not doing a CROSS JOIN or something.) The other thing that helps (which it sounds like you did by "nesting related joins") is to always think about limiting (filtering) the datasets you're joining against at as many stages as possible; that way you're always doing the least amount of work necessary, and it's usually conceptually simpler to read and understand.
As others have said, most databases do have index hinting as part of the query language. However, in my (long) experience, you should almost never use it. Index hints should be a huge code smell.
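Rather than hinting, you can usually just inspect the plan. A small `sqlite3` sketch (table and index names are mine) showing a full scan turning into an index search after a DBA-style `CREATE INDEX`, with no change to the application SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, float(i)) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN returns rows whose last column describes each step.
    return " | ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT total FROM orders WHERE customer_id = 42"
p_before = plan(q)
print(p_before)   # reports a scan of orders (exact wording varies by version)

# The DBA adds an index without touching the application code...
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
p_after = plan(q)
print(p_after)    # ...and the same query now searches via the index
```

The semantics of the query are untouched either way, which is exactly the separation the declarative model buys you.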
If you are a Java programmer and want to learn how an SQL database engine works, take a look at the source code of H2.
Even better, try to add a basic feature to H2 (eg. a new built-in function). It is surprisingly easy, and you come away with a decent understanding of the basics of building an SQL database engine.
Gosh, I must say there seems to be some misunderstanding of RDBMS concepts in some posts in this thread!
I was writing database systems professionally, back in the days before the RDBMS concept was even a thing. So here's my (enormously long and convoluted) 2 cents worth. Please be sure to pack a sandwich and get hydrated before you continue.
Say you were dealing with doctors and patients, and needed to store that information in a database. Back in the day, you'd typically use a so-called hierarchical database. To design one of those, you had to decide what the most common access pattern was expected to be: getting the patients for a given doctor, or the doctors for a given patient? You'd design the schema accordingly. Then the preferred access method was easy to code and efficient to run. The "other" access method was still possible, but harder to code and slower to run. The database schema depended on how you thought the users would access the data.
But that is absolutely NOT what to do with an RDBMS. Certainly you look at the users' screens, reports, and so on - but that's just to determine what unique entities the system must handle - in this case, doctors and patients. Then you ignore how the users will access the data, and work out the inherent logical relationships between all the entities.
Your initial answer might be this. A doctor can have many patients, and a patient can have many doctors. As any competent relational designer will instantly know, this means you need a resolving table whose primary key is a composite key comprising the primary keys of the other two tables. So if Mary was a patient of Tom, you'd add Mary to the patients table (if not already there), Tom to the doctors table (ditto), then add a Mary/Tom record to the resolving table. By that means, a doctor could have any number of patients, a patient could have any number of doctors, and it's trivially easy to write simple, performant SQL to access that data however you want.
But then you'd have a ghastly thought: patients can also be doctors, and doctors can also be patients! Say Tom was also a patient of Mary! Now you'd need a Tom record in the patients table, but that would inappropriately duplicate all his data from the doctors table! Something's clearly wrong. You'd soon see that from a data modelling viewpoint, you don't want doctors and patients as separate entities - you want a single entity, Person, and a resolving table to relate arbitrary people in specified ways.
So what?!!
So this. IMHO, many developers using relational databases have absolutely no idea about any of that. They design hopelessly unnormalised schemas, which then need reams of ghastly SQL to get anything out. The query planner can barely parse all that crap, let alone optimise it. The database has to stop every five minutes to wet its brow and take a drink.
So here's my advice to any inexperienced relational database designers who have actually managed to get this far!! If you can answer the following questions off the top of your head, you're probably on the right track. If you can't, you're lacking basic knowledge that you need in order to use an RDBMS properly:
- what is a primary key?
- what is a foreign key?
- what is an important difference between primary keys and foreign keys?
- what is a composite key? When would you use one?
- what are the three main relations?
- what is normalisation?
- what is denormalization?
- what is a normal form?
and so on.
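The Person-plus-resolving-table design described above can be sketched in a few lines of SQL, run here through Python's `sqlite3` (table and column names are mine, not from the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (
        person_id INTEGER PRIMARY KEY,
        name      TEXT NOT NULL
    );
    -- Resolving table: a composite primary key built from two foreign keys.
    -- One row means "doctor_id treats patient_id"; both columns reference
    -- the single person table, so a doctor can also be a patient.
    CREATE TABLE treats (
        doctor_id  INTEGER NOT NULL REFERENCES person(person_id),
        patient_id INTEGER NOT NULL REFERENCES person(person_id),
        PRIMARY KEY (doctor_id, patient_id)
    );
""")
conn.executemany("INSERT INTO person VALUES (?, ?)", [(1, "Tom"), (2, "Mary")])
# Mary is Tom's patient AND Tom is Mary's patient - no duplicated person rows.
conn.executemany("INSERT INTO treats VALUES (?, ?)", [(1, 2), (2, 1)])

patients_of_tom = conn.execute("""
    SELECT p.name FROM treats t JOIN person p ON p.person_id = t.patient_id
    WHERE t.doctor_id = 1
""").fetchall()
assert patients_of_tom == [("Mary",)]
```

Either direction of the relationship is the same trivial join; neither access pattern was privileged by the schema, which is the whole point of the relational approach.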
I'm guessing A_Person is making up some of what he/she said (to be entertaining) (I don't mean the DB facts - which are right, of course - or the questions at the end), but it was an amusing post anyway :)
I imagine you've written more than your fair share of PROGRESS 4GL code in the past .. your qualifications questions are pretty much straight out of the PROGRESS 4GL user guide .. ;)
Love how he started. People who don't stop their conversations for a presenter are the worst. People who don't stop their conversations for a presentation by Richard Hipp deserve a spell of laryngitis.
That's how languages evolve. Words that meant one very distinct thing come to mean something only partially like the original. People decry the misuse. And finally the new meaning becomes the one true meaning and the original sense is marked in dictionaries as "archaic."
For a word that's gone through that exact cycle, have a stare at "artificial," which was the adjective for "artifice," which at one time meant craftsmanship. When St. Paul's Cathedral was first shown to King Charles II, he praised it for being "very artificial" -- a compliment. [1]
In the meantime, I agree that it can be frustrating to see words apparently misused. But I think this is hardly the mark of an "idiot," as you put it.
Oh wow I totally disagree. I have always found the computer science usage of "orthogonal" to be very analogous to orthogonal sets in linear algebra. It is only in two dimensions that it boils down to right angles; That's the uninteresting case! The more general concept is one of independence, of something being broken down into its constituent parts, such that each part is pulling its weight in some way that even all the other parts combined could not. Which is exactly how I see it being used here. I still remember the moment my programming languages professor introduced the concept of orthogonality in that field; I intuitively grasped the meaning and was awed by that power of analogy.
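For instance, the linear-algebra sense is just a zero inner product, which works in any number of dimensions, not only the 2-D right-angle picture:

```python
def dot(u, v):
    """Inner product; zero means the vectors are orthogonal."""
    return sum(a * b for a, b in zip(u, v))

# Orthogonality generalises past 2-D right angles: these 4-D vectors are
# orthogonal even though "right angle" has no everyday picture here.
u = (1, 1, 1, 1)
v = (1, -1, 1, -1)
assert dot(u, v) == 0
```

Two "orthogonal" design concerns are analogous: varying one contributes nothing along the other's direction, which is the independence reading used in the thread.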
What does it mean for something to be at a right angle to something else?
There's a Euclidean geometric answer to that question, but it's hardly the only correct one.
When people use it to mean that they're speaking of two issues that have a range of independent possibilities, it's not wrong to invoke linearly independent bases.
I don't like to use SQL engines because I don't understand how they work; I never really know if my query will be O(1), O(log n), O(n), etc., or what kind of algorithm will optimize it.
Who really does understand how a SQL engine works? Don't you usually need to understand how something works before you start using it? Which SQL analyst or DB architect really knows about the internals of a SQL engine? Do they know about basic data structures? Advanced data structures? Backtracking?
That's why I tend to systematically avoid using a SQL engine unless the data schema is very, very simple, and instead manage and filter the data case by case in code. SQL is good for archiving and storing data, and works as an intermediary, but I don't think it should drive how a piece of software works. Databases can be very complex, and unfortunately, since developers like to make things complicated, it becomes hairy.
I think SQL was designed when RAM was scarce and expensive, so to speed up data access it had to be properly indexed by a database engine. I really wonder who, today, has data that cannot fit in RAM, apart from the big actors.
I tend to advocate for simple designs and avoid complexity as much as I can, so I might be biased, but many languages already offer things like sets, maps, multimaps, etc. Tailoring data structures might yield good results too.
You're not scared, you're just too lazy to learn the tools of your trade.
Databases are not very complex and use pretty much only textbook data structures and algorithms. Understanding how they process a given query and how a query will probably perform/scale (even without EXPLAIN ANALYZE) is not hard to learn. You do need to learn it (at some point; you don't for small data, which is most). But it's far from difficult.
> That's why I tend to avoid systematically using a SQL engine unless the data schema is very very simple, and manage and filter the data case by case in code.
And that's the mentality that gives us webshops where applying a simple filter results in a couple of seconds of load time and hundreds of MB of RAM used per request, server side.
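A toy illustration of the difference (`sqlite3`, names mine): filtering in the database ships only the matching rows to the application, while "filtering in code" materialises every row first and then throws most of them away.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO products (price) VALUES (?)",
                 [(float(i),) for i in range(100_000)])

# Filtering in SQL: only the answer crosses into the application.
cheap_sql = conn.execute(
    "SELECT COUNT(*) FROM products WHERE price < 10").fetchone()[0]

# "Filter in code": pull all 100,000 rows into Python, then discard 99,990.
all_rows = conn.execute("SELECT price FROM products").fetchall()
cheap_app = sum(1 for (price,) in all_rows if price < 10)

assert cheap_sql == cheap_app == 10
```

Both give the same answer; the second pays for transferring and holding the full table per request, which is where the hundreds of MB go.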
>I don't like to use SQL engine because I don't understand how they work, I never really know if my query will be O(1), O(log(n)), O(n), etc, or what kind of algorithm will optimize my query.
Unless you're generating totally dynamic queries that's a moot point.
You can always try it and measure it -- just like you know, you would profile a program in any programming language. And you can trivially have the database show you the query plan as well.
Do you also not use APIs because you don't know a priori if a call is O(1) or O(N) or O(Nlog(N)) etc?
>I think SQL was designed when RAM was scarce and expensive, so to speed up data access, it has to be properly indexed with a database engine. I really wonder who, today, have data that cannot fit in RAM, apart from big actors.
That's really orthogonal.
Speed and indexes still matter today with big data (or plain "tens of thousands of web users" loads), where we often have to denormalize or use indexed non-sql stores just to get more speed for the huge data we still need to be able to query fast.
Besides, something indexed will be faster whether they are in disk or in RAM compared to something in the same storage that's not indexed.
So unless we're coding something trivial, server side we still want all the speed we can get from our data, rather than just keeping it in the plain simple structures RAM provides.
You wouldn't use a linked list as opposed to a hash table just because your data "fit in RAM". Even in RAM, ~O(1) vs ~O(N) matters [1].
SQL was invented and caught on because companies had tried, and were burned by, no-SQL stores with incompatible storage formats, lax reliability guarantees, no interoperability between client programs, and no all-encompassing logical abstraction (like relational algebra) - just ad-hoc reinventions of the square wheel, and so on. Ancient issues the wave of Mongo fanboys brought back all over again.
[1] unless the linked list is so tiny as to fit in cache and avoid the hash table indirection, but I digress
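The in-RAM point above is easy to demonstrate with the standard library (a sketch; absolute numbers vary by machine, only the ratio matters):

```python
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)        # a hash table under the hood
needle = n - 1               # worst case for the list: it scans to the end

# Both structures live entirely in RAM; only the lookup algorithm differs.
list_time = timeit.timeit(lambda: needle in as_list, number=100)  # O(n) scan
set_time = timeit.timeit(lambda: needle in as_set, number=100)    # ~O(1) hash

print(f"list scan: {list_time:.4f}s  hash lookup: {set_time:.6f}s")
assert set_time < list_time
```

This is the same asymmetry a database index gives you: "it fits in RAM" removes the disk penalty, not the algorithmic one.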
dmoreno|7 years ago
[1] https://github.com/Serverboards/exosql
pipu|7 years ago
https://www.youtube.com/playlist?list=PLSE8ODhjZXjYutVzTeAds...
logicallee|7 years ago
Request to any video filter expert
------------------------------------
https://www.youtube.com/watch?v=Z_cX3bzkExE&t=2m14s
wgjordan|7 years ago
There is: it's a feature in MySQL called Index Hints [1].
[1] https://dev.mysql.com/doc/refman/8.0/en/index-hints.html
Just my 2c! :-)
vram22|7 years ago
Well done.
bitmapbrother|7 years ago
https://www.youtube.com/watch?v=Jib2AmRb_rk
blackrock|7 years ago
It doesn't even compute. It doesn't even make any sense, in how they use it in relation to the topic.
Are the issues at right angles of one another? No.
Are the issues statistically independent of one another? Perhaps.
I suggest using a more appropriate descriptive word for the situation.
You folks should read the urban meaning of orthogonal, to understand why people roll their eyes at you when you use the term inappropriately.
https://www.urbandictionary.com/define.php?term=orthogonal
Just another friendly PSA.
ternaryoperator|7 years ago
[1] https://quoteinvestigator.com/2012/10/31/st-pauls-cathedral/
abiox|7 years ago
> You folks should read the urban meaning of orthogonal
nope, nope, nope. that site's a hive of scum and villainy, and a massive number of entries are just random nonsense.
i'd rather go to wiktionary[0], which includes:
"Of two or more problems or subjects, independent of or irrelevant to each other."
[0] https://en.wiktionary.org/wiki/orthogonal
Izkata|7 years ago
If that mattered at all, then we'd have stopped using other remapped words first, like "tree".
jokoon|7 years ago
Databases still scare me.