Yes, it always runs on a real database, without mocks, as integration tests where I can test each CTE's auxiliary statement separately; each one acts as a step of our SQL pipeline. The initial data is inserted into the tables, and then I exercise assertions on the SQL queries or views against the real database. In theory it would be easy to rearrange the concatenation of the CTE strings above so that a step could be tested as a unit, by materializing the previous auxiliary step as a temporary table, but I've never needed that since the integration tests are simpler and work really well for me. The essential part of my approach is to treat the SQL code as concatenated string pieces, and to compose just the right slice of them to exercise a test against that slice on the real database, which is sound because the composed query is always valid SQL.

There is another pattern too. When I implemented RLS-based multi-tenancy with RBAC support, which required a relatively large SQL codebase and needed to be battle-tested because it was critical, I split a big part of the SQL code into SQL functions instead of views, so I could test the code as units or integrations (using something similar to dependency injection, but for the data, to switch tenant/RBAC contexts): with SQL functions I can pass different PostgreSQL configuration parameters to test different tenants, for example.
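The second pattern can be sketched like this (the table, function name, and `app.tenant_id` parameter are illustrative assumptions, not the original code). A SQL function reads the tenant from a custom configuration parameter rather than taking it as an argument, so a test can "inject" a different tenant context per session with `set_config`:

```sql
-- Hypothetical sketch: tenant context injected via a custom
-- PostgreSQL configuration parameter instead of a function argument.
CREATE FUNCTION visible_orders()
RETURNS SETOF orders
LANGUAGE sql STABLE
AS $$
    SELECT *
    FROM orders
    WHERE tenant_id = current_setting('app.tenant_id')::uuid;
$$;

-- In a test, switch tenants before asserting (third argument makes
-- the setting local to the current transaction):
SELECT set_config('app.tenant_id',
                  '11111111-1111-1111-1111-111111111111', true);
SELECT count(*) FROM visible_orders();
```

This is what makes functions easier to battle-test than views here: the same function body can be exercised under many tenant/RBAC contexts just by changing session settings between assertions.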
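To make the first idea concrete, here's a minimal sketch of the string-slicing technique; all names (`CTE_CLEANED`, `compose`, the example schema) are hypothetical, not from the original codebase. Each CTE body lives in its own string, and a test composes only the prefix of the pipeline it wants to assert on:

```python
# Hypothetical sketch: SQL pipeline kept as concatenated string pieces.
# Each CTE is a standalone string; tests compose only the slice they need.

CTE_CLEANED = """cleaned AS (
    SELECT id, lower(email) AS email
    FROM raw_users
)"""

CTE_DEDUPED = """deduped AS (
    SELECT DISTINCT ON (email) id, email
    FROM cleaned
    ORDER BY email, id
)"""

FINAL_SELECT = "SELECT id, email FROM deduped"


def compose(ctes, final_select):
    """Concatenate a slice of CTEs into one runnable query string."""
    return "WITH " + ",\n".join(ctes) + "\n" + final_select


# The full pipeline, as shipped:
full_query = compose([CTE_CLEANED, CTE_DEDUPED], FINAL_SELECT)

# A test slice: stop after the first step and select from it directly,
# so an integration test can run just this against the real database.
step_one_query = compose([CTE_CLEANED], "SELECT id, email FROM cleaned")
```

Because every slice is itself a complete `WITH ... SELECT`, each composed string can be executed and asserted on independently.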