This feels like it defeats the purpose of writing tests though. For me, writing tests very much has to do with validating my assumptions... it helps me gain confidence in my code. Now I'd have to trust these automated tests? My confidence just wouldn't be there.
Oftentimes, finagling with the REPL is merely about getting the syntax right ("did I use the right number of parens?") rather than formulating assumptions. In such cases, using this tool doesn't invalidate the TDD workflow, since your assumptions don't change.
The essential complexity of writing unit tests is in formulating your assumptions ahead of time. As you correctly point out, no tool can ever solve this. However, an accidental complexity is correctly expressing your assumptions in code. Tools such as a REPL and this library can definitely help with that.
It is tough. On the one hand, I agree with you. On the other hand, writing tests seems both incredibly inefficient and a poor fit for the intended purpose. You cannot have a test suite which ‘proves’ the implementation is correct. A test suite is really like Swiss cheese - there are so many holes that incorrect implementation code can get through.
Now, bear in mind that I say this as a passionate tester and verification enthusiast. I don’t have a better alternative, so I keep writing tests. And yes, they do catch bugs. I’m not saying they have no value. I’m saying they always let some bugs through, i.e., they are not complete.
And that makes the cost of them very depressing. Not only the cost to write them, but the cost to continuously run them.
This is more a matter of “did I accidentally break something?” The correctness of the tests is not known, but you will know which tests have changed. Then you can go and review that code, see why it changed, and decide whether the new behavior is correct. This gives you confidence that a change didn’t break anything, but not that things weren’t broken from the start.
To me, testing in the REPL is also about validating my assumptions.
REPL-driven development is basically test-driven development with shorter feedback cycles during active development, but the downside is you need to make an extra effort to go back and preserve the important tests for posterity. This might well reduce some of that friction.
This is neat - it reminds me of Hypothesis, which does randomised property-based testing; if it finds a failing edge case, it saves it to a database so that it's always used in future test runs.
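Roughly, the mechanism looks like this. The sketch below is stdlib-only and entirely made up for illustration (Hypothesis's real API and on-disk database are richer - it keeps shrunk failures in a `.hypothesis` directory):

```python
import json
import random
from pathlib import Path

# Hypothetical failure database; Hypothesis keeps something analogous on disk.
FAILURE_DB = Path("failing_cases.json")

def load_known_failures():
    return json.loads(FAILURE_DB.read_text()) if FAILURE_DB.exists() else []

def save_failure(case):
    cases = load_known_failures()
    if case not in cases:
        cases.append(case)
        FAILURE_DB.write_text(json.dumps(cases))

def check_property(prop, gen, runs=100):
    # Replay previously failing inputs first, then try fresh random ones.
    for case in load_known_failures():
        if not prop(case):
            return case
    for _ in range(runs):
        case = gen()
        if not prop(case):
            save_failure(case)  # remembered for all future runs
            return case
    return None  # no counterexample found

# abs() really is non-negative and idempotent, so this finds nothing.
counterexample = check_property(
    lambda n: abs(n) >= 0 and abs(abs(n)) == abs(n),
    lambda: random.randint(-10**6, 10**6),
)
```

Once a failing case lands in the database, every future run starts by replaying it, which is the part that makes flaky edge cases stick.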
That said, I suspect I'd find a similar limitation with this as with Hypothesis. Most of the tests I write aren't for pure functions, or they require a fair amount of setup or complex assertions. It's possible to write test helpers that do all of this, and I do, but too much of that means tests that are complex enough to need their own tests, so it's important to strike a balance.
This is probably specific to my normal work (Python, web services), but I suspect it applies to a lot of testing done elsewhere.
> Most of the tests I write aren't for pure functions
In response to this, I recommend the "Functional Core, Imperative Shell"[0] talk/pattern. The idea is to extract your business logic as pure functions independent of the data access layer. This pattern allowed me to test large portions of a code base using property tests. This works really well in most cases and gives me much more confidence in the product that will be deployed.
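A minimal Python sketch of the pattern (all names here are invented for illustration, not from any real code base):

```python
# Functional core: pure business logic, trivially testable in isolation
# (including with property-based tests - no mocks or fixtures needed).
def apply_discount(subtotal_cents: int, loyalty_years: int) -> int:
    """Return the discounted total; no I/O, no hidden state."""
    rate = min(loyalty_years, 10) * 0.01  # 1% per year, capped at 10%
    return round(subtotal_cents * (1 - rate))

# Imperative shell: a thin layer that talks to the outside world and
# delegates every decision to the core.
def checkout(order_id: str, db, payment_gateway) -> None:
    order = db.load_order(order_id)  # I/O in...
    total = apply_discount(order.subtotal_cents, order.customer.loyalty_years)
    payment_gateway.charge(order.customer, total)  # ...I/O out
```

The shell stays so thin that it barely needs unit tests of its own; an integration test or two covers it, while the interesting logic gets hammered by cheap, fast property tests.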
This brings to mind a conversation that I've had several times over the years. It goes something like this:
> Wouldn't it be cool if we could have code that would test itself?
> Yeah, if only there were some kind of automated way to encapsulate the requirements of classes and methods that could ensure that they behaved within some known boundaries...?
We've had automated unit tests for quite some time, but they are more often called compiler checks. Want your "automated unit tests" to be more comprehensive? Improve your type system and your compiler!
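Even in Python, gradual typing can absorb checks that would otherwise be hand-written tests. A hedged sketch (types and names invented for illustration):

```python
from dataclasses import dataclass
from enum import Enum

class Currency(Enum):
    USD = "USD"
    EUR = "EUR"

@dataclass(frozen=True)
class Money:
    amount_cents: int
    currency: Currency

    def __add__(self, other: "Money") -> "Money":
        # The invariant lives in the type: mixing currencies is rejected
        # here once, so no unit test has to probe every call site for it.
        if self.currency is not other.currency:
            raise ValueError("currency mismatch")
        return Money(self.amount_cents + other.amount_cents, self.currency)
```

A type checker such as mypy flags `Money(100, "USD")` or `total + 5` before the code ever runs; the runtime check catches what slips past it.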
With all that said, I do not want to be dismissive of peoples' projects. This is fun and neat.
Type checking and compile-time errors won't check whether the program you wrote is correct, only that it's internally consistent. That is an entirely different thing from checking for correctness.
You do need a layer of tests that actually try to prove that given some input your program actually does what it's supposed to do. That is hard to automate.
I have yet to see one of these "wonder" tools actually work in practice. That said, they make great snake oil for testing-automation teams that often end up doing manual testing because the promises of "easy" automation were never fulfilled.
Pretty good. Essentially just a way to save function executions and replay them later against previous evaluation results.
I would guess that Jupyter notebooks could easily be made into such a thing as well. Also doctests with editor integration might work like this already?
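Doctests do capture exactly this REPL-to-test flow; a minimal example:

```python
def mean(xs):
    """Average of a non-empty sequence.

    The examples below are copy-pasted from a REPL session;
    `python -m doctest` replays them and fails if the printed
    results ever drift.

    >>> mean([1, 2, 3])
    2.0
    >>> mean([10])
    10.0
    """
    return sum(xs) / len(xs)
```

Running `python -m doctest file.py` re-executes those recorded sessions, so the exploratory REPL work is preserved as a regression test almost for free.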
[0]: https://www.destroyallsoftware.com/screencasts/catalog/funct...
sidmitra | 4 years ago
Joe Armstrong (of Erlang fame) talks about this a bit here: https://youtu.be/TTM_b7EJg5E?t=778
I've pointed to a specific timestamp, but there might be more details somewhere else in that talk.
lgleason | 4 years ago
Too many people looking for silver bullets.