For my uses it's great that it has both test suite mode and individual invocation mode. I use it to execute a test suite of HTTP requests against a service in CI.
I'm not a super big fan of the configuration language: the blocks aren't intuitive, and I found the documentation lacking on which assertions are supported.
Overall the tool has been great, and has been extremely valuable.
I started using interface testing when working on POCs, and found it helps with LLM-assisted development. Tests are written to directly exercise the HTTP endpoints, which allows the implementation to stay fluid and evolve as the project evolves.
I also found the separation of testing very helpful, and it further enforces the separation between interface and implementation. Before hurl, my tests were written in the test framework of the language the service is written in. The hurl-based tests really help to enforce the "client" perspective. There is no backdoor data access or anything, just strict separation between interface, tests and implementation :)
Maintainer here, thanks for the feedback. 6-7 years ago, when we started working on Hurl, we began with a JSON and then a YAML file format. We gradually convinced ourselves to write a new file format, and I completely understand that it might feel weird. We tried (maybe not successfully!) to have something simple for the simple case...
I'm really interested in issues with the documentation: it can always be improved, and any issue report is welcome!
Yeah, love Hurl. We started using it back in 2023-09.
We had a test suite in Runscope, and I hated that changes weren't version controlled. It took a little grunt work, but I converted them to Hurl (where were you, AI?) and got rid of Runscope.
Now we can see who made what change when and why. It's great.
So, myself and many folks I know have taken to writing tests in the form of ".http" files that can be executed by IDE extensions in VS Code/IDEA.
Those basically take the form:
POST http://localhost:8080/api/foo
Content-Type: application/json

{ "some": "body" }
And then we have a 1-to-1 mapping of "expected.json" outputs for integration tests.
We use a bespoke bash script to run these .http files with cURL, then compare the outputs with jq, log success/failure to the console, and write "actual.json".
Can I use HURL in a similar way? Essentially an IDE-runnable example HTTP request that references a JSON file as the expected output?
And then run HURL over a directory of these files?
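You can: a rough sketch of what the Hurl equivalent might look like, reusing your example request (the assert path and expected value here are invented; whether Hurl covers your exact expected.json workflow is worth checking against the docs):

```hurl
# Request, written almost exactly like the .http form
POST http://localhost:8080/api/foo
Content-Type: application/json
{ "some": "body" }

# Expected response: status line, then asserts on the body
HTTP 200
[Asserts]
jsonpath "$.some" == "body"
```

A directory of these can then be run as a suite with something like `hurl --test *.hurl`, which reports per-file success/failure.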
Hurl is awesome. A while back I ported a small web service from Python to Rust. Having rigorous tests of the public API is amazing; a language-independent integration test! I was able to swap it out with no changes to the public API or website.
Worth mentioning that using Hurl in Rust specifically gives you a nice bonus feature: integration with cargo test tooling. Since Hurl is written in Rust, you can hook into hurl-the-library and reuse your .hurl files directly in your test suite. Demo: https://github.com/perrygeo/axum-hurl-test
I must say, the sample section[1] does an excellent job of making a case for the tool, especially to people who are inclined to make a snap judgement about the usefulness of the tool within the first 5 minutes (I'm sometimes guilty of this).
I took a lot of inspiration from this project when designing my own HTTP testing tool[0]. We needed to be able to run hundreds of tests quickly, and in parallel. If that is something you need and you like Hurl, then you might like Nap also.
yep, I've played with Hurl and find it nice, but recently I've been leaning into the .http stuff more. IntelliJ has it built in, there's the plugin you linked, and for CLI I've used httpYac. No vendor lock-in, and it's really easy to share with copy & paste or source control.
I think the idea is nice, but I am struggling for why I should use it. I write using Django, which has plenty of hooks for testing within the framework. Why switch to a tool which is blind to my backend and is going to create more work to keep in sync? At minimum, I lose the ability to easily drop into my debugger to inspect why a result went wrong.
There is probably something to be said for keeping a hard boundary between the backend and testing code, but this would require more effort to create and maintain. I would still need to run the native test suite, so reaching out to an external tool feels a little weird. Unless it was just to ensure an API was fully generic enough for people to run their own clients against it.
> Why switch to a tool which is blind to my backend and is going to create more work to keep in sync? At minimum, I lose the ability to easily drop into my debugger to inspect why a result went wrong.
I don't use hurl but I've used other tools to write language agnostic API tests (and I'm currently working on a new one) so here's what I like about these kinds of tests:
- they're blind to the implementation, and that's actually a pro in my opinion. It makes sure you don't rely on internals, you just get the input and the output
- they can easily serve as documentation because they're language agnostic and relatively easy to share. They're great for sharing between teams in addition to or instead of an OpenAPI spec
- they actually test a contract, and can be reused in case of a migration. I've worked on a huge migration of a public API from Perl to Go and we wanted to keep relatively the same contracts (since the API was public). So we wrote tests for the existing Perl API as a non-regression harness, and could keep the exact same tests for the Go API since they were independent from the language. Keeping the same tests gave us greater confidence than if we had to rewrite tests and it was easy to add more during the double-run/A-B test period
- as a developer, writing these forces you to switch context and become a consumer of the API you just wrote, I've found it easier to write good quality tests with this method
It's just an alternative to Postman and similar, so you don't have to start a whole damn Electron window just to test a few HTTP requests. It sits somewhere between a curl script and Postman, so it hits the right spot for many.
We used Hurl to go from a Ktor web server to a Spring Boot rewrite (Java/Kotlin stack). It was a breeze to have a kind of specification test suite independent of the server stack, and it helped us a lot in the transition.
Another benefit: we built a Docker image for production and wanted something light and not tied to the implementation for integration tests.
There is no obligation on you to use it, especially if you have better tooling for the tasks.
For my team's needs, I see benefits in a self-contained tool that doesn't require any extra modules to be installed or a venv-like environment to be activated (a great barrier when ensuring others can use it too). Not to mention it runs fast.
Testing headers is particularly nice, so you can test the configuration of webservers and LBs/CDNs.
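For illustration, header checks in a Hurl file look roughly like this (host and expected values are made up for the example):

```hurl
GET https://example.com/
HTTP 200
[Asserts]
# Check caching behaviour set by the LB/CDN
header "Cache-Control" contains "max-age"
# Check a security header is present at all
header "Strict-Transport-Security" exists
```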
This looks interesting. Longtime user of the Vscode-restclient, but have been moving over to httpyac lately for the scripting and cli use. Will take a look to see if hurl is a good fit.
One annoying thing I've found in testing these tools is that a standard hasn't emerged for using the results of one request as input for another in the syntax of `.http` files. These three tools for instance have three different ways of doing it:
* hurl uses `[Captures]`[1]
* Vscode-restclient does it by referencing request names in a variable declaration (like `@token = {{loginAPI.response.body.token}}`)[2]
* httpyac uses an `@ref` syntax[3]
From a quick round of testing it seems like using the syntax for one might break the other tools.
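To make the comparison concrete, hurl's `[Captures]` chaining looks roughly like this (endpoint and JSON paths invented for the example):

```hurl
# First request: log in and capture the token from the response
POST http://localhost:8080/api/login
{ "user": "bob", "password": "secret" }
HTTP 200
[Captures]
token: jsonpath "$.token"

# Second request: reuse the captured token as a variable
GET http://localhost:8080/api/private
Authorization: Bearer {{token}}
HTTP 200
```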
Guilty of having created yet another format for HTTP clients! To "mitigate" this issue, you can use `hurlfmt` (distributed alongside `hurl`), which allows you to export a Hurl file to JSON. You could then go from this JSON to another format... It's not magic, but it can help if you're moving from Hurl to something else.
With nice editor integration (especially emacs), hurl is a good postman replacement.
Kinda niche, but I wrapped libhurl to make it really easy to make an AWS Lambda availability monitor out of a hurl file https://gitlab.com/manithree/hurl_lambda
You should probably be looking at the Cargo.toml file(s) (for direct dependencies at least) instead of the lock file as the lock file will include dependencies used for dev/testing.
The one thing I never understood about the Hurl format is why the response status code assertion happens at the request section and not under the `[Asserts]` section. I wonder what the rationale behind that is.
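For what it's worth, as I understand the docs you can move the status check into `[Asserts]` by using a wildcard status line (worth double-checking against your Hurl version):

```hurl
GET http://localhost:8080/health
# HTTP * accepts any status on the status line...
HTTP *
[Asserts]
# ...so the status can be asserted explicitly here instead
status == 200
```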
Hurl has been great for testing in my RAD templating web server project. Like dm03514 says itt, 'The hurl-based tests really help to enforce the "client" perspective.' It's packaged for 3 application environments, including a Docker image (×2 archs, ×3 OSes), and with Hurl it's easy to ensure the tests pass at the client level in all three environments.
It would be nice to have fancy-regex. Today I tried to write a regex to match a case like `<link href="/assets/reset.css\?hash=(.*)" integrity="\1" rel="stylesheet">`, but the regex crate (and thus Hurl asserts) can't do backreferences, so I guess I'll just live without checking that those two substrings match.
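One possible workaround, sketched from the captures docs (assuming the page is deterministic between two fetches, and with a hypothetical URL and hash character class): capture the hash with a regex in a first request, then assert it appears in the integrity attribute in a second.

```hurl
# First fetch: capture the hash value from the href
GET http://localhost:8080/page
HTTP 200
[Captures]
hash: regex "hash=([0-9a-f]+)"

# Second fetch: assert the same value appears in the integrity attribute
GET http://localhost:8080/page
HTTP 200
[Asserts]
body contains "integrity=\"{{hash}}\""
```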
I wish there was some way to test streamed updates / SSE. Basically, open a connection and wait, then run some other http requests, then assert the accumulated stream from the original connection. https://github.com/Orange-OpenSource/hurl/discussions/2636
Very interesting tool; I've done something quite similar by implementing a CLI-mode interpreter for VS Code Rest-client (https://marketplace.visualstudio.com/items?itemName=humao.re...) files, with support for executing test code (JavaScript) against the results of HTTP operations.
The idea was to have a tool which could run .http files in batch mode, and also to execute the selected set of http operations concurrently.
Where do you see hurl in the next 2 years?
[1] https://github.com/Orange-OpenSource/hurl?tab=readme-ov-file...
[0] https://naprun.dev
https://marketplace.visualstudio.com/items?itemName=humao.re...
Which is a banger VS Code extension for all sorts of http xyz testing.
[1]: https://hurl.dev/docs/capturing-response.html
[2]: https://github.com/Huachao/vscode-restclient
[3]: https://httpyac.github.io/guide/metaData.html#ref-and-forcer...
Conway's Law in action, ladies and gentlemen.
It gives you full control of constructing requests and assertions because test scenarios may include arbitrary JavaScript.
https://github.com/Orange-OpenSource/hurl/blob/master/Cargo....
The deficiencies in hurl's client state management are not easy to fix.
What I'd like is full client state control with better variable management and use.
For my last project I used Python to write the tests, which appeared to work well initially. Dunno how well it will hold up under ongoing maintenance.