I am less than lukewarm on that sort of "Natural-Language-Like" DSL that seems to mostly come out of the QA space.
When I worked together with a QA team on automated testing, they used Cucumber, which uses Gherkin as an English-but-not-really DSL. The idea, obviously, is that someone who doesn't know how to code can still describe and read test scenarios. But what ended up happening was that I spent a lot of time writing code to support different Gherkin expressions, and then had to take over writing the Gherkin expressions as well, because even though Gherkin -looks- like English, it still behaves like a programming language. In the end, it would've been much easier for me to just write the tests in a programming language from the start, instead of using an abstraction layer that turned out to serve no purpose.
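To make the glue-code point concrete, here is a minimal sketch, in plain Python, of what a Gherkin-style step matcher boils down to. The step phrasings and regexes are invented for illustration; Cucumber's real API differs, but the mechanism is the same: every wording needs a regex someone wrote code for.

```python
import re

# Registry mapping step patterns to Python callables. Every new Gherkin
# phrasing needs a new entry here, which is the hidden maintenance cost.
STEPS = []

def step(pattern):
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register

@step(r'the user logs in as "(\w+)"')
def log_in(context, username):
    context["user"] = username

@step(r"the cart contains (\d+) items?")
def fill_cart(context, count):
    context["cart"] = int(count)

def run_step(context, line):
    for pattern, func in STEPS:
        match = pattern.fullmatch(line)
        if match:
            func(context, *match.groups())
            return
    # An unmatched phrasing fails outright: the "english" only works
    # if it matches a regex exactly, like any other syntax.
    raise LookupError(f"No step definition for: {line!r}")

ctx = {}
run_step(ctx, 'the user logs in as "alice"')
run_step(ctx, "the cart contains 3 items")
```

Writing `the user logs in as 'bob'` with single quotes would already fail to match, which is exactly the sense in which Gherkin behaves like a programming language rather than English.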
What I took away from that frustrating experience is that a lot of people think that learning to code is learning the syntax of a programming language, and that if you could just do away with that pesky syntax, everybody could code. But, obviously, the syntax is just the initial barrier; actually coding is crystallizing and formulating intent - something programming languages are much better at than natural languages.
But for some reason, this low/no-code idea never seems to die down, especially in some spaces, and I have the feeling I'm just missing something here.
Yeah, I share the sentiment. What it boils down to is that you need to write the test twice. The supposed feature that keywords, once written, can be reused is mostly a joke. There will be a few heavily-used keywords and a really long tail of one-off keywords, which is a pain.
I see a single advantage to these frameworks, though. They force you to write a test in a specific way that looks like a real-world scenario, with clear delineations between setup stages, assertions, and so on. Not that this is impossible to achieve with a regular programming language, obviously.
From my POV, the idea of Natural-Language-Like tests is that they are theoretically more inclusive, can avoid duplication, and can create a more abstract description.
Granted, I've had middling success convincing non-technical people to write or even look at tests, but I think it's irresponsible not to try to create this bridge, depending on the audience of the code. Very few projects are well documented, and this syntax can act as documentation. In my current work, I am using BDD style to create high-level flows where screens are automatically captured, which is useful both for verification and for generated documentation, especially when versioned with the code.
It can avoid duplication (and its pitfalls) because in unit-style tests there is a text description alongside the code, which can drift away from being accurate. If the test is the description, that can't happen.
It creates a more abstract model because the tests are forced into natural language, which tends to make them less implementation-dependent, so the implementation can be swapped out. I've had good success with this approach, seamlessly migrating a complex workflow from one backend to another, and the tests were kind of priceless.
I also find keyword and flow re-use to be quite high, and it's great how this approach means the more tests you write, the less low-level, implementation-specific code and scenarios you need to write.
I will never understand why Cucumber exists. As you say, it just introduces unnecessary complexity.
If you can't code, you shouldn't be writing automated tests. Most of the "benefits" of Cucumber can be gained by using a good reporting framework. I like to use Serenity as I love its reports.
QA should be writing the underlying code along with the BDD-style tests, using something like a Page Object Model design, with BDD wrappers on top mostly so that non-technical analysts like PMs and BAs can understand coverage and add test cases. If you're doing it for anyone else's benefit, it falls flat. If it's a no-code solution for QA, then it becomes a drag and you should hire an SDET to manage it (and train the other staff to write the underlying code too). And if BAs/PMs are not involved in test design, it's a pointless wrapper, I agree. Personally I'm not a fan; I spend my time on better reporting so people can see what's tested, and if they want an additional case I can just add it myself.
I think it all boils down to execution and the company’s mindset and skillset.
I was also a Quality Engineer for a time, and one of our bigger projects was to implement precisely this kind of Natural Language system, so that PMs or whoever wanted to “live in the future” could write their tests directly in the User Story.
What we did was create a large set of commonly used “phrases” for interacting with systems, be it UI or APIs, for which we coded the actual test framework; we documented them for anyone wanting to write User Stories (in the Rally platform, similar to JIRA), wrote instructions on what to do if a phrase didn't exist for their case, etc. It took us 6 focused months, but we delivered, and people were happy. Of course, we are talking about technical PMs: PMs who met with designers and set standard guidelines, PMs who required API documentation and read it, and the like; just not “coding much” PMs.
Granted, this was a company where we already wrote automation tests right alongside the developers' own coding timeframe, so a feature got both code and tests at around the same time, 99% of the time, making it easier for people to take advantage of features instead of shallowly complaining.
So, yes, I concur with you: your case would probably be better if everyone were a developer, and I don't really see the need for those kinds of DSLs. But if you reeeeeeally need to have other people intervene, and want them to focus on their end of the bargain (not coding), I think these DSLs have their place: assuming the whole company has the stomach for it.
I presume low-code/no-code comes from the simplest notion of a programming language: a medium between human and machine. The easier it is to read and write, the "better" it is. Pitching it to non-technical or even semi-technical people is then as simple as teaching them to write code at a surface level.
However, what people don't deeply understand is that any language is a set of symbols and rules requiring a certain amount of cognitive space and a certain time to absorb. The more ambiguous the symbols are, the closer the growth in the number of rules gets to exponential. This is a fundamental point that plays out in real life but is rarely understood, surprisingly even by programmers.
I believe a proper low-code/no-code system should have natural language processing capability and understand context, not just provide syntactical aliases.
I work in an environment with dozens of products, very significant complexity, and legislative requirements. Cucumber works very well for us because we use it for specifying outcomes. There's way too much Gherkin out there that tries to use it for describing a journey (I click this, then this, and this...).
It works really well for us to get a shared understanding between a non-technical business, devs and QA.
That said, we then hand the Gherkin over to the SDETs, who fettle it according to our house style (which is strict and must pass enforced linting), and it does take gardening as the product matures. It's our SDETs who write the test code: we wouldn't dream of letting a non-technical person actually drive the test logic itself.
I’ve spent a fair number of years doing corporate cucumber training.
At every client I emphatically insist that if they are not using Gherkin as a “Rosetta stone” for the business folks, they're wasting their time with Cucumber. Don't go to the trouble of writing Gherkin if you're not using it as living documentation. Save yourself loads of trouble and just use an ordinary test framework.
What you describe may apply more to writing code than to reading it. Of course, knowing syntax is not sufficient for programming, but if you just want to review existing code, (unfamiliar) syntax may be an obstacle to non-programmer domain experts.
I've read something along these lines about the choice of OCaml at Jane Street, where they argued that their investor types are able to proof-read new code despite not being programmers, because the language "reads like English", similar to SQL or COBOL.
RF is not in any way only a web automation tool. Sure, you can use Selenium with it to automate web UI testing. Or you can use Playwright. Or you can skip web UI testing altogether and do mobile UI testing, or REST, or SAP, or pretty much any other automation you wish.
What RF provides is by far the best reporting, test tagging, test filtering, and test data injection compared to any other test automation framework. And you can automate pretty much anything using it.
Well, I bet that if they managed to be extremely unproductive with Robot Framework, they'd be equally unproductive with Python ;)
Robot Framework builds on top of Python and usually wraps Python libraries - such as Selenium or Playwright - in a different API (which IMHO is usually easier to use), and provides better reporting out of the box, so I can't really see how the tool made people unproductive here...
Depending on what RF is being used for, Selenium may be under the hood, wrapped in multiple layers of keywords. In other cases, where testing is not web/REST based (protocol testing, HW testing), what executes the logic is up to the devs/testers.
Used it extensively for a while. All in all, I think it's a nice, mature framework. The Gherkin stuff is completely optional; I'm not fond of it and never used it. Using Python, you can create a custom DSL tailored to your needs, so that you can write tests in tabular form. All your tests will need to adhere to this rigid tabular structure; whether that makes sense depends entirely on the circumstances. Compared to just using pytest, I think the main advantage is that tests can also be written and extended by people who are not familiar with Python. So if you have a separate QA team with people who are not programmers, this might make sense. I also liked that the rigid tabular structure made all tests very much alike, whereas with pytest you often end up with a hodge-podge of different ways to test things and people re-inventing the wheel all the time.
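For comparison, some of that tabular uniformity can be had in plain Python (or via pytest's `parametrize`) with a table-driven test. A minimal sketch, where `normalize` and the rows are stand-ins for whatever is actually under test:

```python
# Stand-in for the function under test.
def normalize(s):
    return " ".join(s.split()).lower()

# Each row is one test case: (description, input, expected).
# The rigid table keeps every test shaped the same way, which is
# the property the rigid RF structure provides.
CASES = [
    ("collapses whitespace", "  Hello   World ", "hello world"),
    ("lowercases",           "ALREADY lower",    "already lower"),
    ("handles empty input",  "",                 ""),
]

failures = [name for name, text, expected in CASES
            if normalize(text) != expected]
assert not failures, failures
```

With pytest you would typically express the same table as `@pytest.mark.parametrize("text,expected", CASES)`, which also reports each row as a separate test.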
I was also at Nokia, and we were pitched to use RF. At the time we (well, I) made the decision not to use it - mainly because our actual testing tool was implemented in Ruby and using the remote library wasn't really possible for our scenario, and I also didn't like the test data format back then (we were pitched a friggin' EXCEL or HTML tables)...
Things have changed a lot since those times, though, and I'm a somewhat happy camper with Robot in its current state. It's not without its warts, but it has definitely come a long way since the "Nokia days".
Well, I don't know how things were back then, but Robot Framework has seen many changes recently, so it's possible things have improved since then (and your past experiences may not match the current state of affairs).
I'd say that it does have some quirks, but if well used it can be a good tool to have in your toolbox.
Disclaimer: I'm currently working on the Robot Framework Language Server extension for VSCode (at Robocorp, which uses Robot Framework for RPA; although Robocorp supports both Python and Robot Framework well in the platform, there's high traction on the Robot Framework side and many users enjoy it versus using just Python).
I remember this from a gig I did a while ago. The customer wanted to establish their own automated QA and thought Robot Framework was a good fit because the language "looks easy". The goal was to enable IT personnel without a technical background, with no programming experience, to write tests.
I was sceptical from the start and was eventually proven right (I did not enjoy this; it created more not-so-enjoyable work for me).
The test code was flaky to the point of being useless and contained a lot of redundancy.
IMHO it is really hard to write good tests without having a bit of a "programmer's mindset".
I develop tooling around Robot Framework as my main job. Personally, I feel the biggest tradeoff of RF in the Python world is that it competes against pytest, which is my personal favourite testing framework out of any I have tried.
In JavaScript projects I don't have such a favourite, Jest and Mocha have been okayish when I have tried them but didn't really spark joy. In multilanguage projects (like the Playwright based robotframework-browser which I have been developing) I have enjoyed writing integration tests with Robot Framework.
I don't have extensive experience with other comparable QA tools (like Cucumber), but I would say Robot Framework's main advantage and disadvantage are both the fact that it stays so close to Python. (E.g. you can easily do in-line Python expressions, but if you had a team working with Robot Framework where some members lack Python competence, those might become very confusing pretty fast.) Also, for writing advanced libraries, Python is pretty much mandatory. But I guess it's pretty rare for a DSL to support writing advanced extensions in the same DSL.
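For reference, the in-line Python expressions mentioned above look roughly like this, assuming RF 3.2 or newer (a small sketch using only BuiltIn keywords; `${{ ... }}` evaluates a Python expression in place):

```robotframework
*** Test Cases ***
Inline Python Example
    ${total} =    Evaluate    sum(range(5))
    Should Be Equal As Integers    ${total}    10
    Log    ${{ [x * x for x in range(4)] }}
```

Handy for someone who knows Python; potentially baffling syntax soup for a teammate who doesn't, which is the tradeoff being described.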
I have used RF for many years now at different companies. We use it for automated integration tests, it does not matter in which language your application is written. I do admit that it requires some programming skills, especially for the keywords. Choose the keywords wisely and ensure that the QA people know how to use them. We also use it as part of regression testing and even for acceptance tests with the customer.
From QA perspective Robot Framework is by far the best tool there is.
First of all, it's not a web testing tool, a mobile testing tool, or a REST API testing tool. Or any other specific testing tool. You can automate pretty much any testing activity using RF, including web, mobile and REST, but it is in no way limited to those.
The killer feature from a quality assurance point of view is test tagging, combined with a really powerful way of selecting what to include in a test run and getting good reports that are supported by many, many tools. Another killer feature is dead-simple test instrumentation.
Regarding tagging. Let's take this example:
*** Settings ***
Force Tags         feature-xyz
Suite Setup        Initialize Tests

*** Variables ***
${test_environment}    dev

*** Test Cases ***
Foobar
    [Tags]    jira-id-001    jira-test-id-001    smoke
    No Operation

Lorem
    [Tags]    jira-id-002
    No Operation

Ipsum
    [Tags]    jira-id-003    bug-in-jira-001
    No Operation

*** Keywords ***
Initialize Tests
    Connect To Environment    ${test_environment}
I can select any combination of those tests to be included in a given test run, specify which environment to connect to, and send results automatically to Jira. This allows me to run only "smoke" tests against the "Pull Request" environment whenever a PR is opened. It also allows me to automatically run all tests every hour against the "Dev" environment and submit results to Jira.
robot -i smoke -v test_environment:pr .
Would only run the one test tagged smoke, and Connect To Environment would get the value "pr".
robot -i feature-xyz .
Would run all tests with the tag feature-xyz (in the example file, that's all of them) against the dev environment. Then I can just `curl` the XML result file from the run to Jira (given it has Xray installed), and Jira will automatically update all the Jira tickets mentioned in the tags with the test results. If there is no Jira Test tagged in an RF test, Jira will automatically create a new Jira Test for me.
And in order to display test statistics in Jenkins, just install RF plugin in Jenkins and instruct your job to read the output XML and you get nice statistics, reporting etc.
That way, when you need to know what your test coverage is, just open Jira and see for yourself.
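The output XML is also easy to post-process yourself without any plugin. A sketch of pulling per-tag pass/fail counts in Python; note the XML fragment below is modeled from memory on the statistics block of recent RF versions, so check it against a real output.xml before relying on it:

```python
import xml.etree.ElementTree as ET

# A fragment shaped like Robot Framework's output.xml statistics block
# (layout assumed from RF 4.x; verify against your own output.xml).
SAMPLE = """
<robot>
  <statistics>
    <total><stat pass="12" fail="1" skip="0">All Tests</stat></total>
    <tag>
      <stat pass="3" fail="0" skip="0">smoke</stat>
      <stat pass="9" fail="1" skip="0">feature-xyz</stat>
    </tag>
  </statistics>
</robot>
"""

def tag_stats(xml_text):
    # Map each tag name to its (passed, failed) counts.
    root = ET.fromstring(xml_text)
    return {stat.text: (int(stat.get("pass")), int(stat.get("fail")))
            for stat in root.findall("./statistics/tag/stat")}

stats = tag_stats(SAMPLE)
```

From there, pushing the numbers to Jira, a dashboard, or any other CI is ordinary scripting.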
As someone working in QA automation, I have to admit I hate RF. IMHO pytest is far better, and everything you wrote above can be done there as well. It will be a bit more code, that's true, but overall I find pytest the better tool for automated testing.
> And in order to display test statistics in Jenkins, just install RF plugin in Jenkins and instruct your job to read the output XML and you get nice statistics, reporting etc.
Sadly, that same thing can also be a negative... There is really no proper way to expose the results if you are not using Jenkins as your main CI. Of course, one can use JUnit reports or whip up one's own listener that generates some sort of test report for the given CI platform...
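A minimal sketch of such a listener: Robot Framework's listener API v2 calls methods like `end_test` with the test name and an attribute dict, so a report for any CI can be assembled from those callbacks. The class below is exercised directly for illustration; in real use RF would invoke it via `robot --listener ResultCollector.py`, and the attribute names follow the documented v2 API:

```python
class ResultCollector:
    # Listener API version 2: RF passes (name, attrs) to end_test.
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self.results = []

    def end_test(self, name, attrs):
        # attrs["status"] is "PASS" or "FAIL" in the v2 listener API.
        self.results.append((name, attrs["status"]))

    def close(self):
        # Called once at the end of the run; build whatever report
        # your CI platform needs here.
        failed = [n for n, s in self.results if s != "PASS"]
        return {"total": len(self.results), "failed": failed}

# Simulate what RF would do during a run.
listener = ResultCollector()
listener.end_test("Foobar", {"status": "PASS"})
listener.end_test("Lorem", {"status": "FAIL"})
summary = listener.close()
```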
*** Test Cases ***
Write my test using a DSL
    Read examples of the DSL
    Write a basic test and see that it works
    Write a complex test and find something missing
    Learn quirks about the DSL
    Implement missing things in the language that the DSL was implemented in
    Write part of the test in said language
    Write part of the test in DSL matching regexes
    Run the test
    And it works 9 out of 10 times
    Install flaky extension
    Add flaky tag to test
    And it is green
    [Teardown]    Reflect about my life as a software developer
I test embedded devices with RF. You write most of your test drivers in python, RF simply orchestrates the tests and collects the results into a report, which works well in simple QA scenarios.
Robot lacks features you'd need to support larger-scale embedded testing, such as with a device farm, where you need the concept of tests leasing resources. If you build a test stand with 10 testable devices and need to run a suite of 100 tests, it's ideal to run the tests in parallel as devices become available. Some test setups might require more than one instance of a testable device, or instances of more than one type of device (for example, to ensure that a V2 product can still interoperate with a V3 product). Robot doesn't really support this.
While a feature to support this could be made extremely general (resource classes, instances, and leases), the RF developers have been uninterested in incorporating this aspect of testing into their framework. The result is that everyone who does even mid-scale embedded device testing has to write their own.
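To illustrate what "tests leasing resources" means, here is a minimal sketch in plain Python. The device names and the interop scenario are invented for illustration; a real device farm would also need timeouts, health checks, and cross-process coordination:

```python
import queue
import threading

class DevicePool:
    # One blocking queue per device type: tests lease a device of the
    # kind they need, block until one is free, and return it afterwards.
    def __init__(self, devices):
        self._pools = {}
        for kind, name in devices:
            self._pools.setdefault(kind, queue.Queue()).put(name)

    def lease(self, kind):
        return self._pools[kind].get()

    def release(self, kind, name):
        self._pools[kind].put(name)

# Hypothetical stand: two V2 units, one V3 unit.
pool = DevicePool([("v2", "dev-a"), ("v2", "dev-b"), ("v3", "dev-c")])

def interop_test(results):
    # An interop test leases one V2 and one V3 device at once.
    old = pool.lease("v2")
    new = pool.lease("v3")
    try:
        results.append((old, new))  # the actual test would run here
    finally:
        pool.release("v2", old)
        pool.release("v3", new)

# Three tests run in parallel and serialize on the scarce V3 device.
results = []
threads = [threading.Thread(target=interop_test, args=(results,))
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The generalization to resource classes, instances, and leases is straightforward, which makes it all the more frustrating when a framework offers no hook for it.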
Another complaint about Robot Framework is that when you have an expensive setup, like a 4-minute flashing operation, you don't want to repeat it more than necessary. So in a file, you might make the expensive setup a suite-level setup, followed by the test cases that depend on it. When this file grows, you might want to refactor it into a multi-file test suite in its own sub-directory. However, these tests then no longer share a suite scope (because Robot's "suite scope" is actually a "file scope" for legacy reasons), so in practice you may need to tolerate 3000-line files to avoid long setups.
Have you found a solution for mid-to-large-scale embedded test setups? Could you share some insight into frameworks or other infrastructure used for embedded testing? I was previously responsible for firmware testing of small-production-volume devices in aerospace, but have since moved to a high-volume product with multiple active hardware revisions and no test infrastructure currently in place. It's a different beast to test now while trying to balance schedule and feature development.
When I started writing tests for my current project, I looked at what was available and decided to forgo existing testing frameworks. I haven't been disappointed.
Writing code to call a bunch of test functions and then generate a report is really not hard, and having control over the whole thing is nice.
Robot Framework is a general purpose automation framework.
You can use it to automate
- Web Applications (with Selenium or Playwright)
- Rest APIs
- Desktop Applications (Java Swing, WPF, SAP, ..)
- All kinds of hardware
It offers control structures like most programming languages (IF/ELSE, FOR, WHILE, TRY/EXCEPT).
The extensions for VS Code or PyCharm/IntelliJ offer code completion, step-wise debugging, and linting.
It is very hackable and can be extended using Python.
It has a great API, allowing you to connect it to all kind of other tools.
I guess a lot has happened since some of the people here last used it.
But if somebody is focused on a single programming language (like JavaScript) and is not open to learning Python or the RF syntax, I'd recommend looking into a test framework native to that language.
Does this framework stream sensor data? No. Does this framework control actuators? No. Does this framework represent an actuated device’s configuration space? No. Does this framework perform collision detection? No. Does this framework keep an obstacle map? No. Does this framework have any planners? No.
Just stop already. Stop calling things a robot when they are not.
They're referring to "Robotic Process Automation" (RPA).
I agree the name is a misnomer, but it's now part of the Enterprise Lexicon, which is filled with jargon.
I run an RPA startup, and as much as I hate the term (e.g. https://news.ycombinator.com/item?id=30755118), once something becomes part of common language patterns, you can't unshift it.
I hate the modern usage of 'AI' as ML, but I can't change the new definition.
Fun to see this is still around! Used this in 2016, at a big bank. I did not enjoy it, but that can largely be attributed to the way management wanted us to use it. The tests had to be written in our native language (not English) which caused a lot of weird mixed language use behind the scenes. Then a manager would sign off on the test report, which we would deliver to him on paper.
I've been using RF for almost 8 years now. I first came in touch with it with my C/C++ developer mind: "...what an overhead of work...".
But I soon realized I was wrong and started to notice the power of writing tests in Robot: how it made it easy to understand what I did in code in the past - say 6 months or a year before - and also to understand more clearly how other teams' features worked, letting us fix their tests and features many times, because we wrote tests more clearly using Gherkin in RF. And new team members could ramp up MUCH more easily inside our environments, most of the time just by reading tests.
I was even able to introduce RF at some other businesses after that, bringing along that culture of writing clear tests with RF. Some of those places had a heavy culture of writing precise top-down requirements. There, we integrated RF documentation and test case procedures into our process, generating the final documentation from RF test case information.
Since then I've been helping to disseminate and evangelize RF use. I've seen system analysts embrace RF, writing the first version of test cases for a requirement in a matter of minutes. RF let us move at a much faster pace in maintaining some very complex environments.
We could grab requirements and write down cases that could later be implemented, edited, removed or adjusted. Whatever the test environment, the thing under test is a 'living' environment and needs catering from developers. RF has helped greatly in maintaining such places, producing great reports and creating a live ecosystem that is easy for users.
Is it perfect? Surely not.
But it's much better than all the other tools available.
It's open source, easy to extend, and allows the use of natural language. And when well used, it lets us greatly improve our QA and DEV lives :D
I loved this tool :) A few years ago, as a junior, I joined a global corporation for my first job and started using it with zero knowledge of Python or even of testing at all. But the learning curve was easy and fast, and over time we created more than 3000 E2E and integration tests for our business application, running through Jenkins. The tests were written in a very human-readable format, so anybody on the team could easily understand a test and identify issues. Reporting was a nice (but quite slow) HTML and jQuery page. Also, the Robot editor was the recommended tool for writing tests.
Later, when I was starting a startup with a few junior developers who lacked expertise, we used this tool in combination with Selenium to crawl data. It was fast and easy.
Always glad to see this tool is still alive. Good job Pekka Klarck!
I wrote my bachelor's thesis about Robot Framework. The biggest issue I had with Robot Framework was the lack of IDE support. Initially you were supposed to work on Robot Framework in RIDE (a dedicated IDE) or use an Eclipse extension. I believe neither is supported anymore. I like Vim, so I wrote a package which helps users developing with Robot Framework. I also created some pull requests for Vim ALE to support rflint.
We also used Robot Framework at work for automated testing; it worked like a charm.
Robot had its peak maybe a couple of years ago. Although my company still uses it extensively, there are other automation frameworks that are proving to be better, like, say, Cucumber, Cypress, etc.
But why is Robot Framework being mentioned and getting attention now? It's not as if it's something new or doing anything different from the existing frameworks.
Cucumber does not really provide anything RF doesn't, except for Ruby houses. And Cypress is a web automation tool, RF being a generic one. One could write a Cypress library for RF, but as there already are Selenium and Playwright libraries for RF, I don't think it will happen anytime soon, at least.
The code consists of a series of invocations of functions ("keywords" in RF terms) with parameters. The syntax is tabular.
Reading the intro reminded me of Forth. Although I've only given it a brief read a couple of times (RF was used by QA at my past job).
[+] [-] valand|3 years ago|reply
I presume low code/no code comes from the simplest notion of programming language, which is a medium between human and machine. And the easier it is to read and write means the "better" it is. Therefore pitching it to a non-technical or even semi-technical people is as simple as teaching people to write code at surface level.
However what people don't understand deeply is that any language is a set of symbols and rules requiring a certain amount of cognitive space and a certain time to absorb it. The more ambiguous the symbols are, the closer the growth of the number of the rules to exponential. This is a fundamental point that happens in real-life but is rarely understood, moreover to programmers, surprisingly.
I believe a proper low-code/no-code should have natural language processing capability and understanding context, not just providing syntactical aliases.
[+] [-] billyruffian|3 years ago|reply
It works really well for us to get a shared understanding between a non-technical business, devs and QA.
That said, we then hand it the Gherkin over to the SDETs who fettle it according to our house style (which is strict and must pass enforced linting) and it does take gardening as the product matures. It's our SDETs who write the test code: we wouldn't dream of letting a non-technical person actually drive the test logic itself.
ryanmarsh | 3 years ago
At every client I emphatically insist that if they are not using Gherkin as a “Rosetta stone” for the business folks then they’re wasting their time with cucumber. Don’t go to the trouble of writing gherkin if you’re not using it as living documentation. Save yourself loads of trouble and just use an ordinary test framework.
leethargo | 3 years ago
I've read something along these lines about the choice of OCaml at Jane Street, where they argued that their investor-types are able to proofread new code, despite not being programmers, because the language "reads like English", similar to SQL or COBOL.
mkl95 | 3 years ago
Honestly if you know your Python you are better off with Selenium, which is probably what Robot uses under the hood anyway.
hpaavola | 3 years ago
What RF provides is by far the best reporting, test tagging, test filtering, and test data injection compared to any other test automation framework. And you can automate pretty much anything using it.
fabioz | 3 years ago
Robot Framework builds on top of Python and usually wraps Python libraries -- such as Selenium or Playwright -- in a different API (which IMHO is usually easier to use) and provides better reporting out of the box, so I can't really see how the tool made people unproductive here...
captnswing | 3 years ago
Use Playwright!!
ithinkso | 3 years ago
Burn. It. With. Fire.
I was at Nokia and we used Robot; what an absolute pain it was.
rasjani | 3 years ago
Things have changed a lot since those times, though, and I am a somewhat happy camper with Robot in its current state. It's not without its warts, but it has definitely come a long way since the "Nokia days".
pjmlp | 3 years ago
I don't miss Robot at all.
fabioz | 3 years ago
I'd say that it does have some quirks, but if well used it can be a good tool to have in your toolbox.
Disclaimer: I'm currently working on the Robot Framework Language Server extension for VSCode (by Robocorp, which uses Robot Framework for RPA -- although Robocorp supports both Python and Robot Framework well in the platform, there's high traction on the Robot Framework side and many users enjoy it versus using just Python).
majkinetor | 3 years ago
You need devs to write good tests in general.
xylix | 3 years ago
In JavaScript projects I don't have such a favourite, Jest and Mocha have been okayish when I have tried them but didn't really spark joy. In multilanguage projects (like the Playwright based robotframework-browser which I have been developing) I have enjoyed writing integration tests with Robot Framework.
I don't have extensive experience with other comparable QA tools (like Cucumber), but I would say Robot Framework's main advantage and disadvantage are both the fact that it is constantly so close to Python. (E.g. you can easily do in-line Python expressions, but if you had a team working with Robot Framework where some members lacked Python competence, those might become very confusing pretty fast.) Also, for writing advanced libraries, Python usage is pretty much mandatory. But I guess it's pretty rare for a DSL to support writing advanced extensions in the same DSL.
hpaavola | 3 years ago
First of all, it's not a web testing tool, a mobile testing tool, or a REST API testing tool. Or any other specific testing tool. You can automate pretty much any testing activity using RF, including web, mobile and REST, but it is not in any way limited to those.
The killer feature from a quality assurance point of view is test tagging, combined with a really powerful way of selecting what to include in a test run and getting good reports, which are supported by many, many tools. Another killer feature is dead-simple test instrumentation.
Regarding tagging. Let's take this example:
I can select any combination of those tests to be included in a given test run, specify which environment to connect to, and send results automatically to Jira. This allows me to run only "smoke" tests against the "Pull Request" environment whenever a PR is opened. It also allows me to automatically run all tests every hour against the "Dev" environment and submit results to Jira. Like this:
That would only run the one test tagged smoke, and Connect To Environment would get the value "pr". The second would run all tests with the tag feature-xyz (and in the example file that would be all tests) against the dev environment. Then I could just `curl` the XML result file from the run to Jira (given it has XRAY installed) and Jira would automatically update all the Jira tickets mentioned in the tags with the test results. If there is no Jira Test tagged in an RF test, Jira would automatically create a new Jira Test for me. And in order to display test statistics in Jenkins, just install the RF plugin in Jenkins and instruct your job to read the output XML, and you get nice statistics, reporting etc.
That way, when you need to know what your test coverage is, just open Jira and see for yourself.
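The selection mechanics described above boil down to tag-set filtering. A rough Python sketch of the idea (the test names and tags are hypothetical, not the poster's actual suite):

```python
# Rough sketch of tag-based test selection, as with `robot --include <tag>`.
# Test names and tags are invented examples.

tests = {
    "Login works": {"smoke", "feature-xyz"},
    "Password reset": {"feature-xyz"},
    "Report export": {"feature-abc"},
}

def select(tests, include):
    """Return the names of tests whose tag set contains all requested tags."""
    return [name for name, tags in tests.items() if include <= tags]

smoke_run = select(tests, {"smoke"})      # just the smoke-tagged test
xyz_run = select(tests, {"feature-xyz"})  # every test tagged feature-xyz
```

The CI job then just varies the `include` set (and the target environment variable) per trigger: smoke tags on PRs, feature tags on the hourly run.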
rasjani | 3 years ago
Sadly, that same thing can also be a negative. There is really no proper way to expose the results if you are not using Jenkins as your main CI. Of course one can use JUnit reports, or whip up one's own listener that will generate some sort of test report for the given CI platform...
elevation | 3 years ago
Robot lacks features you'd need to support larger-scale embedded testing, such as with a device farm, where you need the concept of tests leasing resources. If you build a test stand with 10 testable devices and need to run a suite of 100 tests, it's ideal to run the tests in parallel as devices become available. Some test setups might require more than one instance of a testable device, or instances of more than one type of device (for example, to ensure that a V2 product can still interoperate with a V3 product). Robot doesn't really support this.
While a feature to support this could be made extremely general (resource classes, instances, and leases), the RF developers have been uninterested in incorporating this aspect of testing into their framework. The result is that everyone who does even mid-scale embedded device testing has to write their own.
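The leasing idea described here can be approximated outside the framework with an ordinary blocking pool. A hedged sketch in plain Python (not an RF feature; device names and counts are invented):

```python
# Sketch of a device-lease pool: each worker blocks until a device is
# free, runs its test against it, and returns the device to the pool.
import queue
import threading

devices = queue.Queue()
for name in ("dev-0", "dev-1", "dev-2"):
    devices.put(name)

results = []
results_lock = threading.Lock()

def run_test(test_id):
    device = devices.get()      # lease: blocks until a device is available
    try:
        with results_lock:
            results.append((test_id, device))   # stand-in for real test work
    finally:
        devices.put(device)     # release the lease for the next test

threads = [threading.Thread(target=run_test, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This covers the single-device case; the "one V2 plus one V3 device" case is exactly where a general resource-class/lease model would be needed, as the parent notes.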
Another complaint about Robot Framework is that when you have an expensive setup, like a 4-minute flashing operation, you don't want to repeat it more than necessary. So in a file, you might make the expensive setup a suite-level setup, followed by the test cases that depend upon it. When this file grows, you might want to refactor it into a multi-file test suite in its own sub-directory. However, these tests then no longer share a suite scope (because Robot's "suite scope" is actually a file scope, for legacy reasons), so in practice you may need to tolerate 3000-line files to avoid long setups.
2rsf | 3 years ago
Have they been asked? I had the same problem and had different solutions built in-house, with different levels of success.
davemp | 3 years ago
Writing code to call a bunch of test functions then generate a report is really not hard and having control over the whole thing is nice.
manykarim | 3 years ago
You can use it to automate:
- Web applications (with Selenium or Playwright)
- REST APIs
- Desktop applications (Java Swing, WPF, SAP, ..)
- All kinds of hardware
It offers control-structures like most programming languages (IF/ELSE/FOR/WHILE/TRY/EXCEPT)
The Extensions for VS Code or PyCharm/IntelliJ offer Code-Completion, step-wise Debugging, Linting.
It is very hackable and can be extended using Python. It has a great API, allowing you to connect it to all kind of other tools.
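That extension story is low-ceremony: a Robot Framework library is essentially a plain Python class whose public methods become keywords. A hedged sketch (the class, methods, and values are invented for illustration):

```python
# A plain Python class like this can be imported by Robot Framework as a
# test library: each public method becomes a keyword ("Connect To Device",
# "Read Temperature"). Names and logic are invented for illustration.

class DeviceLibrary:
    def __init__(self):
        self.connected = False

    def connect_to_device(self, address):
        """Usable as the keyword `Connect To Device` in a .robot file."""
        self.connected = True
        return f"connected to {address}"

    def read_temperature(self):
        """Usable as the keyword `Read Temperature`."""
        if not self.connected:
            raise RuntimeError("not connected")
        return 21.5

# The same class remains directly usable (and unit-testable) from Python:
lib = DeviceLibrary()
status = lib.connect_to_device("10.0.0.7")
temp = lib.read_temperature()
```

Because the library is just Python, the same code can back both RF suites and ordinary pytest-style tests.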
I guess a lot has happened since some of the people here used it.
But I guess, if somebody is focused on using a single programming language (like JavaScript) and is not open to learning Python or the RF syntax, I'd recommend looking into a native test framework for that language.
dbcurtis | 3 years ago
Does this framework stream sensor data? No. Does this framework control actuators? No. Does this framework represent an actuated device’s configuration space? No. Does this framework perform collision detection? No. Does this framework keep an obstacle map? No. Does this framework have any planners? No.
Just stop already. Stop calling things a robot when they are not.
yaseer | 3 years ago
I agree the name is a misnomer, but it's now part of the Enterprise Lexicon, which is filled with jargon.
I run an RPA startup, but as much as I hate the term (e.g. https://news.ycombinator.com/item?id=30755118), once something becomes part of common language patterns, you can't unshift it.
I hate the modern usage of 'AI' as ML, but I can't change the new definition.
hkfilho | 3 years ago
But I soon realized I was wrong and started to notice the power of writing tests in Robot, and how it made it easy to understand what I had done in the code in the past... like 6 months or 1 year before. It also let me understand more clearly how other teams' features worked, and to fix their tests and features many times, because we wrote tests more clearly using Gherkin in RF... and new team members could ramp up MUCH more easily in our environments, most of the time just by reading tests.
I was even able to introduce RF in some other businesses after that, bringing along that culture of writing clear tests with RF. Some of those places had a heavy culture of writing precise top-down requirements. In those places, we integrated RF documentation and test case procedures into our process, generating the final documentation from the RF test case information.
Since then I've been helping to disseminate and evangelize RF use. I've seen system analysts embrace RF, writing the first version of test cases for a requirement in a matter of minutes. RF gave us a much faster pace in maintaining some very complex environments.
We could grab requirements and write down cases that could later be implemented, edited, removed or adjusted. Whatever the test environment is, it is a 'living' environment and needs catering from developers. RF has helped greatly in maintaining such places, producing great reports and creating a live ecosystem that is easy for users.
Is it perfect? Surely not.
But it is much better than all the other tools available. It is open source, easy to extend, and allows the use of natural language. And when used well, it lets us greatly improve our QA and dev lives :D
Agoreddah | 3 years ago
A while back, when I was starting a startup with a few junior developers who lacked the expertise, we used this tool in combination with Selenium to crawl data. It was fast & easy.
Always glad to see this tool is still alive. Good job Pekka Klarck!
dang | 3 years ago
Robot Framework – test automation in Python - https://news.ycombinator.com/item?id=10631074 - Nov 2015 (5 comments)
Cupprum | 3 years ago
We also used Robot Framework at work for automated testing; it worked like a charm.
screwgoth | 3 years ago
But why is Robot Framework being mentioned and getting attention now? It's not like it's something new and doing anything different from the existing frameworks.
hpaavola | 3 years ago