Robot Framework: exception handling
Solution 1
Robot has several keywords for dealing with errors, such as Run Keyword And Ignore Error, which can be used to run another keyword that might fail. From the documentation:
This keyword returns two values, so that the first is either string PASS or FAIL, depending on the status of the executed keyword. The second value is either the return value of the keyword or the received error message. See Run Keyword And Return Status if you are only interested in the execution status.
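As an illustration of that two-value contract (this is a sketch of the behavior, not the actual BuiltIn implementation), the same pattern is easy to express in Python:

```python
def run_keyword_and_ignore_error(keyword, *args):
    """Mimic BuiltIn's Run Keyword And Ignore Error.

    Returns ('PASS', return_value) if the keyword succeeds,
    or ('FAIL', error_message) if it raises.
    """
    try:
        return "PASS", keyword(*args)
    except Exception as err:
        return "FAIL", str(err)
```

For example, `run_keyword_and_ignore_error(int, "5")` gives `('PASS', 5)`, while passing `"abc"` gives a `('FAIL', ...)` tuple carrying the error message.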
That being said, it might be easier to write a Python-based keyword that calls your Login keyword, since that makes it much easier to deal with multiple exception types.
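A minimal sketch of such a Python keyword library, using the ExecutionError and OutputError exception types from the question (both hypothetical here, as is the placeholder login implementation):

```python
class ExecutionError(Exception):
    """Hypothetical: the test could not run at all."""

class OutputError(Exception):
    """Hypothetical: the test ran but produced wrong output."""

def login(host, user, password):
    # Placeholder for the real Login keyword; assumed to raise
    # ExecutionError when the host is unreachable.
    raise ExecutionError("could not reach " + host)

def safe_login(host, user, password):
    """Robot keyword: call Login and translate exceptions into a status.

    Returns 'PASS', 'EXECUTION_FAIL', or 'OUTPUT_FAIL' so the test case
    can decide whether to abort immediately or keep checking output.
    """
    try:
        login(host, user, password)
    except ExecutionError:
        return "EXECUTION_FAIL"
    except OutputError:
        return "OUTPUT_FAIL"
    return "PASS"
```

Dropped into a library file, `Safe Login` becomes a normal keyword whose return value the test case can branch on, which neatly separates "environment is broken, abort" from "one output check failed, continue".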
Solution 2
You can use something like this
${err_msg}=    Run Keyword And Expect Error    *    <Your keyword>
Should Not Be Empty    ${err_msg}
There are a couple of different variations you could try for the first statement above:
- Run Keyword And Continue On Failure
- Run Keyword And Expect Error
- Run Keyword And Ignore Error

Options for the second statement above are:
- Should Be Equal As Strings
- Should Contain
- Should Match
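One detail worth knowing: by default, Run Keyword And Expect Error treats its expected-error argument as a glob pattern, which is why `*` above matches any error. As a rough illustration (again a sketch, not the real BuiltIn code), the behavior looks like this in Python:

```python
import fnmatch

def run_keyword_and_expect_error(pattern, keyword, *args):
    """Mimic BuiltIn's Run Keyword And Expect Error.

    The keyword must raise, and the error message must match the
    glob pattern (e.g. '*'); the message is returned on success.
    """
    try:
        keyword(*args)
    except Exception as err:
        message = str(err)
        if not fnmatch.fnmatch(message, pattern):
            raise AssertionError(
                "Expected error matching '%s', got '%s'" % (pattern, message))
        return message
    raise AssertionError("Expected an error but the keyword passed")
```

So a narrower pattern such as `No such file*` lets you assert on the error's content in the same step, instead of following up with a separate Should Contain.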
You can explore more in the Robot Framework BuiltIn library documentation.
ewok
Software engineer in the Greater Boston Area. Primary areas of expertise include Java, Python, web-dev, and general OOP, though I have dabbled in many other technologies.
Updated on June 04, 2022

Comments
- ewok, over 1 year ago:
Is it possible to handle exceptions from the test case? I have 2 kinds of failure I want to track: a test failed to run, and a test ran but received the wrong output. If I need to raise an exception to fail my test, how can I distinguish between the two failure types? So say I have the following:
*** Test Cases ***
Case 1
    Login    1.2.3.4    user    pass
    Check Log For    this log line

If I can't log in, then the Login keyword would raise an ExecutionError. If the log file doesn't exist, I would also get an ExecutionError. But if the log file does exist and the line isn't in the log, I should get an OutputError. I may want to immediately fail the test on an ExecutionError, since it means my test did not run and there is some issue that needs to be fixed in the environment or with the test case. But on an OutputError, I may want to continue the test; it may only refer to a single piece of output, and the test may be valuable to continue so it can check the rest of the output. How can this be done?