Best Practices

Context variables

  1. Try to name context variables in camelCase. Although not mandatory, following a single naming convention reduces mistakes when referring to them later.
  2. Refer to context variables with the ${} syntax (e.g., ${variableName}) throughout BaseRock whenever you want to read a variable's value or write back to it.
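
For example, a playbook instruction can store a value into a context variable and reuse it later through ${} (the endpoint and variable name here are illustrative, not BaseRock defaults):

    Store the id field of the response in variable userId.
    GET /user/${userId} and verify 200 response code.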

Test case generation

  1. Have a clear vision of what type of tests you expect from BaseRock. It helps to jot down your expectations before working with BaseRock.
  2. Try to always have the following items documented: functional requirements, API details, request-response examples, validation rules, business use cases, and existing test scenarios or test cases (if any). This information makes BaseRock context-rich and helps it generate high-quality test cases. However, none of it is mandatory to achieve your testing goals with BaseRock.
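
    For instance, the request-response documentation you feed BaseRock can be as brief as this (the endpoint and fields are illustrative, echoing the /user/form example later in this page):

      POST /user/form
      Request: { "name": "Asha", "email": "asha@example.com" }
      Response: 200 { "message": "successfully sent" }
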
  3. Write prompts that are specific to your testing goals when generating test cases. You can combine several of the examples below into a single prompt if needed; a combined sketch appears at the end of this list. Examples:
  4. If you are looking for security and I18N test cases along with regular coverage, write a prompt like:

    1- Along with standard coverage, generate security test cases following OWASP rules and also Internationalisation (I18N) test cases for the languages Arabic, French, German, and Hindi.
    2- Do not create positive test cases for security, only negative.
    3- Do not create more than 3 test cases for each language during I18N testing.
    4- Make sure to include tests that carry PII data in the payload.
    

  5. If you have a quantity requirement, specify the number of test cases to generate in the prompt, like this:

    Generate only 1 test case for GET methods.
    Generate not more than 5 edge cases for the DELETE method APIs.
    

  6. You can give negative prompts too:

    Ensure none of the test cases have similar payloads and ensure there is no duplication.
    Make sure test cases do not focus on optional payload fields.
    Don't generate non-functional test cases for POST methods.
    

  7. Avoid vague prompts such as "Create good quality test cases." In such cases, it is better to leave the prompt area empty and just click Generate.
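
A combined prompt that mixes several of the goals above could look like this (a sketch; tune the limits and methods to your own service):

    Along with standard coverage, generate security test cases following OWASP rules.
    Generate only 1 test case for GET methods and not more than 5 edge cases for DELETE method APIs.
    Ensure none of the test cases have similar payloads.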

Playbooks usage

  1. Have a mental diagram or written flow of what is required to run an API before you start automating in BaseRock. For example: authentication, fixed payload values, or generating random values before each run.
  2. Any precondition that is common to all endpoints MUST be written in the service-level playbook, not in the endpoint-level playbook.
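
    For example, an authentication precondition shared by every endpoint could be prompted in the service-level playbook as (the endpoint, credentials, and variable name are illustrative):

      Before every test, POST /auth/login with valid credentials,
      store the returned token in variable authToken, and send
      header 'Authorization: Bearer ${authToken}' with every request.
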
  3. After saving playbooks, always check the AI workflows (also known as AI adaptation) at the endpoint and test case levels to verify the sequence of steps that will execute before, during, and after the test.
  4. When generating playbooks, try to visualize the configurations, validations, and post-test instructions you need, and mention them in the prompt area before clicking Generate Playbooks.
  5. Validation instructions can be prompted as:

    Validate that every successful response contains a message "successfully sent" for all POST endpoints of /user/form router.
    

  6. Looping of instruction steps can be prompted like this; an instruction can even send an external API request at runtime:

    LOOP every 5 seconds up to 30 seconds until it succeeds
    GET /file/download/${fileID} with headers
    --header 'X-QWERT-UUID: 822ccd6c-ee85-47c7-8081-ab89ad896e6c' 
    Verify 200 response code
    Verify response contains data.userid.value
    END LOOP
    

  7. Dynamic values such as random text or the system date can be generated with prompts like these:

    1. Generate a random email ID following this pattern: name_role@gmail.com and store it in variable testEmail.
    2. Store the system date in variable sysDate once test execution is over and pass it on to the Teardown step.
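
    A later step can then reuse these stored variables through the ${} syntax, for example (the endpoint and payload are illustrative):

      POST /report with payload { "email": "${testEmail}", "date": "${sysDate}" }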
    

Test suite and test run

  1. Create and keep multiple versions of run_tests.sh with different configurations, and rename each file accordingly. For example:

    run_fullRegression.sh, run_serviceA_sanity.sh, run_onlyHappyFlows.sh, run_onlyPUTmethods.sh, run_POST_auth_endpoint.sh
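
    A minimal variant can simply set a different configuration and delegate to the base script. The sketch below assumes run_tests.sh reads a TEST_FILTER environment variable; that variable is an assumption, so adapt it to whatever options your generated run_tests.sh actually exposes:

      #!/usr/bin/env bash
      # run_onlyHappyFlows.sh -- a variant of run_tests.sh.
      # TEST_FILTER is a hypothetical knob; substitute the real
      # configuration your generated run_tests.sh reads.
      set -euo pipefail
      export TEST_FILTER="happy_flows"
      ./run_tests.sh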

  2. When playbooks and test cases are generated, always execute a few test cases individually from the UI before running the full suite. See Execute Single Test Case.