# AI-driven custom testing
AI-powered custom testing leverages the capabilities of large models to enable users to describe use cases using natural language, with the underlying implementation utilizing Minium to automatically execute these use cases. Its main advantages are as follows:
- Users do not need to write code; describing the steps in natural language is enough, which lowers the barrier to entry.
- When a task executes successfully, a Minium script of the execution process is generated automatically. The user can upload this script as a Minium use case and then re-run it quickly.
Please read the usage instructions carefully before use:
- The AI custom testing feature is currently in internal testing within the scope of ****. Please refer to the help page, scan the QR code to join the official Cloud Test WeChat group, and contact the group administrator to have your needs assessed and to obtain access to the feature. If you encounter a problem, check the frequently asked questions below first; if the problem persists, contact the group owner for feedback, and issues will be followed up one by one.
- Currently, AI operations support clicks, simple input, swipes, function_call, and large-model assertions. If you need additional capabilities, contact the group owner for evaluation.
# Quick start
After gaining access, you can start AI custom testing tasks as outlined below:
# 1. Create a new AI test case
On the "AI Use Cases" page, create a new test case. In the pop-up window, fill in the name, priority, and description of the test case.
Among them, the use case describes in natural language, providing as accurate and detailed a description as possible of the process followed during this AI exploration.

# Tips for filling in the use case description
When filling in the use case description, describe it from the perspective of human vision (a screenshot of the current screen is given to the large model). Taking the Sichuan Airlines Weixin Mini Program as an example, a reference description is as follows:
After opening the Mini Program, perform the following steps:
1. On the home page, tap "Chengdu" under the departure city to enter the change-departure page.
2. On the change-departure page, search for "XIC" in the navigation bar, select the first result, "Xichang", and return to the ticket purchase page.
3. On the ticket purchase page, tap "Beijing" under the destination to enter the change-destination page.
4. On the change-destination page, search for "NKG" in the navigation bar, select the first result, "Nanjing", and return to the ticket purchase page.
5. On the ticket purchase page, tap "Search Flights" to enter the flight selection page. If an announcement pop-up appears along the way, close it.
6. If the flight selection page is reached, the task succeeds; if it cannot be reached, the task fails.
Here are some useful tips:
- When describing a swipe, specify whether it is an up-and-down swipe or a left-and-right swipe, so that the large model can make the correct judgment.
- If you need to swipe to the bottom of a page, you can state directly that the page should be swiped to the bottom, which speeds up execution.
# Special Operations
In addition, the AI custom testing feature supports the following special operations:
# 1. minium_func_call
minium_func_call enables users to call their own functions from within the description, similar to the function_call capability of large models.
Special notes for using the function_call capability:
- The called functions must be uploaded to Cloud Test.
- The first argument of the function must be mini, as shown in the example below. The minitest.MiniTest instance is passed into the function as mini, making it easy to use Minium's capabilities. Note that even if the function does not need mini, its first argument still needs to be mini.
Usage examples are as follows:
Suppose the user has uploaded test_config.py and config/test_config.py with the Minium use case to Cloud Test. Both files have the same content; they exist only to demonstrate how to call functions in different directories.
The content of test_config.py is simple: a sample function and a sample class (again, remember that the first argument must be mini). A sketch is shown below, followed by the invocation patterns for each case.
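The actual uploaded file is not reproduced in this document; the following is a minimal sketch of what test_config.py might contain, with illustrative field names, consistent with the mini-first-argument rule above:

```python
# test_config.py -- illustrative sketch only; the real uploaded file is not
# shown in this document, so the returned fields are assumptions.

def get_config(mini):
    # `mini` receives the minitest.MiniTest instance, even if it is unused here.
    # Return the [departure] and [destination] information read by the use case.
    return {"departure": "Beijing", "destination": "Shanghai"}


class TestConfig:
    @staticmethod
    def get_config(mini):
        # Same sample data, exposed through a class, to match the
        # test_config.TestConfig.get_config invocation pattern.
        return {"departure": "Beijing", "destination": "Shanghai"}
```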
- Call the get_config function in test_config.py:
  Read the [departure] and [destination] information from minium_func_call("test_config.get_config")
- Call the get_config function in the TestConfig class in test_config.py:
  Read the [departure] and [destination] information from minium_func_call("test_config.TestConfig.get_config")
- Call the get_config function in config/test_config.py:
  Read the [departure] and [destination] information from minium_func_call("config.test_config.get_config")
- Call the get_config function in the TestConfig class in config/test_config.py:
  Read the [departure] and [destination] information from minium_func_call("config.test_config.TestConfig.get_config")
> For the called function names, file names, and file paths, please use English characters and avoid special symbols, to prevent import failures.
# 2. llm_assert
If you want to use assertions in the description, you can use llm_assert. Note that if an llm_assert fails, the current test case is terminated.
For example, a user can write the following description:
1. Read the [departure] and [destination] information from minium_func_call("test_config.get_config")
2. Change the departure city to the [departure] read in step 1
3. Change the destination to the [destination]
4. llm_assert("The departure city was successfully switched to [departure] and the destination to [destination]")
In the above description, the large model identifies the actual [departure] and [destination] values on its own, substitutes them (here, Beijing and Shanghai), and generates the llm_assert code:
self.op_llm_assert('''The departure city was successfully switched to Beijing and the destination to Shanghai''')
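For context, here is a hedged sketch of where such an assertion might sit inside the generated Minium script; the class name and preceding steps are hypothetical placeholders, and only the op_llm_assert call is taken from the output above:

```python
import minium

# Hypothetical shape of a generated test case; only the op_llm_assert line
# reflects output actually shown in this document.
class GeneratedSwitchCitiesCase(minium.MiniTest):
    def test_switch_cities(self):
        # ... generated click/input steps that switch the cities would go here ...
        self.op_llm_assert('''The departure city was successfully switched to Beijing and the destination to Shanghai''')
```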
# 2. Create a new AI test plan
After the AI test case is created, go to the "Test Plan" page and create a new AI test plan. Enter the plan name and select the AI use case(s) to execute.
As with Minium, Cloud Test executes the use cases in the order in which they are checked. Please note that AI test cases are executed sequentially in DDT (data-driven testing) mode; a conceptual sketch follows below.
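The exact structure Cloud Test generates is not shown in this document; as a conceptual sketch, sequential DDT-style execution is commonly expressed in Python with the ddt package, where each selected use case becomes one data item run in order:

```python
from ddt import ddt, data
import minium

# Conceptual sketch only: how DDT-style sequential execution of the checked
# AI use cases might look. The case names here are hypothetical.
@ddt
class AITestPlan(minium.MiniTest):
    @data("change_departure_case", "change_destination_case")
    def test_ai_case(self, case_name):
        # Each data item is executed in the order it was checked in the plan.
        print(f"executing AI use case: {case_name}")
```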

# 3. Submit a test task
AI custom test tasks are executed by the Minium driver, so they are fundamentally still Minium-type tasks.
After creating the AI test plan, click the "New Task" button on the "Test Tasks" page.
In the pop-up window, select the Minium test type, and for the test plan, choose the AI test plan just created.
Note that every AI test plan has a tag on its right side indicating that it is an AI-related task.

# 4. View test reports
After the task is completed, the execution results can be viewed on the test report page. For example, the description above executes as follows:
Test results are generally divided into test success and test failure.
- Test success: the AI executed the task as described. Special note: this represents the AI's own judgment that the execution succeeded; users are encouraged to verify it themselves against the test report screenshots and the generated Minium code.
- Test failure: in general, there are several situations:
  - Failure returned per the task description: for example, the task description says, "If you can't find 'Guangzhou', return failure and end the test." If 'Guangzhou' is not found during exploration, the test returns failure and ends, and the generated code contains assert False, error_msg.
  - AI exploration failure: based on the task description, the AI repeatedly failed to complete the task and returned a failed result. In such cases, the error log for the use case's execution results typically includes a message such as "AI exploration has failed consecutively x times, resulting in the termination of the test."
  - AI exploration timeout: the AI is still exploring when the time allotted to the task runs out; the test is terminated and the result is marked as a failure.
The code from a successful test can be downloaded directly, uploaded to Cloud Test as a Minium use case, and then run directly as a Minium test task.

# Frequently asked questions
# 1. Why does AI exploration take a long time, while the downloaded Minium use case executes quickly?
An AI task requires multiple rounds of interaction with the large model, and each step takes a considerable amount of time. The generated Minium use case, by contrast, simply replays the recorded steps without any further interaction with the large model, so it executes much faster.
# 2. What should be done if the execution of an AI task does not meet expectations?
It is suggested to first review the test report to identify where the AI went wrong, and then adjust the task description accordingly. If problems persist, join the official Cloud Test group and contact the group owner for feedback.
# Need help?
If you have any suggestions or needs, go to the help page, scan the QR code to join the official Cloud Test WeCom group, and contact the group owner with your feedback.