Let's say you are ready to launch your virtual agent! You have tested it and you are confident that everything is working for now - good job!
Just be aware that once you deploy your chatbot, people will start using it, and inevitably you will notice some things that you want to change or tweak. Soon you will need to re-deploy a more up-to-date version of your bot.
When you do so, having an automatic testing mechanism in place comes in very handy. Why? Because when you push a new version to production, you want an improved version of your bot, but you certainly don't want to break anything that was previously working.
There are two main reasons to have performance testing in place when you launch your virtual agent:
1. Regression Test Scope: This can be used to test a set of phrases that are not part of the training set in your virtual agent. The idea is that you would re-run this test over time, after each improvement phase, to check the health status of your workspace and make sure that it is still behaving in a consistent manner.
2. Blind Test Scope: Simply analyse a set of testing phrases that are not part of the training set in your virtual assistant. This can be a one-off task requested by stakeholders who want to make sure that your bot meets certain standards.
The main characteristic of this test is that the testing set is formed of phrases that are not part of the training set: in other words, your bot has not been trained on these phrases yet! (Don't cheat! Don't include sentences that are part of your training set :))
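To make this concrete, here is a minimal sketch of what such a test harness could look like. The `classify` callable is a hypothetical stand-in for your bot's intent-detection API (for a real assistant, it would wrap a call to the service's message endpoint); the test set and intent names are made up for illustration.

```python
# Minimal sketch of a regression / blind test harness.
# Assumption: `classify` is any callable that maps a phrase to an intent name.

def run_blind_test(test_set, classify):
    """Score a list of (phrase, expected_intent) pairs.

    The phrases must NOT appear in the bot's training set.
    Returns the overall accuracy and the list of misclassified phrases,
    so you can inspect exactly where the bot went wrong.
    """
    mismatches = []
    for phrase, expected in test_set:
        predicted = classify(phrase)
        if predicted != expected:
            mismatches.append((phrase, expected, predicted))
    accuracy = 1 - len(mismatches) / len(test_set)
    return accuracy, mismatches


if __name__ == "__main__":
    # Toy classifier standing in for the real bot, just to show the flow.
    def toy_classify(phrase):
        return "greeting" if "hello" in phrase.lower() else "other"

    test_set = [
        ("Hello there!", "greeting"),
        ("What time do you open?", "opening_hours"),
    ]
    accuracy, mismatches = run_blind_test(test_set, toy_classify)
    print(f"accuracy = {accuracy:.0%}")
    for phrase, expected, predicted in mismatches:
        print(f"  missed: {phrase!r} (expected {expected}, got {predicted})")
```

For a regression test, you would save the accuracy of each run and compare it against the previous release: a drop means the new version broke something that used to work.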
If you have a virtual agent built with Watson Assistant, I have created a Jupyter Notebook on this topic that helps you run a performance test and guides you step by step: from connecting to your bot, to analysing the results and fixing the issues.
I have talked about the life cycle of a chatbot at an event in London. Take a look at the video presentation.