That ChatGPT took the internet by storm on November 30th, 2022, would be an understatement. Within a week of its release, it had reached a million users and a staggering 100 million by January 2023. It has avid supporters and harsh detractors alike, but it most certainly presents new opportunities across multiple industries.
Shortly after its release, it caught the attention of DevOps practitioners. It can create convincing-looking code, and its machine-learning capabilities seem to promise continued growth and refinements. With its seemingly endless possibilities, using ChatGPT for automated testing sounded feasible. But is it, and to what extent? Let’s find out.
What is ChatGPT?
If you happen to have missed the buzz around ChatGPT, a brief introduction is in order.
ChatGPT is a Large Language Model (LLM) wrapped in a chatbot interface, developed by OpenAI. The “Chat” in its name reflects that interface, while the GPT acronym (“Generative Pre-trained Transformer”) reflects the LLM at its core.
The GPT-3 family of models behind ChatGPT was trained with some 175 billion parameters. The tool is also distinctly user-friendly, able to make sense of prompts and generate responses swiftly. Together, these assets make it one of the most powerful and accessible AI language models to date.
In the context of DevOps, ChatGPT can also generate code from prompts in multiple programming languages. It’s this capability, along with promising early results, that made users hopeful it could fully automate testing. AI does seem to be the future of QA testing, so the hope was reasonable.
DevOps and MLOps
Indeed, DevOps and MLOps can – and already do – coexist and synergize. As we’ve written before, machine learning is powering the next era of DevOps. MLOps adds scalability, efficiency, and security to the common perks of DevOps.
Their relationship, however, is one of gradual synergy rather than revolution. No LLM, however capable, is likely to transform DevOps overnight. To be sure, ChatGPT can create a complex test automation pipeline, complete with CI/CD steps and bash code. It can produce well-documented code and often makes very accurate assumptions about a prompt’s intentions. But as practitioners dug deeper, the initial expectations started to look overblown.
Using ChatGPT for automated testing in DevOps
With the above in mind, one cannot deny that ChatGPT holds potential for DevOps uses; any tool with such a massive scope of capabilities inevitably would. It is equally vital to note, however, that ChatGPT is not without its quirks. These quirks are often substantial and should be taken into account when exploring the limits of the tool.
As with all emerging technologies, MoversTech CRM suggests embracing it, but cautiously. Much as CRM technologies required years of evolution and polishing to reach their current, established form, they suggest that ChatGPT, too, will need its due time to reach maturity.
The following breakdown of ChatGPT’s perks and quirks in DevOps applications should illustrate why.
On the plus side, ChatGPT does come with an array of capabilities for DevOps uses, including the following:
- Accelerating code debugging. First, a great use case can be made for employing ChatGPT in code debugging. Copy-pasting failing code into it can unearth the exact reasons for failure, or at least provide a useful outside perspective.
- Easing the learning curve. Second, ChatGPT can significantly ease the learning curve for less experienced users or smaller teams, thanks to its knowledge of many languages and technologies. It also helps that its outputs frequently include explanations, and that explanations can be requested when they don’t.
- Adding resilience and security. Finally, ChatGPT can make it easier to write resilient automation scripts. In doing so, it also frequently produces reasonably secure code by default, for instance by reading credentials from GitHub Secrets rather than hardcoding them.
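The secure-by-default pattern mentioned in the last point can be sketched briefly. This is a minimal illustration, not ChatGPT output: the `API_TOKEN` name and `get_api_token` helper are hypothetical, standing in for any credential a CI system (such as GitHub Actions with GitHub Secrets) injects into the environment:

```python
import os

def get_api_token() -> str:
    """Fetch a deployment token from the environment.

    In a CI pipeline, the variable would be populated from a secret
    store (e.g. GitHub Secrets), so the token never appears in the
    automation script or the repository itself.
    """
    token = os.environ.get("API_TOKEN")
    if not token:
        # Fail fast with a clear message instead of running with no credential.
        raise RuntimeError("API_TOKEN is not set; refusing to run")
    return token
```

Scripts written this way stay safe to commit, since the secret lives only in the CI environment.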
LambdaTest’s recent deep dive into this subject should offer additional insights if you need a concrete use case example.
Beyond these perks, however, ChatGPT in its current form comes with quirks, as highlighted above, so using it for automated testing should always be done with due caution. Currently, those quirks include the following:
- Incomplete code. LambdaTest and others note that ChatGPT often generates incomplete or partially written code. Addressing this adds significant workload: understanding the generated code, identifying its gaps and shortcomings, and correcting it.
- Learning gaps. In addition, ChatGPT’s training data has a cutoff, so it is not guaranteed to be up to date with a given framework’s current methods. In such cases, it will unavoidably generate outdated code, which again requires manual review.
- Potentially wrong assumptions. Finally, as highlighted above, ChatGPT is a language model: it understands code structure and can generate accurate code, but it does not understand underlying meaning or intent. Its limited contextual understanding, and the incorrect assumptions that follow from it, add to the user’s workload.
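To make the wrong-assumptions quirk concrete, here is a hypothetical, plausible-looking helper of the kind an LLM might produce, alongside the reviewed version. Both function names are invented for illustration; the point is that only a verifying test exposes the edge-case bug:

```python
def last_lines_buggy(lines, n):
    # Plausible-looking generated code: return the last n lines of a log.
    # Hidden bug: when n == 0, the slice [-0:] equals [0:], which returns
    # the WHOLE list instead of an empty one.
    return lines[-n:]

def last_lines(lines, n):
    # Corrected version after manual review: handle the n == 0 edge case.
    return lines[-n:] if n > 0 else []
```

This is exactly the “trust but verify” workflow in miniature: the generated code looks right, passes a casual read, and fails only under a test the reviewer had to think to write.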
Zhimin Zhan is among its most vocal detractors in this regard, and rightly so, as he explains in his own deep dive into the matter.
Conclusion: ChatGPT is a tool, not a panacea
Weighing the perks against the quirks, one can argue, as LambdaTest does, that ChatGPT has immense potential value for the field. It can indeed often generate accurate, well-documented code, automating repetitive tasks as AI has long promised. It undoubtedly has use cases; accelerating code debugging may be one of the best examples.
However, much of its theoretical potential must clash with the real world and its demands. Sofy’s Grant Ongstad describes the experience as “more like training a junior developer than having a robot assistant,” echoing Zhimin’s concerns.
Using ChatGPT for automated testing is best carried out under the “trust but verify” principle. It should be approached as a valuable addition to the toolkit, not as the automation panacea it first appeared to be.