Arxiv URL: https://arxiv.org/abs/2304.07183
Authors: Kiran Busch, Alexander Rochlitzer, Diana Sola, Henrik Leopold
The paper discusses how prompt engineering can be used to leverage pre-trained language models for business process management (BPM) tasks, and identifies the potential and challenges of prompt engineering for BPM research.
Key Insights & Learnings:
- Pre-trained language models (LMs) can be effectively used for various natural language processing (NLP) tasks, including BPM tasks.
- Fine-tuning LMs for BPM tasks requires large amounts of suitable training data, which is a common issue in BPM practice.
- Prompt engineering can help address the issue of limited downstream data and yield promising results in various NLP tasks.
- Prompt engineering uses natural language task specifications, known as prompts, which are given to the LM at inference time to inform it about the downstream task.
- Prompt engineering has the potential to address a wide variety of NLP-related BPM tasks, reducing both the need for highly specialized, use-case-specific techniques and the need to obtain large training datasets.
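To make the idea concrete, here is a minimal sketch of how such a prompt might be assembled for a hypothetical NLP-related BPM task (extracting activities from a process description). The template, example data, and function names are illustrative assumptions, not taken from the paper; the assembled string is what would be sent to an LM such as GPT-3 at inference time for in-context learning.

```python
# Illustrative sketch (not from the paper): building a few-shot prompt
# for a hypothetical BPM task — extracting activities from a
# natural-language process description.

# Assumed prompt template; one block per labeled example.
TEMPLATE = "Description: {description}\nActivities: {activities}\n"

def build_prompt(examples, new_description):
    """Assemble an in-context-learning prompt: a task instruction,
    labeled examples, then the unlabeled input the LM should complete."""
    parts = ["Extract the activities from each process description.\n"]
    for ex in examples:
        parts.append(TEMPLATE.format(**ex))
    # Leave the answer slot empty so the LM fills it in at inference time.
    parts.append(f"Description: {new_description}\nActivities:")
    return "\n".join(parts)

# Hypothetical labeled example used as in-context demonstration.
examples = [
    {
        "description": "The clerk checks the invoice and then approves the payment.",
        "activities": "check invoice; approve payment",
    },
]

prompt = build_prompt(
    examples, "The manager reviews the request and signs the contract."
)
print(prompt)
```

Because the task is specified entirely in the prompt, no fine-tuning data is needed; swapping the instruction and examples retargets the same LM to a different BPM task.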
Terms Mentioned: natural language processing, business process management, language models, fine-tuning, prompt engineering, downstream data, inference time, task specifications, in-context learning, prompt templates
Technologies / Libraries Mentioned: GPT-3, transformer architecture