As we have all become familiar with Generative AI systems like ChatGPT, one aspect that is getting more attention is the prompts written to generate the outputs we seek. Generally speaking, good prompts get good results. To optimize those results, a new field of work, prompt engineering, has emerged.
According to IBM, “Generative AI relies on the iterative refinement of different prompt engineering techniques to effectively learn from diverse input data and adapt to minimize biases, confusion and produce more accurate responses.”
Prompt engineers can help generative AI models grasp a query’s intent and even nuance. Basic queries are less likely to get the results companies are seeking. Prompt engineers can iterate on the query and dial in the specifics to obtain a better output, be it code, emails, chatbots, music, digital art, or any other application.
To fully understand the role of prompt engineering, let’s review how Generative AI systems work. These models rely on deep learning foundation models built on artificial neural networks. Deep learning has been around for some time, powering applications like Siri and Alexa. Foundation models can also analyze unstructured data. Why is this important? In part because an estimated 80% of enterprise data is unstructured. Unstructured data is qualitative, versus the quantitative nature of structured data.
A recent piece from McKinsey looks at how organizations deploy Generative AI models and hire prompt engineers. At this point, developing a Generative AI model from scratch doesn’t make financial sense for most companies. Instead, they customize existing models by training them with their own data.
According to McKinsey’s research, Generative AI is “poised to boost performance across sales and marketing, customer operations, software development, and more. In the process, gen AI could add up to $4.4 trillion annually to the global economy, across sectors from banking to life sciences.”
According to a recent McKinsey survey, companies are beginning to define their needs in the AI space and looking to fill prompt engineering roles. As a new field, hiring will continue to grow, as will reskilling efforts at many organizations.
If you’re interested in becoming a prompt engineer, you’ll need to understand large language models, know how to program, and be able to simplify complex technical concepts. Experience with algorithms and data structures would also help.
The IBM article also shares some advanced techniques for prompt engineers:
- Zero-shot prompting provides the machine learning model with a task it hasn’t explicitly been trained on. Zero-shot prompting tests the model’s ability to produce relevant outputs without relying on prior examples.
- Few-shot prompting or in-context learning gives the model a few sample outputs (shots) to help it learn what the requestor wants it to do. If the learning model has context to draw on, it can better understand the desired output.
- Chain-of-thought prompting (CoT) is an advanced technique that provides step-by-step reasoning for the model to follow. Breaking down a complex task into intermediate steps, or “chains of reasoning,” helps the model achieve better language understanding and create more accurate outputs.
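The three techniques above can be sketched as plain prompt templates. This is a minimal illustration only: the sentiment-labeling task, the function names, and the exact wording are assumptions for the example, not taken from the IBM article.

```python
# Illustrative prompt templates for zero-shot, few-shot, and
# chain-of-thought prompting. The task and wording are examples.

def zero_shot_prompt(text: str) -> str:
    """Zero-shot: ask for the task directly, with no prior examples."""
    return (
        "Classify the sentiment of this review as positive or negative.\n"
        f"Review: {text}\nSentiment:"
    )

def few_shot_prompt(examples: list[tuple[str, str]], text: str) -> str:
    """Few-shot: prepend labeled examples (the 'shots') before the query."""
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return (
        "Classify the sentiment of each review as positive or negative.\n"
        f"{shots}\nReview: {text}\nSentiment:"
    )

def chain_of_thought_prompt(question: str) -> str:
    """Chain-of-thought: instruct the model to reason through
    intermediate steps before giving a final answer."""
    return (
        f"{question}\n"
        "Let's think step by step, writing out each intermediate "
        "step before stating the final answer."
    )
```

In practice, whichever template is chosen becomes the string sent to the model; the prompt engineer’s job is iterating on that wording, the number of shots, and the reasoning instructions until the outputs are reliable.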
Visit our blog for more articles on technology, leadership, and related HR topics.