Digital-First has become a trend among organizations worldwide and in Latin America: a digital strategy is chosen first for product and service delivery, especially when an organization seeks a more immediate brand impact on a given market segment, wider and more customized dissemination of its offer, and, above all, greater closeness to the end customer. According to Marketing4Commerce's Digital Report, the number of internet users in the world reached 5.16 billion (64.4% of the world's population as of 2023), with average daily browsing time above 6 hours, while mobile device users reached 5.44 billion (68% of the world's population as of 2023).

We also see this reflected in an Adobe report (Digital Trends 2023), in which more than 70% of organizations, leaders and followers alike, believe their customers' expectations are constantly recalibrated against improved omnichannel experiences: end customers continually measure each experience against their last best one. Certainly, the most memorable experiences will be created by organizations that know how to leverage data and combine it with human insight to anticipate customer needs, with greater empathy and in a more individualized way.

In this scenario, Artificial Intelligence (AI) becomes an ally for implementing customer experience strategies in a customized and innovative way, taking advantage of voice recognition tools, natural language understanding, and data on customer behavior patterns and preferences. In recent years, interactions with virtual assistants have become commonplace, prompting the tailoring of language models to certain tasks or expected outcomes. This is known as Prompt Engineering: the process of building prompts, or inputs, that guide an AI system's behavior and elicit accurate, desired answers from AI models. AI thus assumes the role of a digital collaborator that not only works as a point of contact with customers, but also boosts knowledge and productivity for the organization's own staff.

What is Prompt Engineering?

According to Techopedia, Prompt Engineering refers to a technique used in artificial intelligence (AI) to optimize and adjust language models for particular tasks and desired outcomes. Also known as Prompt Design, it consists of carefully building prompts, or inputs, for AI models in order to improve their performance on specific tasks. Properly designed prompts guide and shape the behavior of the AI system and obtain accurate, desired responses from AI models.

Prompt Engineering harnesses the capabilities of language models and optimizes their results through well-designed prompts. This means relying not only on pre-training or fine-tuning, but also helping users steer models toward specific goals by encouraging accurate responses and providing direct instructions, exceptions, or examples in the prompts.

According to a survey conducted by COPC Inc. during 2022, "Improving Customer Experience" was the most mentioned goal for implementing AI-based solutions, cited by 87% of respondents. Likewise, 83% of respondents stated that they use AI-based solutions mainly for customer contact applications, and AI has countless uses that directly impact Customer Experience. According to the CX Optimization 2023 study, the most implemented uses are content creation, customer profiling, and reduction of internal calls.

This brings us to Large Language Models (LLMs): advanced linguistic models based on Deep Learning algorithms that process and analyze large amounts of text data. LLMs are built on artificial neural networks (systems inspired by the workings of the human brain), which are trained to learn patterns and relationships in text and to generate answers to users' questions. This enables LLMs to analyze large volumes of text data and use that information to understand words, phrases, and sentences in context, enabling human-AI communication.

Prompt Engineering Technical Side

As mentioned earlier, LLMs are trained on large amounts of text data so that the model learns to recognize relationships and patterns in text. All this data is processed to adjust the model's parameters and thereby improve its accuracy and language understanding. Text preprocessing techniques (removing irrelevant words, normalizing words to handle textual variability), hyperparameter tuning, and optimization are also used to maximize model accuracy. To implement Prompt Engineering, there are several techniques, such as:

  1. Zero-Shot Prompting: It generates a response without giving the language model any examples or previous context. It is used for quick answers to general questions or topics.
    • Example: “What is a prompt?”
    • Answer: “A Prompt is…”
  2. One-Shot Prompting: A response is generated from a single example or piece of context provided by the user.
    • Example: “If budget in Portuguese is orçamento, how can I translate Supplier?”
    • Answer: “Fornecedor”.
  3. Information Retrieval: Generative AI is asked a specific question to obtain detailed answers. The key here is the data source the LLM is fed from. For example, ChatGPT's training data only extends up to September 2021.
    • Example: “What are the advantages of LLM?”
    • Answer: “1. Content diversification. 2. Customization…”
  4. Creative Writing: Through this technique it is possible to develop texts with imagination, stories and textual expressions that adapt to the preferences of the audience.
    • Example: “Write a poem to the moon”
    • Answer: “The moon can be taken in tablespoons…or as a capsule every two hours. It is good as a hypnotic and sedative and also relieves. “
  5. Context Enrichment: Enriching the information given to the AI to improve its understanding, using the 5W and 1H method (the five W questions: Who, What, Where, When, Why; and one H: the How of a topic).
    • Example: “Is it good to eat Beets?”
    • Answer: “Who: Most people can eat Beet; What: “Beet or beetroot refers to an edible deep red tuber…”
  6. Content Summary with a Specific Focus: It consists of directing the attention of AI to specific aspects of instruction, with a particular emphasis. It can be highlighted which elements should be prioritized in the model, so that the summary reflects the essence of the approach.
    • Example: “Full guide on website optimization techniques, but I only want the mobile optimization strategies.”
    • Answer: “The key aspects are: Mobile Cache – enabling this option allows…; List of Mobile User Agents – if the Mobile Cache feature is enabled…”
  7. Fill Templates: Used to create versatile and structured content. A template with placeholders customizes prompts across different instructions while maintaining consistency. Content managers and web developers use this strategy to create custom AI-generated content snippets for their websites. One example is a standard quoting template in which the AI fills in customer data, products, pricing, etc. Another is automating custom emails from a template with a general structure, from the greeting to the main text and sign-off: "Hello {Name}, thank you for requesting our {Service}… {Close}."
  8. Prompt Customization or Prompt Reframing: It allows you to change the wording of a question while maintaining the original intent of the query. The language model can be prompted to give multiple answers that respond to the original query in different ways, using synonyms or rephrasing.
    • Example: “Original prompt: What are the ways to reduce network latency? Reworded Prompt: Can you list techniques to optimize network speed?”
  9. Prompt Combination: It consists of merging different prompts or questions in the same instruction to obtain a complete answer.
    • Example: “Can you explain the differences between shared hosting and VPS hosting and recommend which one is better for a small e-commerce website?”
    • Answer: “Shared hosting and VPS hosting are two types of hosting services… Shared Hosting: …”
  10. Chain-of-Thought Prompting: It uses real-time AI interactions to guide the model toward more accurate and complete responses. It is not based on a single question, but on a sequence of questions or examples that elaborate on the original query. To do this, divide a complex query or topic into smaller sections; these parts are then presented as a sequence of queries, each building on the last, to drive the AI toward the desired answer.
    • Example: “What is the Main Theme?… For what purpose?… Who is the audience?…”
  11. Iterative Prompting: It consists of making follow-up queries based on previous responses to dive into a certain topic, obtain additional information, or clarify any ambiguities about the initial result. This technique requires experts in Natural Language Processing (NLP) to design iterative prompts and elaborate responses similar to those a human being would produce.
    • Example: “What are the best movies of 2022?”
    • Answer: “‘Drive My Car’ by Ryûsuke Hamaguchi; ‘The Alley of Lost Souls’ by Guillermo del Toro; Martin McDonagh’s ‘The Banshees of Inisherin’; ‘Holy Spider’ by Ali Abbasi…”
  12. Interactive Storytelling & Role-Playing: It leverages the AI's ability to tailor responses based on previous prompts and interactions, developing a fluid narrative.
    • Example: “Prompt: I want to start a collaborative storytelling exercise with you. We will write a fantasy story about a land where magic exists,… The character will be….”
    • Answer: “In the shadows of a forest there was a…”
  13. Implicit Information Injection: The particularity of this technique is that context is given subtly, so that the AI understands the need without it being expressed explicitly.
    • Example: “Can you mention the best practices of Modernizing a Datacenter?”
    • Answer: “1- Raise the operating temperature of your data center; 2- Upgrade servers and systems for better consolidation and efficiency.”
  14. Translation of Languages with Contextual Nuances: Generation of multilingual content that goes beyond translating words from one language to another, considering the cultural context or situation for a more accurate and natural translation.
    • Example: “Translate the sentence “She took the ball and ran with it” from English to French, bearing in mind that it is a business metaphor to refer to taking the reins of a project.”
    • Answer: “Elle a pris le ballon et a foncé avec”, conveying the idea of taking the initiative on a project.
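At bottom, most of the techniques above come down to different ways of assembling a prompt string before it is sent to a model. A minimal sketch in Python; the helper names are illustrative, not a real API:

```python
# Hypothetical helpers showing how three of the techniques above
# translate into prompt strings for a language model.

def zero_shot(question: str) -> str:
    # Zero-Shot Prompting: ask directly, with no examples or prior context.
    return question

def one_shot(example_in: str, example_out: str, question: str) -> str:
    # One-Shot Prompting: a single worked example precedes the query.
    return f"Example: {example_in} -> {example_out}\nNow: {question}"

def fill_template(template: str, **fields) -> str:
    # Fill Templates: placeholders keep structure consistent across prompts.
    return template.format(**fields)

prompt = fill_template(
    "Hello {name}, thank you for requesting our {service}. {close}",
    name="Ana", service="hosting plan", close="Best regards",
)
print(prompt)  # Hello Ana, thank you for requesting our hosting plan. Best regards
```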

In addition to these, we can mention Automatic Prompt Engineering (APE), an advance in Artificial Intelligence that leverages LLMs so that the AI automatically generates and selects instructions on its own. The main steps are:

  1. Assign the chatbot a specific task and show some examples.
  2. The chatbot comes up with different ways to do the job, either by direct reasoning or by drawing on similar tasks it knows.
  3. These different methods are then tested in practice.
  4. The chatbot assesses the effectiveness of each method.
  5. The AI then chooses the best-performing method and applies it.
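The five steps above can be sketched as a generate-score-select loop. In the sketch below, the candidate generator and the scoring function are simple stand-ins for what, in real APE, would be an LLM proposing instructions and an evaluation on held-out examples:

```python
def ape_select(task, generate, score, n_candidates=3):
    # Steps 1-2: generate several candidate instructions for the task.
    candidates = [generate(task, i) for i in range(n_candidates)]
    # Steps 3-4: test each candidate and assess its effectiveness.
    scored = [(score(c), c) for c in candidates]
    # Step 5: keep the best-performing instruction.
    return max(scored)[1]

# Stand-ins for the LLM generator and the evaluation step:
templates = ["Summarize: {t}", "Explain {t} briefly", "List key points of {t}"]
gen = lambda task, i: templates[i % len(templates)].format(t=task)
score = lambda prompt: -len(prompt)  # toy metric: prefer shorter prompts

best = ape_select("prompt engineering", gen, score)
print(best)  # Summarize: prompt engineering
```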

By means of Machine Learning, Generative AI tools can streamline tasks, from in-context data analysis to automated customer service, without the need for constant human-generated prompts.

It is worth mentioning that Prompt Engineering should take into account basic technical settings such as Temperature and Top-K Sampling, which improve the quality and diversity of AI-generated content by influencing the model's token (word or subword) selection process:

  • Temperature: A higher temperature value (e.g., 1.0 or above) results in more diverse and creative text, while a lower value (e.g., 0.5 or below) produces more focused and deterministic results. Higher temperature values are recommended to encourage creativity in creative writing, brainstorming sessions, or the exploration of innovative ideas; lower values are recommended to improve coherence in well-structured, focused content such as technical documentation or formal articles.
  • Top-K Sampling: Another recommended technique in AI text generation, it controls the model's token selection by restricting it to the k most likely tokens. A smaller k value (e.g., 20 or 40) results in more focused and deterministic text, while a larger k value (e.g., 100 or 200) produces more diverse and creative results. Larger k values suit content that requires a wide range of ideas, perspectives, or vocabulary; smaller k values ensure focused results for content that requires a high degree of concentration, accuracy, or consistency.
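Both settings act on the model's output distribution just before a token is drawn. A self-contained sketch over a toy token-to-logit map (the vocabulary here is illustrative):

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=None, rng=random):
    """Draw one token after temperature scaling and optional top-k filtering."""
    # Temperature: divide logits before softmax; <1 sharpens, >1 flattens.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    # Top-k: keep only the k most likely tokens.
    if top_k is not None:
        kept = sorted(scaled, key=scaled.get, reverse=True)[:top_k]
        scaled = {tok: scaled[tok] for tok in kept}
    # Softmax over the remaining tokens, then sample from the distribution.
    z = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - z) for tok, v in scaled.items()}
    total = sum(exps.values())
    r, acc = rng.random() * total, 0.0
    for tok, e in exps.items():
        acc += e
        if r <= acc:
            return tok
    return tok  # fallback for floating-point edge cases

logits = {"the": 4.0, "a": 2.5, "moon": 1.0, "cat": 0.2}
# Low temperature and small k make the top token almost certain:
print(sample_token(logits, temperature=0.3, top_k=2))
```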

To apply Temperature and Top-K Sampling, two practices are recommended: Experimentation (testing multiple combinations of temperature and top-k values to identify the optimal configuration for a given task or content type) and Sequential Adjustment (varying the values during text generation to control the model's behavior at different stages). For example, start with a high temperature and a large k value to generate creative ideas, then switch to lower values for refinement and focus.

Finally, it is worth mentioning gradient descent, an optimization algorithm that minimizes an objective function by computing its rate of change, or gradient. In Machine Learning this objective function is usually the loss function that evaluates the model's performance. Parameters are updated iteratively in the direction opposite the gradient until a local minimum is reached.
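A minimal, self-contained example of this optimization loop, minimizing a toy quadratic loss L(x) = (x - 3)^2 whose gradient is 2(x - 3):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly step against the gradient until (near) a minimum.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Toy loss L(x) = (x - 3)^2, minimized at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0
```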

Why Prompt Engineering Matters

The pace at which OpenAI's ChatGPT has spread since 2022 is overwhelming: today it is used by millions of people as a form of conversational artificial intelligence, based on advanced deep learning algorithms that understand human language.

Currently, organizations use multiple AI techniques such as Natural Language Processing (NLP), Prompt Engineering, Artificial Neural Networks (ANN), Machine Learning, and Markov Decision Processes (MDP) to automate different tasks.

The importance of Prompt Engineering lies in the way it improves the customer experience and the interactions between people and AI, and contributes to building better conversational AI systems. These systems dominate, and will continue to dominate, the market in the coming years by using LLMs in a consistent, relevant, and accurate way. Just for reference, ChatGPT reached 100 million active users within weeks of its launch.

For developers, Prompt Engineering helps in understanding how AI-based models arrive at their answers and in obtaining accurate information about how AI models work on the back end. Naturally, this requires developing prompts that cover multiple topics and scenarios. Other benefits are worth mentioning: in text-to-image synthesis, Prompt Engineering and context make it possible to customize the features of the image (style, perspective, aspect ratio, point of view, and resolution). It also plays an important role in identifying and mitigating prompt injection attacks, thus protecting AI models from malicious activity.

The Evolution of Prompt Engineering

Natural Language Processing (NLP) is the branch of AI that helps machines perceive, as its name says, the "natural language" used by humans, enabling interaction between people and computers thanks to its ability to understand words, phrases, and sentences. It includes syntactic processing (the structure of words and sentences) and semantic processing (their meaning, within a sentence or a combination of sentences). The first glimpses of NLP appeared in the 1950s, when rule-based methods, mostly for machine translation, began to be adopted; its applications were word and sentence analysis, question answering, and machine translation. By the 1980s, computational grammar had become an active field of research, and the greater availability of grammar tools and resources boosted demand for them. Toward the 1990s, the web generated a large volume of knowledge, which drove statistical learning methods that required NLP. In 2012, Deep Learning emerged as a successor to statistical learning, producing improvements in NLP systems by digging into raw data and learning its features.

By 2019, the Generative Pre-trained Transformer (GPT) emerged as a remarkable advance in natural language processing, making it possible to pre-train large-scale language models that teach AI systems how to represent words and sentences in context. This enabled the development of machines that can understand and communicate using language in a manner very similar to that of humans. Its most popular application is ChatGPT, which draws on texts published on the Internet up to 2021, including news, encyclopedias, books, and websites, but lacks the ability to discriminate which information is true and which is not. Precisely for this reason, Prompt Engineering emerges as a method to optimize natural language processing in AI and improve the accuracy and quality of its answers.

The Art and Science of Crafting Prompts

A prompt is itself text fed into the Language Model (LM), and Prompt Engineering is the art of designing that text to obtain the desired result with quality and accuracy. This involves tailoring the input so that AI-driven tools can understand user intent and return clear, concise answers. The process must be effective to ensure that AI-driven tools do not generate inappropriate or meaningless responses, especially since GPT-style solutions are based largely on the frequency and association of words and may yield incomplete or erroneous results.

To craft prompts for Generative AI tools, it is recommended to follow this essential guide:

  • Understand the desired outcome

    Successful Prompt Engineering starts with knowing what to ask and how to ask it effectively. The user must first be clear about what they want: the objectives of the interaction and a clear outline of the expected results (what to obtain, for what audience, and any associated actions the system must perform).

  • Choose words carefully

    Like any computer system, AI tools can be exacting in their use of commands and language, and may not know how to respond to unrecognized commands or phrasing. Avoid ambiguity, metaphors, idioms, and niche jargon so as not to produce unexpected and undesirable results.

  • Remember that form matters

    AI systems handle simple, straightforward requests expressed in informal sentences and plain language. Complex requests, however, benefit from detailed, well-structured queries that adhere to a form or format consistent with the system's internal design. This is essential in Prompt Engineering, as the shape and format may differ for each model, and some tools have a preferred structure involving keywords in predictable locations.

  • Make clear and specific requests

    Consider that the system can only act on what it can interpret from a given message. Make requests that are clear, explicit, and actionable, keeping the desired outcome in mind. From there, describe the task to be performed or articulate the question to be answered.

  • Pay attention to length

    Prompts may be subject to minimum and maximum character counts. Even though some AI interfaces do not impose a strict limit, extremely long prompts can be difficult for AI systems to handle.

  • Raise open-ended questions or requests

    The purpose of Generative AI is to create. Simple yes-or-no questions are limiting and tend to yield short, uninteresting results. Open-ended questions allow for much more flexibility.

  • Include context

    A generative AI tool can meet a wide range of objectives and expectations, from brief and general summaries to detailed explorations. To take advantage of this versatility, well-designed prompts include context that helps the AI system tailor its output to the intended audience.

  • Set goals or output length limits

    Although generative AI is meant to be creative, it is often advisable to set guardrails on factors such as output length. Context elements in prompts may include, for example, requesting a simplified, concise response versus a long, detailed one. Also bear in mind that natural language processing models such as GPT-3 are trained to predict words based on language patterns, not to count them.

  • Avoid contradictory terms

    Long prompts can also contain ambiguous or contradictory terms. Prompt engineers should review their prompts and ensure that all terms are consistent. Another recommendation is to use positive rather than negative language: AI models are trained to perform tasks, and respond better to being told what to do than what to avoid.

  • Use punctuation to clarify complex prompts

    Just like humans, AI systems rely on punctuation to help them parse text. AI prompts can likewise make use of commas, quotation marks, and line breaks to help the system analyze and operate on a complex query.

Regarding images, it is recommended to describe the image itself, the environment and mood of its context, the colors, the lighting, and the degree of realism.
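Several of the guidelines above (clear requests, context, audience, length limits, punctuation) can be combined mechanically. A small, illustrative prompt builder; the field names are assumptions, not a standard:

```python
def build_prompt(context, task, audience, max_words):
    # Labels and line breaks act as the punctuation that separates the parts.
    return (
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Audience: {audience}\n"
        f"Limit the answer to roughly {max_words} words."
    )

print(build_prompt(
    context="An e-commerce site migrating to a new hosting plan",
    task="Explain the difference between shared hosting and VPS hosting",
    audience="non-technical store owners",
    max_words=120,
))
```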

How Prompt Engineering Works

Prompt Engineering is a discipline for promoting and optimizing the use of language models in AI through the creation and testing of inputs: different phrasings are tried and the responses evaluated, proceeding by trial and error until the AI-based system behaves as intended, following these fundamental tasks:

  1. Specify the task: Define an objective for the language model, which may involve NLP tasks such as completion, translation, or text summarization.
  2. Identify inputs and outputs: Define the inputs the language model requires and the desired outputs or results.
  3. Create informative prompts: Write prompts that clearly communicate the expected behavior of the model; they must be clear, brief, and in line with the purpose for which they were created.
  4. Interact and evaluate: Test the prompts against the language model and evaluate the results, looking for flaws and identifying biases in order to make adjustments that improve performance.
  5. Calibrate and refine: Take the findings into account and make adjustments until the model behaves as required, aligned with the requirements and intentions for which the prompt was created.
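Steps 4 and 5 form a loop that can be sketched as follows; `run_model` and `evaluate` are stand-ins for a real LLM call and a human or automated review:

```python
def refine_prompt(prompt, run_model, evaluate, max_rounds=3):
    # Interact and evaluate, then calibrate and refine until acceptable.
    for round_no in range(1, max_rounds + 1):
        response = run_model(prompt)
        ok, feedback = evaluate(response)
        if ok:
            break
        prompt += f"\n(Revision {round_no}: {feedback})"
    return prompt, response

# Stand-ins: this toy "model" answers in detail only when asked for detail.
run = lambda p: "a detailed answer" if "detail" in p else "too short"
check = lambda r: (len(r) > 10, "please add more detail")

final_prompt, final_response = refine_prompt("Summarize LLMs", run, check)
print(final_response)  # a detailed answer
```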

Throughout this process, the Prompt Engineer should keep in mind that clarity and accuracy are critical when designing prompts. If the message is ambiguous, the model will have difficulty responding with quality. When designing prompts, attention should also be paid to the sources used during pre-training, considering audiences without gender or cultural bias, to promote respect and inclusion. The recommendation is to focus on responses aimed at helping, learning, and providing neutral, fact-based answers.

The Role Play technique is also recommended, in which a scenario is created where the model assumes a role and interacts with another entity. For example, to create a product review, the model may take on the role of a customer who tried the product and writes about their satisfactory experience.

The Role of a Prompt Engineer

A Prompt Engineer is responsible for designing, developing, testing, debugging, maintaining, and updating AI applications, in close collaboration with other software developers to ensure that the software responds and performs efficiently. The role requires creativity and attention to detail in order to choose the right words, phrases, symbols, and formats that guide the AI model toward generating relevant, high-quality text. This emerging role has gained relevance as AI is increasingly expected to improve and streamline both customer-facing and internal services. As for who can become a Prompt Engineer, whether to streamline their own tasks or to grow professionally, candidates include AI researchers and engineers, data scientists and analysts, content creators, customer service executives, teachers, business professionals, and researchers. Demand for Prompt Engineers is expected to grow as organizations require people who know how to handle AI-driven tools.

The Future of Prompt Engineering

It is anticipated that the future of Prompt Engineering will be linked to integration with augmented reality (AR) and virtual reality (VR): properly applied prompts can enhance immersive AR/VR experiences by optimizing AI interactions in 3D environments. Advances in Prompt Engineering allow users to converse with AI characters, request information, and issue natural language commands in simulated, real-time environments. With Prompt Engineering, the AI can be given a context or situation, sustaining the conversation and exchange between humans and AR/VR applications, whether for spatial, educational, research, or exploration purposes.

Another forecast for Prompt Engineering is the possibility of simultaneous translation of spoken and written languages, leveraging context across several languages so that the AI translates bidirectionally, in real time and as reliably as possible. The impact would be felt in business, multicultural, diplomatic, and personal communication, taking into account regional dialects, cultural nuances, and speech patterns.

Regarding interdisciplinary creativity, Prompt Engineering can drive AI to generate art, stories, plays, and music in combination with human creativity. This may have ethical implications, although it also democratizes access to AI for artistic purposes.

Of course, as Prompt Engineering matures, questions arise about fairness, respect, and alignment with moral values, from the formulation of the query itself to the kinds of answers that can be derived from it. Keep in mind that in the future of AI and Prompt Engineering, the technology will always be a reflection of the people behind it.

Challenges and Opportunities

As we have seen, Prompt Engineering represents an opportunity to develop well-designed prompts that improve the capabilities of AI more efficiently and effectively. The advantage is that everyday tasks can be streamlined, knowledge of different topics expanded, and creativity boosted. Properly implemented, it also encourages inclusion, with a positive impact on experiences across genders.

On the other hand, poorly designed prompts can result in AI responses containing bias, prejudice, or erroneous data. Ethical considerations in Prompt Engineering can mitigate these risks without compromising fairness, respect, and inclusion. In addition, without best practices, even professionals in the field may not achieve the desired result on the first attempt and may find it difficult to identify a suitable starting point for the process.

It can also be difficult to control the level of creativity and uniqueness of the result. Practitioners often provide additional information in the prompt that may confuse the AI model and affect the accuracy of the answer.


In the digital economy, the most memorable experiences will be those in which data is leveraged and combined with human knowledge to anticipate customer needs with empathy and customization. In this environment, AI becomes a digital partner, not only as a point of contact with the customer but also as a driver of productivity within the organization. It is true that GPT has gained traction in the search for closer proximity to the customer; however, it is based on word frequency and association and lacks the ability to differentiate correct from incorrect information. It is precisely this need to improve the quality of answers that makes Prompt Engineering relevant: developing and optimizing AI natural language models to obtain quality and accuracy in their answers, based on a greater understanding of user intent. Without a doubt, demand for Prompt Engineers will grow, confirming that organizations require professionals who understand the nature of AI-based tools.

It is clear that, as the adoption of Prompt Engineering matures, it will continue to raise issues of fairness, respect, and alignment with moral values in the formulation of prompts and results, so appropriate techniques are required to implement it without bias or prejudice. To embark on this journey into Prompt Engineering, it is recommended to be accompanied by a technology partner who can pass on to your team the best techniques and practices for its implementation.