Hallucinations
GenAI hallucinations are plausible but false outputs from Large Language Models (LLMs). They occur because LLMs predict likely word sequences based on learned patterns rather than genuine understanding, so they can produce convincing but incorrect information, especially when generating citations. To mitigate this, some AI tools are grounded in external sources, allowing their outputs to be fact-checked; examples include Microsoft Copilot, Perplexity, and ChatGPT Plus.
Misinformation
Because AI models are probabilistic, their outputs can be unpredictable and unreliable, posing a significant risk of spreading misinformation. LLMs can hallucinate facts, statistics, or quotes, potentially confirming biases or promoting conspiracy theories. They can create convincing but fictitious content in many formats, from news articles to academic papers, and can be used to run targeted misinformation campaigns. As this technology evolves, careful oversight and validation processes are essential to mitigate these risks.
Bias
The datasets used to train GenAI models contain "the good, bad, and ugly of human thought" (Bowen & Watson, 2024), and the resulting models often exhibit bias due to imbalanced representation in their source materials. While developers implement some safeguards, these are not comprehensive. Users can help by critically examining AI outputs, modifying prompts to counter stereotypes, and maintaining a human-in-the-loop approach. This active human oversight supports accuracy, reliability, and fairness in AI-generated content, allowing for the adjustments needed to promote truthful and ethical outputs.
Plagiarism
GenAI models generate content by connecting tokens based on learned patterns, without a clear map of their training data sources. This lack of transparency poses challenges for academia, where building on and citing existing knowledge is crucial. Using GenAI may inadvertently lead to plagiarism or copyright infringement, as the origin of generated content remains unclear. This conflicts with academic practices of verifiable references and proper attribution.
Privacy
GenAI models pose significant privacy risks. These systems may store and learn from user inputs, potentially compromising personal, financial, and confidential information. The concerns extend beyond personal data to confidential information belonging to employers, academic institutions, and other third parties. As these AI models continue to evolve, users must exercise caution by avoiding the input of sensitive data and staying informed about privacy policies. The challenge is balancing the benefits of these tools with the need to protect sensitive information across platforms.
Citing AI-generated content is crucial in academic and professional contexts to maintain intellectual honesty, promote transparency, and uphold academic integrity when utilizing AI tools. Proper attribution distinguishes human-authored from AI-generated content, enabling readers to evaluate the material accurately. As AI tools become more prevalent, citing AI sources contributes to ongoing discussions about their role and helps track the evolution of AI capabilities over time. For guidance on citing content generated by AI, refer to the CMU Libraries guide.
Ethan Mollick, an early adopter of AI models and the author of the book Co-Intelligence: Living and Working with AI, shares four ground rules for working with AI:
Always invite AI to the table.
Be the human in the loop.
Treat AI like a person (but tell it what kind of person it is).
Assume this is the worst AI you will ever use.
Learn more about the four rules according to Ethan Mollick here.
The CIDI framework, conceived by Gianluca Mauro, helps structure prompts and produce relevant outputs. CIDI stands for:
C - Context: provides the necessary background information about the task, the target audience, the desired tone, or any other relevant contextual factors.
I - Instruction: the core of your prompt; the specific question, instruction, or request that you want the AI to address. It should be clear, concise, and focused, giving the AI a well-defined objective.
D - Details: specifies the type of response you expect from the AI, such as a paragraph of text, a list of recommendations, or a visual representation. This helps the AI understand the expected format and scope of the response.
I - Input: provides any additional data, documents, or other information about the intended use of the AI's response, helping the AI tailor its answer to your needs.
Here's an example of how the CIDI framework can be used in a prompt:
Context: You are a market analyst specializing in the Middle East region. The goal is to provide a comprehensive analysis of the latest trends and future directions in the travel industry, with a focus on the Gulf Cooperation Council (GCC) countries.
Instruction: Analyze the latest market trends from the past year and predict future directions for the travel industry in the GCC countries. Include data on consumer behavior and competitor performance.
Details: The summary will be used as a reference for businesses operating in the travel and hospitality sectors in the GCC region. It should be written in a clear, concise, and actionable style to help stakeholders make informed decisions. The analysis should be approximately 1000 words long and include the following elements:
Overview of the current state of the travel industry in the GCC region
Analysis of consumer behavior, including preferences, concerns, and spending patterns
Evaluation of competitor performance, highlighting key players and their strategies
Identification of emerging trends and potential growth areas
Recommendations for businesses looking to capitalize on the changing market landscape
Input: Use the following market reports and data sources:
GCC Travel and Tourism Competitiveness Report 2021
Middle East Hotel Market Review Q4 2021
GCC Outbound Travel Market Report 2022
UNWTO Tourism Highlights 2021 Edition (Middle East section)
By using the CIDI framework, you can create a well-structured and comprehensive prompt that guides the AI to generate a response tailored to the specific needs and context of the Middle East travel industry.
Copy and paste this prompt into your favorite GenAI model and explore the output you receive. Here's a link to the output generated by Claude 3.5 Sonnet.
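If you prefer to experiment programmatically, here is a minimal sketch of how the four CIDI sections might be assembled and sent to Claude with the Anthropic Python SDK. The cidi_prompt helper and the abridged field values are illustrative only; CIDI itself prescribes no particular code, and the sketch assumes the anthropic package is installed and an ANTHROPIC_API_KEY environment variable is set.

    # Illustrative sketch only: CIDI prescribes no code; the helper below is ours.
    import anthropic

    def cidi_prompt(context: str, instruction: str, details: str, input_: str) -> str:
        """Combine the four CIDI sections into a single prompt string."""
        return (
            f"Context: {context}\n\n"
            f"Instruction: {instruction}\n\n"
            f"Details: {details}\n\n"
            f"Input: {input_}"
        )

    # Field values abridged from the worked example above.
    prompt = cidi_prompt(
        context="You are a market analyst specializing in the Middle East region...",
        instruction="Analyze the latest market trends from the past year and predict future directions...",
        details="The summary will be used as a reference for businesses in the GCC travel and hospitality sectors...",
        input_="Use the following market reports and data sources: ...",
    )

    client = anthropic.Anthropic()  # reads the ANTHROPIC_API_KEY environment variable
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.content[0].text)

Keeping the four sections as separate arguments makes it easy to revise one section, such as the Details, without touching the rest of the prompt.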
Google DeepMind suggests using the OPRO framework, which stands for Optimization by PROmpting.
OPRO takes a conversational approach to refining output: you start with a general prompt and refine it as you go, getting as close as possible to the result you have in mind.
Here's an example of how an OPRO framework can be used:
Initial Prompt: Let's work together on creating a comprehensive analysis of the travel industry in the GCC countries. We'll focus on recent trends and future directions. Can you start by giving me an overview of the current state of the travel industry in the GCC region?
Refined Prompt: That's a good start. Can you elaborate on the investments in tourism infrastructure? Also, let's move on to consumer behavior. What are the key trends you're seeing in traveler preferences and concerns?
Refined Prompt: This is great information. Let's focus on competitor performance now. Can you identify some key players in the GCC travel industry and highlight their strategies?
Keep refining your prompt until you get the output you are looking for.
Responses are generated by Claude 3.5 Sonnet.
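Because this style of refinement is a multi-turn conversation, each follow-up prompt needs the earlier exchanges as context. Here is a minimal sketch of that loop under the same assumptions as the CIDI example above (the anthropic package installed and ANTHROPIC_API_KEY set); the prompt texts are abridged from the example, and in practice you would write each refinement after reading the previous answer rather than fixing them in advance.

    # Illustrative sketch of OPRO-style refinement as a multi-turn chat.
    # The full history is resent each turn so refinements build on earlier answers.
    import anthropic

    client = anthropic.Anthropic()  # reads the ANTHROPIC_API_KEY environment variable
    history = []  # accumulates alternating user/assistant turns

    prompts = [
        "Let's work together on a comprehensive analysis of the travel industry "
        "in the GCC countries. Can you start with an overview of its current state?",
        "That's a good start. Can you elaborate on the investments in tourism "
        "infrastructure, and the key trends in traveler preferences and concerns?",
        "This is great information. Can you identify some key players in the GCC "
        "travel industry and highlight their strategies?",
    ]

    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        response = client.messages.create(
            model="claude-3-5-sonnet-20240620",
            max_tokens=2048,
            messages=history,  # the full conversation so far
        )
        answer = response.content[0].text
        history.append({"role": "assistant", "content": answer})
        print(answer)
        print("-" * 60)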