{"id":484,"date":"2024-10-12T00:30:51","date_gmt":"2024-10-12T00:30:51","guid":{"rendered":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/?post_type=chapter&#038;p=484"},"modified":"2025-01-19T22:01:41","modified_gmt":"2025-01-19T22:01:41","slug":"14-1-generative-ai-genai","status":"publish","type":"chapter","link":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/chapter\/14-1-generative-ai-genai\/","title":{"rendered":"14.1 Generative AI (GenAI)"},"content":{"raw":"In November 2022, OpenAI made a groundbreaking announcement with the release of ChatGPT, a generative AI model that surprised the world with its remarkably human-like conversational abilities, showcasing an unprecedented level of coherence and versatility in generating text across various topics[footnote]Retrieved from <a href=\"https:\/\/openai.com\/index\/chatgpt\/\">https:\/\/openai.com\/index\/chatgpt\/<\/a>[\/footnote]. ChatGPT gained one million users within five days of launching, while it took 2.5 months for Instagram, 5 months for Spotify, 10 months for Facebook, and 3.5 years for Netflix to reach the same number of users[footnote]Retrieved from <a href=\"https:\/\/www.statista.com\/chart\/29174\/time-to-one-million-users\/\">https:\/\/www.statista.com\/chart\/29174\/time-to-one-million-users\/<\/a>[\/footnote]. 
ChatGPT reached 100 million monthly active users in January 2023 and more than 200 million weekly active users in August 2024[footnote]Retrieved from <a href=\"https:\/\/www.theverge.com\/2024\/8\/29\/24231685\/openai-chatgpt-200-million-weekly-users\">https:\/\/www.theverge.com\/2024\/8\/29\/24231685\/openai-chatgpt-200-million-weekly-users<\/a>[\/footnote].\r\n\r\nUnlike earlier AI tools, ChatGPT quickly captured public attention for its ability to generate coherent, human-like text responses across various topics. Built on the GPT-3.5 architecture (Generative Pre-trained Transformer[footnote]Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., &amp; Polosukhin, I. (2017). Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. https:\/\/arxiv.org\/abs\/1706.03762[\/footnote]), this tool demonstrated a significant leap in natural language processing, opening new possibilities in communication, content creation, and automated tasks. Its free web interface made it accessible to the general public, allowing millions of users to explore how AI could be integrated into daily life and professional workflows. This launch showcased the maturity of generative AI and sparked widespread conversations about its applications, ethical implications, and future developments.\r\n\r\nThe Future of Jobs Report 2023 by the World Economic Forum indicated that some 75% of companies are set to have adopted AI technologies by 2027, while 80% plan to accelerate automation during this period[footnote]Retrieved from <a href=\"https:\/\/www.weforum.org\/agenda\/2023\/08\/ai-artificial-intelligence-changing-the-future-of-work-jobs\/\">https:\/\/www.weforum.org\/agenda\/2023\/08\/ai-artificial-intelligence-changing-the-future-of-work-jobs\/<\/a>[\/footnote]. 
This rapid technological shift is expected to significantly transform job roles, with growing demand for skills in AI, data analysis, and digital tools, while also displacing more routine, manual tasks. As companies embrace these technologies, reskilling and upskilling initiatives will become critical to preparing the workforce for these changes.\r\n<h2>14.1.1 Generative AI (GenAI)<\/h2>\r\n<strong>GenAI<\/strong> refers to a subset of artificial intelligence (AI) technologies that generate new content\u2014text, images, music, code, and other forms of data\u2014by learning from existing patterns and examples. Unlike traditional AI models (e.g., classification, regression, and clustering models) that primarily classify, predict, or analyze data, GenAI models can create novel outputs that mimic the data they were trained on[footnote]Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... &amp; Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.[\/footnote].\r\n\r\nGenAI is powered by advanced machine learning techniques, particularly neural networks like <em>transformers<\/em>, which enable these systems to process large datasets and produce original content based on patterns learned during training[footnote]Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., &amp; Polosukhin, I. (2017). Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. https:\/\/arxiv.org\/abs\/1706.03762[\/footnote].\r\n\r\nThe major types of GenAI models are:\r\n<ol>\r\n \t<li>\r\n<h3><strong>Large Language Models (LLMs)<\/strong><\/h3>\r\n<ul>\r\n \t<li>LLMs are designed to understand and generate human-like text based on large datasets. They can perform a wide range of language tasks, such as text completion, summarization, translation, and answering questions. 
Most modern LLMs are based on transformer architectures but may also use autoregressive generation methods.<\/li>\r\n \t<li><strong>Examples<\/strong>: ChatGPT by OpenAI[footnote]<a href=\"https:\/\/chatgpt.com\/\">https:\/\/chatgpt.com\/<\/a>[\/footnote], Gemini by Google[footnote]<a href=\"https:\/\/gemini.google.com\/app\">https:\/\/gemini.google.com\/app<\/a>[\/footnote], Claude by Anthropic[footnote]<a href=\"https:\/\/claude.ai\/\">https:\/\/claude.ai\/<\/a>[\/footnote], LLaMA by Meta[footnote]<a href=\"https:\/\/www.llama.com\/\">https:\/\/www.llama.com\/<\/a>[\/footnote], and ERNIE by Baidu[footnote]<a href=\"https:\/\/yiyan.baidu.com\/\">https:\/\/yiyan.baidu.com\/<\/a>[\/footnote].<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li>\r\n<h3><strong>Transformer-based Models<\/strong><\/h3>\r\n<ul>\r\n \t<li>Transformer models are neural network architectures that excel at processing sequential data. While commonly used in LLMs, transformer models can also be adapted for other generative tasks, such as image generation, speech synthesis, and music composition.<\/li>\r\n \t<li><strong>Examples<\/strong>: The first version of DALL\u00b7E for image generation, ElevenLabs for speech generation[footnote]<a href=\"https:\/\/elevenlabs.io\/\">https:\/\/elevenlabs.io\/<\/a>[\/footnote], SOUNDRAW for music generation[footnote]<a href=\"https:\/\/soundraw.io\/\">https:\/\/soundraw.io\/<\/a>[\/footnote], and GitHub Copilot by Microsoft for code generation.<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li>\r\n<h3><strong>Autoregressive Models<\/strong><\/h3>\r\n<ul>\r\n \t<li>Autoregressive models generate outputs step by step, with each step conditioned on the preceding outputs. In text generation, for example, the model predicts the next word in a sequence based on the words that have already been generated. 
Many LLMs like GPT combine transformer architecture with an autoregressive generation process.<\/li>\r\n \t<li><strong>Examples<\/strong>: GPT for text generation and WaveNet for audio generation.<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li>\r\n<h3><strong>Diffusion Models<\/strong><\/h3>\r\n<ul>\r\n \t<li>Diffusion models are generative models that gradually denoise random noise to generate high-quality images. Due to their ability to create highly detailed and realistic images, diffusion models have become popular for image-generation tasks.<\/li>\r\n \t<li><strong>Examples<\/strong>: DALL\u00b7E 2 and 3 (incorporated into ChatGPT)[footnote]<a href=\"https:\/\/openai.com\/index\/dall-e-3\/\">https:\/\/openai.com\/index\/dall-e-3\/<\/a>[\/footnote] and Stable Diffusion[footnote]<a href=\"https:\/\/stability.ai\/\">https:\/\/stability.ai\/<\/a>[\/footnote].<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li>\r\n<h3><strong>Generative Adversarial Networks (GANs)<\/strong><\/h3>\r\n<ul>\r\n \t<li>GANs consist of two neural networks\u2014a generator and a discriminator\u2014that work together to generate realistic data such as images, videos, and audio. 
The generator creates data, while the discriminator evaluates its authenticity.<\/li>\r\n \t<li><strong>Examples<\/strong>: StyleGAN for realistic image generation and DeepFake models for video manipulation.<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li>\r\n<h3><strong>Variational Autoencoders (VAEs)<\/strong><\/h3>\r\n<ul>\r\n \t<li>VAEs generate new data similar to a training set by encoding input data into a compressed representation and then decoding new data from this compressed representation.<\/li>\r\n \t<li><strong>Examples<\/strong>: VAEs are applied to medical image generation and video game design.<\/li>\r\n<\/ul>\r\n<\/li>\r\n<\/ol>\r\n<h2>14.1.2 Relationship of GenAI with AI in General<\/h2>\r\nGenAI is a specialized branch of AI, existing as one of many applications within the broader field of artificial intelligence (Figure 14.1.1). Here\u2019s how it fits into the AI ecosystem:\r\n<ol>\r\n \t<li>\r\n<h3><strong>Broader AI Scope<\/strong>:<\/h3>\r\n<ul>\r\n \t<li>Narrow AI (also known as weak AI) encompasses a wide range of capabilities, including:\r\n<ul>\r\n \t<li><em>Reactive machines<\/em> that respond to specific stimuli (e.g., recommendation systems used in online platforms like Netflix and Amazon).<\/li>\r\n \t<li><em>Expert systems<\/em> that make decisions or predictions based on predefined rules (e.g., AI used in medical diagnostic tools).<\/li>\r\n \t<li><em>Learning systems<\/em> that improve performance based on experience, including methods like reinforcement learning, supervised learning, and unsupervised learning[footnote]Russell, S. J., &amp; Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). 
Pearson.[\/footnote].<\/li>\r\n<\/ul>\r\n<\/li>\r\n<\/ul>\r\n<\/li>\r\n<\/ol>\r\n<div class=\"textbox textbox--key-takeaways\"><header class=\"textbox__header\">\r\n<p class=\"textbox__title\"><strong>General AI vs. Narrow AI<\/strong><\/p>\r\n\r\n<\/header>\r\n<div class=\"textbox__content\">\r\n\r\nGeneral AI, or Artificial General Intelligence (AGI), is an advanced form of AI that can understand, learn, and perform any intellectual task a human can do. Unlike narrow AI, which is designed to excel at specific tasks like language translation or image recognition, general AI is adaptable and capable of transferring knowledge across different domains without requiring task-specific training. While narrow AI is widely used today, general AI remains a theoretical concept, aiming to replicate human-like reasoning, creativity, and problem-solving on a broad scale.\r\n\r\n<\/div>\r\n<\/div>\r\n<ol>\r\n \t<li style=\"list-style-type: none\">\r\n<ul>\r\n \t<li>GenAI represents a distinct and advanced application within narrow AI. While traditional narrow AI models focus on tasks like classifying or predicting data, GenAI creates new content\u2014text, images, videos, and music\u2014by identifying and mimicking patterns learned during training.<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li><strong>Generative AI vs. Traditional AI<\/strong>:\r\n<ul>\r\n \t<li>Traditional AI models are typically designed for classification (e.g., detecting spam), regression (e.g., predicting sales), and optimization (e.g., routing logistics). They often rely on identifying patterns within structured datasets to assist with decision-making or predictions[footnote]Russell, S. J., &amp; Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). 
Pearson.[\/footnote].<\/li>\r\n \t<li>GenAI, in contrast, focuses on generating new data that adheres to the learned patterns, which may include producing text, video, audio, code, or even entire 3D models[footnote]Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... &amp; Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.[\/footnote]. It is creative in nature, aiming to extend beyond analyzing existing data to creating entirely new, realistic outputs.<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li><strong>Core Techniques<\/strong>:\r\n<ul>\r\n \t<li><strong>Machine Learning<\/strong>: Both narrow AI and GenAI rely on machine learning, but generative models are often based on deep learning architectures like neural networks. In particular, <em>transformer models<\/em> have revolutionized GenAI by enabling a large-scale understanding of language and images[footnote]Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., &amp; Polosukhin, I. (2017). Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. https:\/\/arxiv.org\/abs\/1706.03762[\/footnote].<\/li>\r\n \t<li><strong>Training and Learning<\/strong>: GenAI models are trained on vast datasets, such as text corpora for language generation or image databases for image creation, and are designed to replicate the patterns and features observed in the training data[footnote]Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., ... Amodei, D. (2020). Language models are few-shot learners. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. https:\/\/arxiv.org\/abs\/2005.14165[\/footnote]. 
This allows them to generate novel content consistent with the training examples.<\/li>\r\n \t<li><strong>Self-Supervised Learning<\/strong>: GenAI models often utilize self-supervised learning, which allows models to learn useful representations of data by predicting parts of the data itself. This method is crucial for tasks like text generation, where models predict the next word in a sequence.<\/li>\r\n<\/ul>\r\n<\/li>\r\n<\/ol>\r\nThe relationships between different AI levels and types are demonstrated in Figure 14.1.1 below.\r\n<ul>\r\n \t<li><strong>Artificial Intelligence (AI)<\/strong> (outermost layer):\r\n<ul>\r\n \t<li>This represents the broadest concept. AI encompasses any machine or system capable of performing tasks that typically require human intelligence, such as problem-solving, reasoning, and learning. AI includes various approaches and methods, such as machine learning, rule-based systems, and expert systems.<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li><strong>Machine Learning (ML)<\/strong> (next layer):\r\n<ul>\r\n \t<li>A subset of AI, machine learning refers to systems that improve their performance on a task through experience, meaning they can learn from data. These systems use algorithms to identify patterns in data and make predictions or decisions without being explicitly programmed for specific tasks.<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li><strong>Neural Networks<\/strong> (next layer):\r\n<ul>\r\n \t<li>Within machine learning, neural networks are a class of models inspired by the structure and functioning of the human brain. 
They consist of interconnected nodes (neurons) organized in layers that process input data and learn to perform complex tasks, such as image recognition or language translation.<\/li>\r\n<\/ul>\r\n<\/li>\r\n \t<li><strong>Generative AI<\/strong> (innermost layer):\r\n<ul>\r\n \t<li>GenAI is a specific application of neural networks and machine learning.<\/li>\r\n<\/ul>\r\n<\/li>\r\n<\/ul>\r\n[caption id=\"attachment_517\" align=\"aligncenter\" width=\"596\"]<a href=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/183\/2024\/10\/AI-Venn-Diagram.png\"><img src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/183\/2024\/10\/AI-Venn-Diagram.png\" alt=\"A diagram showing the hierarchy of AI technologies, with concentric circles labeled from the outermost to innermost: Artificial Intelligence, Machine Learning, Neural Networks, and Generative AI.\" width=\"596\" height=\"557\" class=\" wp-image-517\" \/><\/a> Figure 14.1.1 Hierarchy of AI Technologies[\/caption]","rendered":"<div class=\"flex max-w-full flex-col flex-grow\">\n<div data-message-author-role=\"assistant\" data-message-id=\"56055f89-4dcf-471b-b53e-d96e6a72c700\" dir=\"auto\" class=\"min-h-8 text-message flex w-full flex-col items-end gap-2 whitespace-normal break-words [.text-message &amp;]:mt-5\" data-message-model-slug=\"gpt-4o\">\n<div class=\"flex w-full flex-col gap-1 empty:hidden first:pt-[3px]\">\n<div class=\"markdown prose w-full break-words dark:prose-invert light\">\n<p>In November 2022, OpenAI made a groundbreaking announcement with the release of ChatGPT, a generative AI model that surprised the world with its remarkably human-like conversational abilities, showcasing an unprecedented level of coherence and versatility in generating text across various topics.<a class=\"footnote\" title=\"Retrieved from https:\/\/openai.com\/index\/chatgpt\/\" id=\"return-footnote-484-1\" href=\"#footnote-484-1\" 
aria-label=\"Footnote 1\"><sup class=\"footnote\">[1]<\/sup><\/a>. ChatGPT gained one million users five days after launching, while it took 2.5 months for Instagram, 5 months for Spotify, 10 months for Facebook, and 3.5 years for Netflix to reach the same number of users<a class=\"footnote\" title=\"Retrieved from https:\/\/www.statista.com\/chart\/29174\/time-to-one-million-users\/\" id=\"return-footnote-484-2\" href=\"#footnote-484-2\" aria-label=\"Footnote 2\"><sup class=\"footnote\">[2]<\/sup><\/a>. ChatGPT hit 100 million monthly active users in January 2023 and more than 200 million people weekly in August 2024<a class=\"footnote\" title=\".Retrieved from https:\/\/www.theverge.com\/2024\/8\/29\/24231685\/openai-chatgpt-200-million-weekly-users\" id=\"return-footnote-484-3\" href=\"#footnote-484-3\" aria-label=\"Footnote 3\"><sup class=\"footnote\">[3]<\/sup><\/a>.<\/p>\n<p>Unlike earlier versions of artificial intelligence tools, ChatGPT quickly captured public attention for its ability to generate coherent, human-like text responses across various topics. Built on the foundation of the GPT-3.5 architecture (Generative Pre-trained Transformer<a class=\"footnote\" title=\"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., &amp; Polosukhin, I. (2017). Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. https:\/\/arxiv.org\/abs\/1706.03762\" id=\"return-footnote-484-4\" href=\"#footnote-484-4\" aria-label=\"Footnote 4\"><sup class=\"footnote\">[4]<\/sup><\/a>), this tool demonstrated a significant leap in natural language processing by offering new possibilities in communication, content creation, and automated tasks. Its free web interface offered accessibility to the general public. Therefore, it allowed millions of users to explore the potential of AI to be integrated into daily life and professional workflows. 
This launch showcased the maturity of generative AI and sparked widespread conversations about its applications, ethical implications, and future developments.<\/p>\n<p>The Future of Jobs Report 2023 by the World Economic Forum indicated that some 75% of companies are set to have adopted AI technologies by 2027, while 80% plan to accelerate automation during this period<a class=\"footnote\" title=\".Retrieved from https:\/\/www.weforum.org\/agenda\/2023\/08\/ai-artificial-intelligence-changing-the-future-of-work-jobs\/\" id=\"return-footnote-484-5\" href=\"#footnote-484-5\" aria-label=\"Footnote 5\"><sup class=\"footnote\">[5]<\/sup><\/a>. This rapid technological shift is expected to significantly transform job roles, with a growing demand for skills in AI, data analysis, and digital tools, while also displacing more routine, manual tasks. As companies embrace these technologies, reskilling and upskilling initiatives will become critical to preparing the workforce for these changes.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<h2>14.1.1 Generative AI (GenAI)<\/h2>\n<p><strong>GenAI<\/strong> refers to a subset of artificial intelligence (AI) technologies that generate new content\u2014text, images, music, code, and other forms of data\u2014by learning from existing patterns and examples. Unlike traditional AI models (i.e., classification, regression, and clustering) that primarily classify, predict, or analyze data, GenAI models can create novel outputs that mimic the data they were trained on<a class=\"footnote\" title=\"Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... &amp; Liang, P. (2021). On the opportunities and risks of foundation models. 
arXiv preprint arXiv:2108.07258.\" id=\"return-footnote-484-6\" href=\"#footnote-484-6\" aria-label=\"Footnote 6\"><sup class=\"footnote\">[6]<\/sup><\/a>.<\/p>\n<p>GenAI is powered by advanced machine learning techniques, particularly neural networks like <em>transformers<\/em>, which enable these systems to process large datasets and produce original content based on patterns learned during training<a class=\"footnote\" title=\"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., &amp; Polosukhin, I. (2017). Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. https:\/\/arxiv.org\/abs\/1706.03762\" id=\"return-footnote-484-7\" href=\"#footnote-484-7\" aria-label=\"Footnote 7\"><sup class=\"footnote\">[7]<\/sup><\/a>.<\/p>\n<p>Major types of GenAI models are:<\/p>\n<ol>\n<li>\n<h3><strong>Large Language Models (LLMs)<\/strong><\/h3>\n<ul>\n<li>LLMs are designed to understand and generate human-like text based on large datasets. They can perform a wide range of language tasks, such as text completion, summarization, translation, and answering questions. 
Most modern LLMs are based on transformer architectures but may also use autoregressive generation methods.<\/li>\n<li><strong>Examples<\/strong>: ChatGPT by OpenAI<a class=\"footnote\" title=\"https:\/\/chatgpt.com\/\" id=\"return-footnote-484-8\" href=\"#footnote-484-8\" aria-label=\"Footnote 8\"><sup class=\"footnote\">[8]<\/sup><\/a>, Gemini by Google<a class=\"footnote\" title=\"https:\/\/gemini.google.com\/app\" id=\"return-footnote-484-9\" href=\"#footnote-484-9\" aria-label=\"Footnote 9\"><sup class=\"footnote\">[9]<\/sup><\/a>, Claude by Anthropic<a class=\"footnote\" title=\"https:\/\/claude.ai\/\" id=\"return-footnote-484-10\" href=\"#footnote-484-10\" aria-label=\"Footnote 10\"><sup class=\"footnote\">[10]<\/sup><\/a>, LLaMA by Meta<a class=\"footnote\" title=\"https:\/\/www.llama.com\/\" id=\"return-footnote-484-11\" href=\"#footnote-484-11\" aria-label=\"Footnote 11\"><sup class=\"footnote\">[11]<\/sup><\/a>, and ERNIE by Baidu<a class=\"footnote\" title=\"https:\/\/yiyan.baidu.com\/\" id=\"return-footnote-484-12\" href=\"#footnote-484-12\" aria-label=\"Footnote 12\"><sup class=\"footnote\">[12]<\/sup><\/a>.<\/li>\n<\/ul>\n<\/li>\n<li><strong style=\"font-family: Roboto, Helvetica, Arial, sans-serif;font-size: 1em;font-style: italic\">Transformer-based Models<\/strong>\n<ul>\n<li>Transformer models are neural network architectures that excel at processing sequential data. 
While commonly used in LLMs, transformer models can also be adapted for other generative tasks, such as image generation, speech synthesis, and music composition.<\/li>\n<li><strong>Examples<\/strong>: The first version of DALL\u00b7E for image generation, ElevenLabs for speech generation<a class=\"footnote\" title=\"https:\/\/elevenlabs.io\/\" id=\"return-footnote-484-13\" href=\"#footnote-484-13\" aria-label=\"Footnote 13\"><sup class=\"footnote\">[13]<\/sup><\/a>, SOUNDRAW for music generation<a class=\"footnote\" title=\"https:\/\/soundraw.io\/\" id=\"return-footnote-484-14\" href=\"#footnote-484-14\" aria-label=\"Footnote 14\"><sup class=\"footnote\">[14]<\/sup><\/a>, and Microsoft GitHub Copilot for code generation.<\/li>\n<\/ul>\n<\/li>\n<li><strong style=\"font-family: Roboto, Helvetica, Arial, sans-serif;font-size: 1em;font-style: italic\">Autoregressive Models<\/strong>\n<ul>\n<li>Autoregressive models generate outputs step by step, with each step conditioned on the preceding outputs. In text generation, for example, the model predicts the next word in a sequence based on the words that have already been generated. Many LLMs like GPT combine transformer architecture with an autoregressive generation process.<\/li>\n<li><span style=\"margin: 0px;padding: 0px\"><strong>Examples<\/strong>: GPT for text generation and WaveNet for audio generation.<\/span><\/li>\n<\/ul>\n<\/li>\n<li>\n<h4><strong>Diffusion Models<\/strong><\/h4>\n<ul>\n<li>Diffusion models are generative models that gradually denoise random noise to generate high-quality images. 
Due to their ability to create highly detailed and realistic images, diffusion models have become popular for image-generation tasks.<\/li>\n<li><strong>Examples<\/strong>: DALL\u00b7E 2 and 3 (incorporated into ChatGPT)<a class=\"footnote\" title=\"https:\/\/openai.com\/index\/dall-e-3\/\" id=\"return-footnote-484-15\" href=\"#footnote-484-15\" aria-label=\"Footnote 15\"><sup class=\"footnote\">[15]<\/sup><\/a>, Stable Diffusion<a class=\"footnote\" title=\"https:\/\/stability.ai\/\" id=\"return-footnote-484-16\" href=\"#footnote-484-16\" aria-label=\"Footnote 16\"><sup class=\"footnote\">[16]<\/sup><\/a>.<\/li>\n<\/ul>\n<\/li>\n<li>\n<h3><strong style=\"font-family: Roboto, Helvetica, Arial, sans-serif;font-size: 1em;font-style: italic\">Generative Adversarial Networks (GANs)<\/strong><\/h3>\n<ul>\n<li>GANs consist of two neural networks\u2014 a generator and a discriminator\u2014 that work together to generate realistic data such as images, videos, and audio. The generator creates data, while the discriminator evaluates its authenticity.<\/li>\n<li><strong>Examples<\/strong>: StyleGAN for realistic image generation and DeepFake models for video manipulation.<\/li>\n<\/ul>\n<\/li>\n<li>\n<h3><strong style=\"font-family: Roboto, Helvetica, Arial, sans-serif;font-size: 1em;font-style: italic\">Variational Autoencoders (VAEs)<\/strong><\/h3>\n<\/li>\n<\/ol>\n<ul>\n<li style=\"list-style-type: none\">\n<ul>\n<li>VAEs generate new data similar to a training set by encoding input data into a compressed representation and then generating data from this compressed format.<\/li>\n<li><strong>Examples<\/strong>: VAEs are applied to medical image generation and video game design.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2>14.1.2 Relationship of GenAI with AI in General<\/h2>\n<p>GenAI is a specialized branch of AI, existing as one of many applications within the broader field of artificial intelligence (Figure 14.1.1). 
Here\u2019s how it fits into the AI ecosystem:<\/p>\n<ol>\n<li>\n<h3><strong>Broader AI Scope<\/strong>:<\/h3>\n<ul>\n<li>Narrow AI (also known as weak AI) encompasses a wide range of capabilities, including:\n<ul>\n<li><em>Reactive machines<\/em> that respond to specific stimuli (e.g., recommendation systems used in online platforms like Netflix and Amazon).<\/li>\n<li><em>Expert systems<\/em> that can make decisions or predictions based on predefined rules (e.g., AI used in medical diagnostic tools).<\/li>\n<li><em>Learning systems<\/em> that improve performance based on experience, including methods like reinforcement learning, supervised learning, and unsupervised learning<a class=\"footnote\" title=\"Russell, S. J., &amp; Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.\" id=\"return-footnote-484-17\" href=\"#footnote-484-17\" aria-label=\"Footnote 17\"><sup class=\"footnote\">[17]<\/sup><\/a>.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<div class=\"textbox textbox--key-takeaways\">\n<header class=\"textbox__header\">\n<p class=\"textbox__title\"><strong>General AI vs. Narrow AI<\/strong><\/p>\n<\/header>\n<div class=\"textbox__content\">\n<p>General AI, or Artificial General Intelligence (AGI), is an advanced form of AI that can understand, learn, and perform any intellectual task a human can do. Unlike narrow AI, designed to excel at specific tasks like language translation or image recognition, general AI is adaptable and capable of transferring knowledge across different domains without requiring task-specific training. While narrow AI is widely used today, general AI remains a theoretical concept, aiming to replicate human-like reasoning, creativity, and problem-solving on a broad scale.<\/p>\n<\/div>\n<\/div>\n<ol>\n<li style=\"list-style-type: none\">\n<ul>\n<li>GenAI\u00a0<span style=\"font-size: 14pt\">represents<\/span>\u00a0a distinct and advanced application within narrow AI. 
While traditional narrow AI models focus on tasks like classifying or predicting data, GenAI creates\u00a0<span style=\"margin: 0px;padding: 0px\">new content\u2014text, images, videos, and music\u2014by identifying and mimicking patterns learned during training<\/span>.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Generative AI vs. Traditional AI<\/strong>:\n<ul>\n<li>Traditional AI models are typically designed for classification (e.g., detecting spam), regression (e.g., predicting sales), and optimization (e.g., routing logistics). They often rely on identifying patterns within structured datasets to assist with decision-making or predictions<a class=\"footnote\" title=\"Russell, S. J., &amp; Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.\" id=\"return-footnote-484-18\" href=\"#footnote-484-18\" aria-label=\"Footnote 18\"><sup class=\"footnote\">[18]<\/sup><\/a>.<\/li>\n<li>GenAI, in contrast, focuses on generating new data that adheres to the learned patterns, which may include producing text, video, audio, code, or even entire 3D models<a class=\"footnote\" title=\"Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... &amp; Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.\" id=\"return-footnote-484-19\" href=\"#footnote-484-19\" aria-label=\"Footnote 19\"><sup class=\"footnote\">[19]<\/sup><\/a>. It is creative in nature, aiming to extend beyond analyzing existing data to creating entirely new, realistic outputs.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Core Techniques<\/strong>:\n<ul>\n<li><strong>Machine Learning<\/strong>: Both narrow AI and GenAI rely on machine learning, but generative models are often based on deep learning architectures like neural networks. 
In particular, <em>transformer models<\/em> have revolutionized GenAI by enabling a large-scale understanding of language and images<a class=\"footnote\" title=\"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., &amp; Polosukhin, I. (2017). Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. https:\/\/arxiv.org\/abs\/1706.03762\" id=\"return-footnote-484-20\" href=\"#footnote-484-20\" aria-label=\"Footnote 20\"><sup class=\"footnote\">[20]<\/sup><\/a>.<\/li>\n<li><strong>Training and Learning<\/strong>: GenAI models are trained on vast datasets, such as text corpora for language generation or image databases for image creation, and are designed to replicate the patterns and features observed in the training data<a class=\"footnote\" title=\"Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., ... Amodei, D. (2020). Language models are few-shot learners. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. https:\/\/arxiv.org\/abs\/2005.14165\" id=\"return-footnote-484-21\" href=\"#footnote-484-21\" aria-label=\"Footnote 21\"><sup class=\"footnote\">[21]<\/sup><\/a>. This allows them to generate novel content consistent with the training examples.<\/li>\n<li><strong>Self-Supervised Learning<\/strong>: GenAI models often utilize self-supervised learning, which allows models to learn useful representations of data by predicting parts of the data itself. 
This method is crucial for tasks like text generation, where models predict the next word in a sequence.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>The relationships between different AI levels and types are demonstrated in Figure 14.1.1 below.<\/p>\n<ul>\n<li><strong>Artificial Intelligence (AI)<\/strong> (outermost layer):\n<ul>\n<li>This represents the broadest concept. AI encompasses any machine or system capable of performing tasks that typically require human intelligence, such as problem-solving, reasoning, and learning. AI includes various approaches and methods, such as machine learning, rule-based systems, and expert systems.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Machine Learning (ML)<\/strong> (next layer):\n<ul>\n<li>A subset of AI, machine learning refers to systems that improve their performance on a task through experience, meaning they can learn from data. These systems use algorithms to identify patterns in data and make predictions or decisions without being explicitly programmed for specific tasks.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Neural Networks<\/strong> (next layer):\n<ul>\n<li>Within machine learning, neural networks are a class of models inspired by the structure and functioning of the human brain. 
They consist of interconnected nodes (neurons) organized in layers that process input data and learn to perform complex tasks, such as image recognition or language translation.<\/li>\n<\/ul>\n<\/li>\n<li><strong>Generative AI<\/strong> (innermost layer):\n<ul>\n<li>GenAI is a specific application of neural networks and machine learning: deep learning models trained to generate new content rather than only to classify or predict.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<figure id=\"attachment_517\" aria-describedby=\"caption-attachment-517\" style=\"width: 596px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/183\/2024\/10\/AI-Venn-Diagram.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/183\/2024\/10\/AI-Venn-Diagram.png\" alt=\"A diagram showing the hierarchy of AI technologies, with concentric circles labeled from the outermost to innermost: Artificial Intelligence, Machine Learning, Neural Networks, and Generative AI.\" width=\"596\" height=\"557\" class=\"wp-image-517\" srcset=\"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/183\/2024\/10\/AI-Venn-Diagram.png 1723w, https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/183\/2024\/10\/AI-Venn-Diagram-300x280.png 300w, https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/183\/2024\/10\/AI-Venn-Diagram-1024x957.png 1024w, https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/183\/2024\/10\/AI-Venn-Diagram-768x718.png 768w, https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/183\/2024\/10\/AI-Venn-Diagram-1536x1436.png 1536w, https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/183\/2024\/10\/AI-Venn-Diagram-65x61.png 65w, 
https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/183\/2024\/10\/AI-Venn-Diagram-225x210.png 225w, https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-content\/uploads\/sites\/183\/2024\/10\/AI-Venn-Diagram-350x327.png 350w\" sizes=\"auto, (max-width: 596px) 100vw, 596px\" \/><\/a><figcaption id=\"caption-attachment-517\" class=\"wp-caption-text\">Figure 14.1.1 Hierarchy of AI Technologies<\/figcaption><\/figure>\n<hr class=\"before-footnotes clear\" \/><div class=\"footnotes\"><ol><li id=\"footnote-484-1\">Retrieved from <a href=\"https:\/\/openai.com\/index\/chatgpt\/\">https:\/\/openai.com\/index\/chatgpt\/<\/a> <a href=\"#return-footnote-484-1\" class=\"return-footnote\" aria-label=\"Return to footnote 1\">&crarr;<\/a><\/li><li id=\"footnote-484-2\">Retrieved from <a href=\"https:\/\/www.statista.com\/chart\/29174\/time-to-one-million-users\/\">https:\/\/www.statista.com\/chart\/29174\/time-to-one-million-users\/<\/a> <a href=\"#return-footnote-484-2\" class=\"return-footnote\" aria-label=\"Return to footnote 2\">&crarr;<\/a><\/li><li id=\"footnote-484-3\">Retrieved from <a href=\"https:\/\/www.theverge.com\/2024\/8\/29\/24231685\/openai-chatgpt-200-million-weekly-users\">https:\/\/www.theverge.com\/2024\/8\/29\/24231685\/openai-chatgpt-200-million-weekly-users<\/a> <a href=\"#return-footnote-484-3\" class=\"return-footnote\" aria-label=\"Return to footnote 3\">&crarr;<\/a><\/li><li id=\"footnote-484-4\">Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., &amp; Polosukhin, I. (2017). Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 
https:\/\/arxiv.org\/abs\/1706.03762 <a href=\"#return-footnote-484-4\" class=\"return-footnote\" aria-label=\"Return to footnote 4\">&crarr;<\/a><\/li><li id=\"footnote-484-5\">Retrieved from <a href=\"https:\/\/www.weforum.org\/agenda\/2023\/08\/ai-artificial-intelligence-changing-the-future-of-work-jobs\/\">https:\/\/www.weforum.org\/agenda\/2023\/08\/ai-artificial-intelligence-changing-the-future-of-work-jobs\/<\/a> <a href=\"#return-footnote-484-5\" class=\"return-footnote\" aria-label=\"Return to footnote 5\">&crarr;<\/a><\/li><li id=\"footnote-484-6\">Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... &amp; Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258. <a href=\"#return-footnote-484-6\" class=\"return-footnote\" aria-label=\"Return to footnote 6\">&crarr;<\/a><\/li><li id=\"footnote-484-7\">Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., &amp; Polosukhin, I. (2017). Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 
https:\/\/arxiv.org\/abs\/1706.03762 <a href=\"#return-footnote-484-7\" class=\"return-footnote\" aria-label=\"Return to footnote 7\">&crarr;<\/a><\/li><li id=\"footnote-484-8\"><a href=\"https:\/\/chatgpt.com\/\">https:\/\/chatgpt.com\/<\/a> <a href=\"#return-footnote-484-8\" class=\"return-footnote\" aria-label=\"Return to footnote 8\">&crarr;<\/a><\/li><li id=\"footnote-484-9\"><a href=\"https:\/\/gemini.google.com\/app\">https:\/\/gemini.google.com\/app<\/a> <a href=\"#return-footnote-484-9\" class=\"return-footnote\" aria-label=\"Return to footnote 9\">&crarr;<\/a><\/li><li id=\"footnote-484-10\"><a href=\"https:\/\/claude.ai\/\">https:\/\/claude.ai\/<\/a> <a href=\"#return-footnote-484-10\" class=\"return-footnote\" aria-label=\"Return to footnote 10\">&crarr;<\/a><\/li><li id=\"footnote-484-11\"><a href=\"https:\/\/www.llama.com\/\">https:\/\/www.llama.com\/<\/a> <a href=\"#return-footnote-484-11\" class=\"return-footnote\" aria-label=\"Return to footnote 11\">&crarr;<\/a><\/li><li id=\"footnote-484-12\"><a href=\"https:\/\/yiyan.baidu.com\/\">https:\/\/yiyan.baidu.com\/<\/a> <a href=\"#return-footnote-484-12\" class=\"return-footnote\" aria-label=\"Return to footnote 12\">&crarr;<\/a><\/li><li id=\"footnote-484-13\"><a href=\"https:\/\/elevenlabs.io\/\">https:\/\/elevenlabs.io\/<\/a> <a href=\"#return-footnote-484-13\" class=\"return-footnote\" aria-label=\"Return to footnote 13\">&crarr;<\/a><\/li><li id=\"footnote-484-14\"><a href=\"https:\/\/soundraw.io\/\">https:\/\/soundraw.io\/<\/a> <a href=\"#return-footnote-484-14\" class=\"return-footnote\" aria-label=\"Return to footnote 14\">&crarr;<\/a><\/li><li id=\"footnote-484-15\"><a href=\"https:\/\/openai.com\/index\/dall-e-3\/\">https:\/\/openai.com\/index\/dall-e-3\/<\/a> <a href=\"#return-footnote-484-15\" class=\"return-footnote\" aria-label=\"Return to footnote 15\">&crarr;<\/a><\/li><li id=\"footnote-484-16\"><a href=\"https:\/\/stability.ai\/\">https:\/\/stability.ai\/<\/a> <a href=\"#return-footnote-484-16\" class=\"return-footnote\" 
aria-label=\"Return to footnote 16\">&crarr;<\/a><\/li><li id=\"footnote-484-17\">Russell, S. J., &amp; Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson. <a href=\"#return-footnote-484-17\" class=\"return-footnote\" aria-label=\"Return to footnote 17\">&crarr;<\/a><\/li><li id=\"footnote-484-18\">Russell, S. J., &amp; Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson. <a href=\"#return-footnote-484-18\" class=\"return-footnote\" aria-label=\"Return to footnote 18\">&crarr;<\/a><\/li><li id=\"footnote-484-19\">Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... &amp; Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258. <a href=\"#return-footnote-484-19\" class=\"return-footnote\" aria-label=\"Return to footnote 19\">&crarr;<\/a><\/li><li id=\"footnote-484-20\">Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., &amp; Polosukhin, I. (2017). Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. https:\/\/arxiv.org\/abs\/1706.03762 <a href=\"#return-footnote-484-20\" class=\"return-footnote\" aria-label=\"Return to footnote 20\">&crarr;<\/a><\/li><li id=\"footnote-484-21\">Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., ... Amodei, D. (2020). Language models are few-shot learners. In 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. 
https:\/\/arxiv.org\/abs\/2005.14165 <a href=\"#return-footnote-484-21\" class=\"return-footnote\" aria-label=\"Return to footnote 21\">&crarr;<\/a><\/li><\/ol><\/div>","protected":false},"author":256,"menu_order":2,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-484","chapter","type-chapter","status-publish","hentry"],"part":478,"_links":{"self":[{"href":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-json\/pressbooks\/v2\/chapters\/484","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-json\/wp\/v2\/users\/256"}],"version-history":[{"count":24,"href":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-json\/pressbooks\/v2\/chapters\/484\/revisions"}],"predecessor-version":[{"id":1166,"href":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-json\/pressbooks\/v2\/chapters\/484\/revisions\/1166"}],"part":[{"href":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-json\/pressbooks\/v2\/parts\/478"}],"metadata":[{"href":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-json\/pressbooks\/v2\/chapters\/484\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-json\/wp\/v2\/media?parent=484"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-json\/pressbooks\/v2\/chapter-type?post=484"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/pre
ssbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-json\/wp\/v2\/contributor?post=484"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/pressbooks.ulib.csuohio.edu\/projectmanagement2ndedition\/wp-json\/wp\/v2\/license?post=484"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}