Abstract
The emergence of Large Language Models (LLMs) is expected to significantly impact the job market, accelerating automation trends and putting traditionally creative roles at risk. LLMs can automate tasks across fields such as design, journalism, and creative writing. Companies and public institutions can leverage generative models to enhance productivity and reduce workforce requirements through machine-assisted workflows and natural-language interfaces. While technical skills such as programming may become less important in certain roles, generative models are unlikely to fully replace programmers, given the continued need for expertise in code validation and niche development. The enterprise landscape of LLMs comprises providers (organizations that train proprietary models), integrators (technology companies that fine-tune LLMs for specific applications), and users (companies and individuals adopting LLM-powered solutions). Applications include conversational search, customer-service chatbots, content creation, personalized marketing, data analysis, and basic workflow automation. The regulatory landscape is evolving rapidly, with key considerations including copyright, data security, and liability. Government involvement and informed expertise are recommended to guide governance and decision-making in this domain.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
Cite this chapter
Schillaci, Z. (2024). LLM Adoption Trends and Associated Risks. In: Kucharavy, A., Plancherel, O., Mulder, V., Mermoud, A., Lenders, V. (eds) Large Language Models in Cybersecurity. Springer, Cham. https://doi.org/10.1007/978-3-031-54827-7_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-54826-0
Online ISBN: 978-3-031-54827-7
eBook Packages: Computer Science; Computer Science (R0)