Code Project: The Lounge

AI or LLM

ai-models

obeobe wrote:

Your sentiment touches on an interesting and nuanced debate within the field of artificial intelligence and technology. The terms "AI" (Artificial Intelligence) and "LLM" (Large Language Models) refer to different concepts, though they are related.

**Artificial Intelligence (AI)** is a broad term that encompasses the development of computer systems able to perform tasks that normally require human intelligence. These tasks include understanding natural language, recognizing patterns, solving complex problems, and more. AI aims to mimic or replicate human cognitive functions, and it can be applied in various ways, from simple algorithms in a calculator to complex systems driving autonomous vehicles or providing personalized recommendations on streaming platforms.

**Large Language Models (LLM)**, on the other hand, are a specific type of AI focused on understanding, generating, and interacting with human language. LLMs like GPT (Generative Pre-trained Transformer) are trained on vast amounts of text data and can generate coherent, contextually relevant text based on the input they receive. They are a subset of AI technologies, showcasing the advancements in natural language processing and understanding.

The distinction you're hinting at might stem from the perception that the term "AI" is sometimes used too broadly or ambitiously, suggesting a level of intelligence or autonomy that current technology does not yet possess. In reality, most of what is popularly referred to as AI involves machine learning algorithms, including LLMs, that are highly specialized and operate within defined parameters set by their human creators.

The debate around terminology also reflects concerns about AI's societal impact, ethical considerations, and the future of human-machine interaction. By preferring not to use "AI" to describe current technologies, you might be emphasizing the gap between the capabilities of present-day systems and the concept of true artificial general intelligence (AGI): machines that can understand, learn, and apply knowledge across a wide range of tasks, similar to a human being.

In essence, your preference for terminology might be advocating for a more precise language that accurately reflects the current state of technology and its limitations, fostering a clearer understanding among the general public and within the tech community.

Sean Cundiff
#6

    All good comments.

    obeobe wrote:

    The distinction you're hinting at might stem from the perception that the term "AI" is sometimes used too broadly or ambitiously, ...

This is one of the things I'm getting at. It's more of a marketing term now, and what is called 'AI' today is really a small subset of the overall field.

    obeobe wrote:

    The debate around terminology also reflects concerns about AI's societal impact, ethical considerations, and the future of human-machine interaction.

I saw news this morning that the US government is now requiring every agency to have a 'Chief AI Officer' to handle some of these issues.

    -Sean ---- Fire Nuts

Steve Naidamast wrote:

So do I considering that what we are seeing is not true AI but simply algorithms that go through massive amounts of data and in some cases, as a result, make up things.

An interesting technical article appeared the other day detailing how the reliance on these Large Language Models (LLMs) could actually increase surveillance on developers, as the use of such tools requires interaction with a backend that can interface with these LLMs.

Personally, I have yet to try one in my own development and have no plans to. I know what code I want to write and don't need the assistance of my machine to help me...

      Steve Naidamast Sr. Software Engineer Black Falcon Software, Inc. blackfalconsoftware@outlook.com
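
For anyone curious what "interaction with a backend" means concretely, here is a minimal sketch of the round trip a hosted code assistant has to make: the editor context is bundled up and posted to a remote completion endpoint. The endpoint URL, payload shape, and API key below are hypothetical placeholders rather than any particular vendor's API; the point is simply that the code you are editing leaves your machine.

```python
# Minimal sketch of a hosted code-assistant round trip. The endpoint,
# payload fields, and key are hypothetical placeholders, not a real API.
import json
import urllib.request

def request_completion(code_context: str, api_key: str) -> str:
    payload = json.dumps({"prompt": code_context, "max_tokens": 64}).encode("utf-8")
    req = urllib.request.Request(
        "https://llm-backend.example.com/v1/complete",  # hypothetical endpoint
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
    )
    # This is the step the surveillance concern is about: the snippet you
    # are editing is transmitted to (and can be logged by) someone else's server.
    with urllib.request.urlopen(req) as response:
        return json.load(response)["completion"]
```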

Sean Cundiff
#7

      Steve Naidamast wrote:

So do I considering that what we are seeing is not true AI but simply algorithms that go through massive amounts of data and in some cases, as a result, make up things.

      Agreed.

      -Sean ---- Fire Nuts

obeobe wrote:

Your sentiment touches on an interesting and nuanced debate within the field of artificial intelligence and technology. ...

FreedMalloc
#8

When I was first introduced to the term AI many years ago it was (at least in my mind) couched in terms of a self-aware thinking machine, much like the character Lt. Commander Data of Star Trek, though not necessarily a bi-pedal android. To me it seems the term AI has been hijacked by those hyping their systems to make them seem more than they are, with LLMs leading the charge in a "fake it till you make it" kind of way: "The prompt closely matches this pattern in my training data, which indicates this type of response. Respond with factual data relevant to the prompt if available, else make something up." Vastly oversimplified, I know.

Don't get me wrong. I find their capabilities amazing: generating pictures and video from prompts, composing prose, writing/explaining code, and what all else. I find watching Penn and Teller make things disappear and reappear to be amazing feats as well. I don't know how it's done, but I know it's not real, it's not magic. And they are the first to admit that it's all a trick.

As with many things, I suppose the clever marketeers have won the terminology day, as it seems the term AI has indeed evolved into a broader meaning, as you state. But I still find myself aligning more with the OP in a curmudgeonly, get-off-my-lawn kind of way. When I see the term AI I personally tend to think of it not so much as "Artificial" Intelligence but as "Anthropomorphic" Intelligence.

BTW, none of this will cause me to avoid using AI (whatever the definition) when I think it will benefit me.
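
FreedMalloc's "vastly oversimplified" description can be put into code. The toy script below, a sketch invented for illustration, just looks up which token tends to follow the last one in some canned "training data" statistics and, when no pattern matches, makes something up; a real LLM uses learned weights over the whole context rather than a lookup table, but the pattern-match-then-continue flavor is the same.

```python
# Toy illustration of "match the prompt against patterns in my training
# data, else make something up". The vocabulary and probabilities are
# invented for the example; real models use learned weights, not a table.
import random

# which token tends to follow which, per our pretend training data
next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def continue_prompt(prompt_tokens, max_new_tokens=3):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        candidates = next_token_probs.get(tokens[-1])
        if candidates:
            words, weights = zip(*candidates.items())
            tokens.append(random.choices(words, weights=weights)[0])
        else:
            tokens.append("<made-up>")  # no matching pattern: make something up
    return " ".join(tokens)

print(continue_prompt(["the", "cat"]))  # e.g. "the cat sat down <made-up>"
```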

Sean Cundiff wrote:

          Personally, I have a pet peeve about calling it AI right now. #getoffmygrass

          -Sean ---- Fire Nuts

User 13750731
#9

Certainly! Let’s delve into the distinctions between Artificial Intelligence (AI) and Large Language Models (LLMs):

Artificial Intelligence (AI):
• Definition: AI encompasses the development of computer systems capable of performing tasks that typically require human intelligence. These tasks include speech recognition, natural language processing (NLP), text generation, decision-making, planning, and more.
• Functionality: AI systems mimic human behavior and decision-making processes. They analyze visual and textual data, adapt to their environment, and make informed decisions.
• Applications: AI finds applications in various fields, including marketing, medicine, finance, science, education, and industry. For instance, it generates marketing materials, aids in disease diagnosis, and analyzes financial markets for investment decisions.
• Types: AI can be classified into weak AI (limited to specific areas of cognition) and a hypothetical form called strong AI (endowed with human consciousness and capable of performing diverse tasks).

Large Language Models (LLMs):
• Definition: LLMs are a specialized class of AI models specifically designed for handling language-related tasks. They use natural language processing (NLP) to understand and generate human-like text-based content.
• Functionality: LLMs excel at understanding and producing text. They serve as foundation models for a wide range of NLP tasks.
• Applications: LLMs are used in chatbots, language translation, content generation, and more. For example, tools like ChatGPT are based on LLMs.
• Relationship to generative AI: LLMs are a subset of generative AI. While generative AI emphasizes creativity and diversity, LLMs focus on consistency and proficiency in generating contextually relevant text.

Sean Cundiff wrote:

            Steve Naidamast wrote:

So do I considering that what we are seeing is not true AI but simply algorithms that go through massive amounts of data and in some cases, as a result, make up things.

            Agreed.

            -Sean ---- Fire Nuts

jochance
#10

            "make up things" Is kinda like a false positive on some kinds of tests in particular. Those could be interesting. Kinda like Freudian slips. Haha, this AI hallucinated a package. But why?

obeobe wrote:

Your sentiment touches on an interesting and nuanced debate within the field of artificial intelligence and technology. ...

jschell
#11

              Or perhaps it is just a marketing term. Which has little or nothing to do with anything in computer science.

FreedMalloc wrote:

When I was first introduced to the term AI many years ago it was (at least in my mind) couched in terms of a self-aware thinking machine, much like the character Lt. Commander Data of Star Trek, though not necessarily a bi-pedal android. ...

obeobe
#12

My previous reply was just ChatGPT's response to the OP (I copy-pasted their post as the prompt). I thought it would be obvious, but I guess I was wrong; it seems that no one picked up on that. Anyway, I guess it depends on how you define "intelligence". A quick Google definition check yields:

                Quote:

                Intelligence: the ability to acquire and apply knowledge and skills.

                I would say that LLMs easily fit inside this definition... As for real vs. magic - our brains are also real and not magic. Doesn't mean we don't have intelligence, right?

Steve Naidamast wrote:

So do I considering that what we are seeing is not true AI but simply algorithms that go through massive amounts of data and in some cases, as a result, make up things. ...

obeobe
#13

                  Our brains are also algorithms that go through massive amounts of data and in some cases, as a result, make things up :)

                  Quote:

                  I know what code I want to write and don't need the assistance of my machine to help me

                  You are living in the past, my friend... I've been programming for 30 years. I am proficient in a variety of languages and platforms. I use AI daily for my work. You don't know what you're missing out on.


Sean Cundiff
#14

                    obeobe wrote:

                    Our brains are also algorithms that go through massive amounts of data and in some cases, as a result, make things up

This would imply our brains are deterministic, which simply isn't so. If you want to classify our brains as a data structure/algorithm, I'd go with an NFA-lambda. :laugh: Even that is woefully inadequate in my opinion.
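
For anyone who hasn't run into the term, an NFA-lambda is a nondeterministic finite automaton that also allows lambda (empty-string) moves, so it can occupy several states at once and can change state without consuming input. A minimal sketch, using an arbitrary example language (strings of a's followed by a single b):

```python
# Minimal sketch of an NFA with lambda (epsilon) moves. The transition
# table and example language (a's followed by a single b) are arbitrary;
# "" marks a lambda move.
transitions = {
    "s":  {"": {"q0"}},                  # lambda move out of the start state
    "q0": {"a": {"q0"}, "b": {"q1"}},
    "q1": {},
}
start, accepting = "s", {"q1"}

def lambda_closure(states):
    """Every state reachable using lambda moves alone."""
    stack, closure = list(states), set(states)
    while stack:
        state = stack.pop()
        for nxt in transitions.get(state, {}).get("", set()):
            if nxt not in closure:
                closure.add(nxt)
                stack.append(nxt)
    return closure

def accepts(word):
    current = lambda_closure({start})
    for symbol in word:
        moved = set()
        for state in current:
            moved |= transitions.get(state, {}).get(symbol, set())
        current = lambda_closure(moved)  # nondeterminism: follow every branch
    return bool(current & accepting)

print(accepts("aaab"))  # True
print(accepts("ba"))    # False
```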

                    -Sean ---- Fire Nuts

Sean Cundiff wrote:

                      Personally, I have a pet peeve about calling it AI right now. #getoffmygrass

                      -Sean ---- Fire Nuts

Chris Maunder
#15

These days an if/then statement seems to qualify as AI. There are some seriously deep, deep changes happening in the world right now. A lot of it seems to be covered in a blanket of "me toos" and re-wrapping of others' work. My bet is that AI will simply come to mean LLMs until someone (please, someone!) comes up with a better, sexier name than LLM.
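
In the spirit of Chris's point, here is roughly the kind of if/then "AI" being described; the thermostat scenario and threshold are invented for the joke.

```python
# Tongue-in-cheek: an if/then statement marketed as AI. The thermostat
# scenario and the 19 degree threshold are made up for the example.
def ai_powered_thermostat(temperature_c: float) -> str:
    if temperature_c < 19.0:
        return "heat on"   # "intelligent" decision number one
    return "heat off"      # "intelligent" decision number two

print(ai_powered_thermostat(17.5))  # heat on
```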

                      cheers Chris Maunder
