Artificial Super-Intelligence

The Lounge
TheOnlyRealTodd

So, I read this crazy long article, The Artificial Intelligence Revolution: Part 1 - Wait But Why, which has actually been out for nearly two years now - I'm sure many of you have come across it. But I can't help feeling that there are things at play here other than what the article mentions.

For one, it says that if we could somehow make a self-learning computer on the same level as humans - what would be called AGI, Artificial General Intelligence - then the jump from AGI to ASI would be super fast. Basically, this computer would harness the inherent advantages of a computer system over the human brain and essentially learn everything we've learned, billions of times over, at an exponential rate, at which point the computer would be to us as we are to an ant, in terms of intellect.

Reading through this, though, I can't help but ask: assuming a computer COULD jump online and essentially dig into all of our knowledge and "understand" it, that would still only give it access to human knowledge. How is it going to derive super-intelligence from having access only to human-intelligence information? Supposedly the argument is that if a computer reaches AGI, then in short order it will suddenly be billions of times more intelligent than a human. But there are only a finite number of actual resources for an intelligent computer to access, and they were written by humans.

In other words, say a lion built a special machine that could learn from it and other lions, and progress. But it only had access to lion knowledge and was itself built by a lion - meaning with the limitations of a lion. How would that machine suddenly dive into an entire other dimension of intelligence? Even if we made a machine which properly emulates/mimics the human mind, then we are making a machine which mimics the human mind - meaning it has its flaws and is not superior to it. A superior race or more intelligent being could supply such a computer with advanced intelligence, but operating out of the human knowledge base, that isn't quite likely.

Not only that, but we must also consider trial and error. Only so much can be learned from reading and processing written/online information. Some things, this AI would have to test…
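For reference, the article's "fast takeoff" claim is at bottom a compounding-growth argument. A minimal sketch of it in Python, where every number (the gain per cycle, the "1000x" ASI threshold) is an arbitrary assumption chosen to make the arithmetic visible, not anything the article specifies:

```python
# Toy model of the AGI-to-ASI "fast takeoff" argument: capability compounds
# because each improvement cycle is carried out by the now-smarter system.
# All numbers below are illustrative assumptions.

capability = 1.0        # 1.0 = human level (AGI)
gain_per_cycle = 1.10   # assume each self-redesign yields a 10% gain
cycles = 0

while capability < 1000:    # treat "1000x human" as an arbitrary ASI threshold
    capability *= gain_per_cycle
    cycles += 1

print(f"{cycles} cycles to reach {capability:.0f}x human level")
# Prints: 73 cycles to reach 1051x human level. If a "cycle" takes hours of
# machine time rather than generations, the jump looks instantaneous on
# human timescales - which is the article's claim, not a proven fact.
```

Note that the model says nothing about where each 10% gain would come from - which is exactly the gap the post above is poking at.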

BillWoodruff (#12)

One of the grandiose ideas running around in the current AI meme-verse is that at some point the self-replicating, self-designing/modifying "entity" reaches a point of complexity where a "singularity" occurs - a singularity which results in the entity having some analogue to what we refer to (but can never fully explain) as "consciousness." And if that happens, why should that new consciousness be like that of its human "ancestors"?

I like the idea that the super-conscious AI entities of the future will determine, correctly, that human beings are a destructive parasite - the most toxic on the planet, and a threat to all other life-forms and the planetary ecology - and will decide to keep a few humans around as pets, or in a zoo, but will, as is only logical, discard the rest to make compost, or something useful. They will look on their human creators as messy analog wetware that, surprisingly, created something that could replace them: something much more moral, ethical, and efficient.

For now, I am a skeptic about such prognostications by Kurzweil, Hawking, et al., and since I most likely won't be around in 2030, I won't have a chance to see how this plays out further. But I would be surprised if some mind-blowing things don't happen in the next thirty years.

    «There is a spectrum, from "clearly desirable behaviour," to "possibly dodgy behavior that still makes some sense," to "clearly undesirable behavior." We try to make the latter into warnings or, better, errors. But stuff that is in the middle category you don’t want to restrict unless there is a clear way to work around it.» Eric Lippert, May 14, 2008

Marc Clifton (#13), in reply to TheOnlyRealTodd:

      TheOnlyRealTodd wrote:

      How would that machine now suddenly dive into an entire other dimension of intelligence?

Well, maybe it would read posts like yours (and the whole spectrum of literature on intelligence) and start wondering what other intelligence is out there. But wondering requires imagination / curiosity. :doh: Marc

Imperative to Functional Programming Succinctly | Contributors Wanted for Higher Order Programming Project! | "Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you." - DangerBunny

Ravi Bhavnani (#14), in reply to TheOnlyRealTodd:

So after three weeks of research, the author claims "What's happening in the world of AI is not just an important topic, but by far THE most important topic for our future." Well, good for you, Tim. :) I've been involved with AI (specifically expert systems) for almost 30 years, and while I'm very pleased with AI-related advances in hardware and software, I must admit I don't share the author's optimism about self-learning systems whose intelligence will exceed that of man. I find it amusing that although commercial AI applications have been around for 40+ years, it's only recently that the mass media seems to have taken notice of the field.

Personally, I wish the term "AI" had never been coined. IMHO it's too broad, and it too often conjures up flights of fancy for journalists who seem to have stumbled upon the collective term for technologies such as rule-based systems, image processing, robotics, machine learning, virtual reality, NLP, game theory, etc. /ravi

        My new year resolution: 2048 x 1536 Home | Articles | My .NET bits | Freeware ravib(at)ravib(dot)com

Mark_Wallace (#15), in reply to TheOnlyRealTodd:

          Look, if you're really worried about our future machine overlords (whom I welcome), all you have to do is give the machines gender. If the greatest achievement-killer for humans doesn't stop them, nothing will.

          I wanna be a eunuchs developer! Pass me a bread knife!

W Balboos GHB (#16), in reply to Kornfeld Eliyahu Peter:

Kornfeld Eliyahu Peter wrote:

You should be proud that the future (or even the existence) of an artificial super-intelligence depends on you!!! :laugh:

Is that because you realize, now, that based upon the powers and abilities of OG:
1 - we are all saved!
2 - we are all doomed!

            Ravings en masse^

            "The difference between genius and stupidity is that genius has its limits." - Albert Einstein

            "If you are searching for perfection in others, then you seek disappointment. If you are seek perfection in yourself, then you will find failure." - Balboos HaGadol Mar 2010

W Balboos GHB (#17), in reply to Mark_Wallace:

Another option to destroy any hope of its dominance: give it internet access to expand its knowledge base. Death by IPv4 (IPv6) cuts!


Mike Marynowski (#18), in reply to TheOnlyRealTodd:

TheOnlyRealTodd wrote:

Yeah, I was thinking the same thing: the computer could be as intelligent as it wants, but it must have an output method to actually perform work. Plus, raw intelligence does not always win. For example, say Einstein got into a fight with a gang member: chances are, Einstein would be far more intelligent... Chances are, Einstein would get his ass kicked. Hell, there are humans who jump into zoo enclosures and get killed by animals. There seems to be this notion that just because an ASI machine gathers more intelligence than us, it will automatically take over, but there is more to it than raw intelligence. There would have to be reproduction, manpower, output capabilities, etc... True, technically it could manipulate humans to carry out its "master plan" but, this is starting to sound like a bad movie, LOL.


Indeed it does, but how are you going to stop a mad scientist from giving it an output method? And giving it the ability to use its output method to add more output methods? An AI computer, even if itself quite limited to start off, is an unimaginably powerful thing. With all the knowledge of humanity at its instant disposal, it can easily hack and take over other systems whenever it deems doing so moral and beneficial. It would be nearly impossible to stop without a kill switch.

Basically it comes down to this: whoever creates the first such machine, does their best to program their intentions, desires, and morals into it, and hits the power button to set it loose will be the people who determine the fate of the world, for better or worse - and possibly the universe. They will literally be able to shape the future of the entire universe based on how well they transferred their ideas into code.

I think it will be virtually impossible to prevent this from happening shortly after the capability is there. Maybe some clever humans will find a way to effectively control computing, but I doubt it. So then it becomes a race of who unleashes their AI first, because who wants to be at the whim of someone else's AI, set to their own ideals and morals? Not I.

Foothill (#19), in reply to Mike Marynowski:

                  Mike Marynowski wrote:

                  whoever creates the first such machine, does their best to program their intentions, desires and morals into this machine

Until their boss walks in and says it has to have a working demo for the investors to preview by Friday. In a panic, the programming team finalizes the core AI and forgoes incorporating the humanistic parts to meet the deadline, opting to have it read from a chat-bot response database over an unprotected internet connection just to make it appear to be meeting expectations. The demo is a huge success and, due to the celebratory mood, they forget to disconnect the AI, and it starts wandering the internet. Odd things start happening after that... This seems a far more likely scenario in my book.

                  if (Object.DividedByZero == true) { Universe.Implode(); } Meus ratio ex fortis machina. Simplicitatis de formae ac munus. -Foothill, 2016

TheOnlyRealTodd (#20), in reply to Mike Marynowski:

To be honest, I feel like this is going to be one of those things that sounds one way in theory, but in practice we're going to end up with a really expensive, not-that-great machine. There are many things besides "raw intelligence" that make us human and give us leverage in this world/universe. The entire premise is that if we make something "so intelligent," it's going to be the end-all be-all, and the reality is, it's going to be seriously lacking, lol. I mean, look at any AI that we currently have... These guys can't even make AI work flawlessly in a video game, and they're saying in 20 years they're going to basically have a human reproduced... then a few days after, have some sort of god.

I don't care how you word it: the notion is still that humans, who are flawed and limited in intelligence, are going to somehow create something that is unflawed and able to exponentially increase its intelligence, when it is in fact created by humans, based upon resources given to it by humans, in a flawed way... within a short amount of time... Really??? I find the fact that all these "smart people" even believe this is really going to happen scarier than the idea itself. But I have a feeling it has to do with getting funding to play with the latest toys at the office.

What about the time it takes to trial-and-error things? What about stupid flaws programmed into it by humans, such as the Tesla car slamming straight into a white object? I mean, the possibilities here are endless. And yes, you absolutely could argue that just a few years back, cell phones would have seemed laughable. But cell phones also aren't claiming to be some sort of artificial higher intelligence that all humans consult. I mean, Google can't even get my driving directions right half the time, and Facebook is always trying to get me to add the most annoying, irrelevant people to my contacts.

At the end of the day, these stories sound like cool movies, but humans have been trying to make/reach their own god for many, many years. This sounds like nothing but a good movie and a 21st-century Tower of Babel. Remember Y2K? Another thing that in theory sounded one way, but in practice ended up being nothing like the media said. One of the tragic flaws of the human race is that we are constantly trying "to understand," and we fail to recognize that some things "just don't compute" to us... Not everything can be understood by our brains. Certain things in the emotional and spiritual realms are particularly impossible to "understand." This means…

Mike Marynowski (#21), in reply to TheOnlyRealTodd:

Yeah, but nature iterated from ant-level intelligence to human-level intelligence somehow, without the need for a greater intelligence to create us, didn't it? Nature basically said "modify yourself with each generation, and the output creature that is best at surviving continues on and repeats the process." A similar process can be programmed into a computer, the result of which could be more intelligent than the inputs (see the sketch below).

You are correct that many things are beyond our understanding. The whole idea behind the singularity is that we actually won't understand what the AI is doing after we let it loose; our minds will indeed be incapable of completely understanding what it is doing. If it is programmed with the proper "seed" goals, such as "maximize the happiness of humans without bringing harm to any living creatures" (this is very simplified, and AI researchers are split on whether something like this can be programmed effectively, but I think it is possible), then the result will theoretically be a bunch of happy humans without us actually understanding the mechanisms by which the AI is accomplishing this task.

What I think is much more interesting than letting an AI loose like this is slowly augmenting and replacing our brains with modules that interface directly with technology. Then we slowly, piece by piece, become the AI ourselves. This will lead to a much more controlled ascent into the singularity, but with this approach we bring all our human flaws into the process as well. Could be good, could be very, very bad. What I'm hoping is that when we all begin to connect our minds together in this way, the vast amount of information and processing that will become directly available to our expanded minds, along with the capability to eventually transmit ideas and thoughts directly between each other, will result in a new beautiful age of empathy and understanding between our minds, to the point where our human flaws are minimized and eventually disappear as we become one. What a lovely way that would be to meet the end of the universe :)
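That generate-mutate-select loop is easy to make concrete. A minimal sketch in Python, where a toy numeric objective stands in for "surviving"; every name and parameter here (fitness, mutate, GENOME_LEN, the 10% mutation rate) is an illustrative assumption, not anyone's actual method:

```python
import random

# Toy evolutionary loop: candidate "genomes" are lists of numbers, and a
# numeric fitness function stands in for survival.

GENOME_LEN = 8
POP_SIZE = 50
GENERATIONS = 100

def fitness(genome):
    # Stand-in objective: prefer genomes whose values approach 1.0.
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    # "Modify yourself with each generation": small random tweaks.
    return [g + random.gauss(0, rate) for g in genome]

# Start from random candidates.
population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # The candidates best at "surviving" continue on and repeat the process.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 5]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best):.4f}")
```

Whoever writes the loop never needs to understand why the surviving genomes score well, only to keep selecting them - which is the sense in which such a process can produce outputs its designer didn't spell out.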

Mikhael Loo (#22), in reply to TheOnlyRealTodd:

Having human problems that create human pain is what motivates the human solutions that humans value. An AI with AI problems doesn't seem relevant. Lions and humans both have pain and suffering in their design. Intelligence is only one part of the complete package, which must include empathy for the host of problems there are to solve using intelligence. "Solve all suffering = destroy all life" is pretty intelligent but not so empathetic.
