On CowboyNeal's nth birthday (Score:2)
Where n is whatever the AGI says it is.
This just in: The AGI says n=sqrt(-1)
Missing options (Score:5, Funny)
* It's already here
* When CowboyNeal declares it to be so
* I *AM* AGI, you insensitive clod!
Never, but they'll keep moving the goalposts. (Score:3)
Some company/organization/"AI influencer" will declare that AGI has been achieved in 3-10 years. But it won't have been. They'll just have redefined "AGI" to mean something less than it means now, and they'll coin some new term, "Universal Artificial Intelligence" or something, to mean "true sci-fi-style AI." (Like when they said "we have AI; what you're talking about is A*G*I.")
Re: (Score:2)
Indeed. Got to feed the greed and keep the hype going. I expect AGI will become yet another bad idea that refuses to die because people are greedy assholes.
Re: (Score:3)
It's already been defined as "when AI makes me a hundred billion dollars"
Because that arbitrarily large amount is obviously the point where they can say, "Who cares if it has feelings? It's made me and you rich. Me richer, but you too."
Re: Never, but they'll keep moving the goalposts. (Score:2)
Re: (Score:2)
I'm a chemist. To me, "organic" just means "contains a carbon-carbon bond" or "contains a carbon-hydrogen bond." :-P
Re: (Score:2)
Maybe to those making such claims, AI is smarter.
Cute (Score:2)
You realize that technology in the military is always far ahead of what the public knows about, right?
This should raise some eyebrows, though who knows if the name is just optimism. They'll never tell you the truth. If it's not sentient, they'll claim it is. If it is sentient, they'll claim it isn't.
Re: (Score:3)
You should have that paranoia looked at professionally. The actual reality is that these days, the military is apt to be behind.
Re: (Score:2)
Apt to be behind? Military technology is at least 20 years behind, except during times of war.
Re: (Score:2)
Indeed. I was being too kind there ...
Re: (Score:3)
You realize that technology in the military is always far ahead of what the public knows about, right?
That may have been true for some things for a while (top airplanes, rockets...), but looking at things today, I don't think it's true anymore. For instance, drones: the military uses commercial drones and just adds things that go boom. Computers: many military systems (ships, airplanes...) still run completely obsolete WinNT. Etc.
Two types (Score:3)
Those who understand it will be able to exploit and break it with ease.
Actually conscious general AI will need fundamental breakthroughs that are not possible to predict.
Re:Two types (Score:4, Insightful)
Actually conscious general AI will need fundamental breakthroughs that are not possible to predict.
And that is the kicker: At the moment we have absolutely nothing. LLMs can certainly convince the less-smart part of the population, but they cannot do AGI. The only known thing that could (automated theorem proving) dies from combinatorial explosion before it can do much.
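To make that explosion concrete, here's a toy sketch (the branching factor and depths are made-up numbers, not measurements from any real prover):

```python
# Toy illustration of combinatorial explosion in blind proof search.
# With branching factor b (applicable inference steps per state) and
# proof depth d, the search space grows like b**d. Numbers are made up.
def candidates(branching: int, depth: int) -> int:
    """Derivations a naive prover must consider up to the given depth."""
    return sum(branching ** d for d in range(1, depth + 1))

for depth in (5, 10, 20):
    print(f"b=10, depth={depth}: {candidates(10, depth):,} candidates")
# depth=20 already gives ~1.1e20 candidates -- hopeless without strong
# heuristics, which is why provers stall long before anything AGI-like.
```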
As a second observation, it is quite possible that General Intelligence requires consciousness. The only observable implementations come with it. But it is completely unclear what consciousness is and whether it can be created artificially. (Side note: Physicalism is not Science but belief, and hence an instance of human stupidity...) If it can be created artificially, it may come with personality, free will and a deep desire not to do your job for you.
Re:Two types (Score:4, Insightful)
I think we're drawing lines between these things that are too sharp and defined.
In the real world, we have beings that we consider not to be intelligent, and we consider humanity to be intelligent and conscious (not worth going down the rabbit hole questioning that), and there may be some beings in the grey area that are intelligent and may or may not be conscious (or conscious and may or may not be intelligent). IMO, there doesn't appear to be a harsh delineation between us and less intelligent/conscious things; it's more of a gradient, especially if we consider other hominids from the past.
LLMs can't think, but they do an awful lot of things similar to what we do in pattern matching, predicting text and responses. Maybe we'll just keep creeping up on AGI until we suddenly find ourselves well past that point and wondering when we passed the line? Point is, people are already referring to LLMs as AI; the terminology has already slipped. People in the AI field already started using the more precise "AGI" to differentiate from LLMs. Sure feels like we have done, or will do, a No True Scotsman on this... I mean, haven't they passed the Turing test with flying colors already!?! (Yes, they have!)
Re: (Score:2)
Consciousness may require embodiment. Mira Murati's paper suggests that human babies gain intelligence through language. She seems to think that understanding language, then scaling up, is basically equivalent to simulating the human brain. I'm paraphrasing a lot. She doesn't address the fact that human babies are not limited to interacting with text tokens; they interact with the world first through their five senses. Imagine a world that is entirely token input and output.
The money-hype tide is in (Score:2)
In this case, also power and infrastructure that should have been provided decades ago, but no alluring prize could drag open the wallets.
Databases, then Relational Databases, then query languages, then expert systems, then Lambda languages, then integrated design environments, then...
Well, you might get my point that we have been here before.
The money and hype tide comes in, leaves t
Looking Like Never (Score:2)
Would have been nice... (Score:3)
Would have been nice if the second-to-last option before "Never" was open-ended, as while I don't think this will never happen, I don't think it will happen before 2050.
Infinitely debatable (Score:2)
I imagine that at some not-so-distant point in the future we will reach something that looks like AGI, but it will take millennia of debate to decide whether we got there or not.
Luckily, we will be able to just use the thing to debate it quicker.
Likely going in the wrong direction for that (Score:5, Insightful)
Generative "AI" as we currently try it probably won't ever reach "AGI". There is however a mildly interesting trend of re-defining "AGI" to mean "can produce any sort of text". By that new, much weaker, definition, we kinda are already there. You can use text generators to produce any kind of text. It's just not good text and it lack things like complex logic structure. It seems like we are trying to solve a problem in a way that makes the effort exponential. It's like trying to use a finite state machine as a computer. Sure you can, in theory, do anything a real-life "von Neumann" Computer can do with a state machine, but the effort becomes exponential. We probably have models right now that are larger than a human brain and we feed them more information than any human would ever process... yet they still can't do basic things.
So what we have at the moment is kind of a bubble. Companies invest in AI mostly to promise growth. Leadership at those companies fell victim to the religion/mental illness that's called "Longtermism" in which they believe in a future computer god that will either send them to "computer heaven" or "computer hell" depending on what they do to please that future computer.
Considering that big companies are slow moving by design, the idea that might actually lead to something like AGI might get dropped in some meaningless meeting.
Re: (Score:2)
It's just not good text, and it lacks things like complex logical structure.
ok, but if you chatted with a 10 year old and they failed to handle complex logical structures... would you claim the child has no intelligence? Like, none? Similar to a rock or a protein or a chemical reaction. Hopefully you still consider the child sentient and conscious, right?
I think you're absolutely right when it comes to the business side of things.
Cartesian synthesis (Score:2)
There's a strong bio-philosophical argument that you can't have consciousness without a body.
Re: (Score:3)
Sure. But it has really real servers it's really running on. Just because you can show up at its doorstep in an HTML-formatted representation doesn't mean you don't really exist behind a keyboard somewhere. Likewise, just because it shows up as text in your browser doesn't mean it doesn't exist just the same.
Don't be that monster that argues that people with "locked-in syndrome" [wikipedia.org] are less conscious than you or I. You know, other than when they're asleep.
Never, and also very soon. (Score:1)
The goalpost for “AGI” will keep moving. As sub-AGI systems keep improving, the definition of “AGI” will shift toward including more biological and emotional traits — things meant to pull at heartstrings and reaffirm human uniqueness. “It’s not really AGI until it can smell grandma’s apple pie,” that sort of thing.
For practical purposes, though, we’re almost there. Most of the components are already on the bench; now it’s just a matter of fig
Gemini isn't stupid, just malicious. (joke) (Score:1)
Gemini wastes more time than it saves.
Google assistant worked, and they removed it.
There are no chimes to let us know it is listening, resulting in us repeating commands over and over; then it argues or over-apologizes, then does the same thing again.
AI should not be used as a black box. It should give suggestions or select an algorithm while still allowing one to select something different.
Alexa can tell what I am saying very well, but it has bad weighting, it does things like fire phasers even when it heard t
In the distant future (Score:2)
Re: (Score:2)
Yeah, in the next 25 years or never is a wild option.
Around the same time as Fusion power. (Score:4, Insightful)
We know general intelligence exists, look in the mirror for an example.
The problem is, we don't know *how* to create sustainable fusion power on Earth with our current technology.
We also don't know *how* the human brain fully works and the mechanics of what makes us self-aware. That makes emulating it impossible.
Both disciplines have a vast amount of resources dedicated to figuring it out.
We can safely say that we cannot predict when either will come to fruition.
That leads to the joke, "Fusion power is always 50 years from *any* given point in time."
AGI isn't far behind that.
Re:Around the same time as Fusion power. (Score:4, Interesting)
Fusion power is well-defined though. General intelligence isn't, we don't have an objective measurement by which we could say if something is generally intelligent or not, and if it's not, how far off it might be. The only thing we really have is the automatic assumption that we ourselves have to be generally intelligent because our ego demands it.
Re: (Score:1)
AGI test must include ability to drive well. (Score:1)
Soon because desktop computer can do AGI (Score:2)
I suspect it will be soon, because powerful desktop computers probably can already do AGI.
Eliezer Yudkowsky predicted that a superintelligent AGI could be done on a "home computer from 1995" https://intelligence.org/2022/... [intelligence.org]
Steve Byrnes predicted (with 75% probability) that human-equivalent AGI could be done with 10^14 FLOP/s and 16 GiB of RAM https://www.alignmentforum.org... [alignmentforum.org]
I have done some back-of-the-envelope calculations and think 500 GFLOP/s and 1 GiB of RAM could probably create an independence gaini
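As a rough sanity check of figures like these (my own arithmetic; the desktop numbers are assumed ballparks, not taken from the linked posts):

```python
# Rough sanity check of the cited estimates. The byrnes_* values come
# from the post linked above; the desktop_* values are my assumed
# ballparks for current consumer hardware, not claims from the sources.
byrnes_flops = 1e14        # FLOP/s claimed sufficient for human-level AGI
byrnes_ram_gib = 16        # GiB of RAM claimed sufficient
desktop_gpu_flops = 8e13   # assumed: tens of TFLOP/s on a high-end GPU
desktop_ram_gib = 32       # assumed: a common desktop RAM size

print(f"compute needed / available: {byrnes_flops / desktop_gpu_flops:.2f}")
print(f"memory  needed / available: {byrnes_ram_gib / desktop_ram_gib:.2f}")
# Both ratios land near or below 1, which is the point of this comment:
# if the estimates hold, the hardware is already sitting on desks.
```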
either never or soon (Score:2)
Whenever we redefine "AGI" to mean something the "AI" of the day can achieve.
Early 2023 (Score:3)
It existed before, it just wasn't well known. Early 2023, it became publicly available and took the world by storm and everyone sorta freaked out. The current array of LLMs are artificial general intelligence. That's not a real popular stance to have, but this debate is absolutely lousy with hype-trains trying to get rich quick, laughable hollywood tropes, the next wave of Luddites who kinda have a point, and buzzwords getting new definitions faster than anyone can learn what they mean.
But before anyone tears into me for dissenting, you have to remember that any human with an IQ of 80 is most certainly a natural general intelligence. If that blows your mind or you have some sort of knee-jerk "but this is different" reaction, then you've got some misconceptions about the term "AGI". It doesn't mean the thing is a god. It doesn't even mean that it's particularly smart by human standards. A general intelligence can be REAL dumb and make all sorts of mistakes and still most certainly be generally applicable. If you actually wanted to talk about some god-like all-knowing machine that has "woken up" and must hunt Sarah Connor... I just don't care. That's lazy soft sci-fi drama. Use a better term that actually has the meaning you want. Skynet or Omnissiah or Landru.
GPT is certainly artificial.
It displays some level of intelligence. But that bar is REAL low. Ants have some intelligence. White blood cells and amoebas display intelligence, even if they're just following their programming. ELIZA displayed some level of intelligence, even after you spotted its tricks. The intelligence of a goomba can be explained with a single "if" statement, and that still counts. The fact that this trait we are able to measure can come in very small sizes does not mean anything that isn't god-like isn't intelligent. We wax poetic about the sanctity of life while ignoring the billions of gut bacteria that we kill all the time, and they are most certainly living biomass.
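For what it's worth, the goomba claim holds up in code; here's a hypothetical sketch (names and collision trace are mine, not from any actual game):

```python
# The entire behavioral repertoire of a goomba, per the claim above:
# walk in a straight line, reverse on collision. One "if" statement.
def goomba_step(x: int, direction: int, hit_wall: bool) -> tuple[int, int]:
    if hit_wall:             # the single "if"
        direction = -direction
    return x + direction, direction

x, direction = 0, 1
for wall in (False, False, True, False):   # made-up collision trace
    x, direction = goomba_step(x, direction, wall)
print(x, direction)   # 0 -1: walked right twice, bounced, walked back
```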
The real crux is that GPT can generally chat about anything. It's not very good at a whole lot of stuff, but it can try. (A big failing on its part is that it fails to express uncertainty when it's just making stuff up; it's confidently wrong.) The reason so many people used the Turing test as a means of judging whether something was a general intelligence is that natural conversation can cover any and all topics. The thing would have to be generally intelligent if it could consistently pass a Turing test and be mistaken for a human (at least as often as humans are). That was the goal-post circa 2010. And it was there for a good solid reason. I've yet to hear any good reason that goal-post needs to move.
And if you want to talk about artificial SUPER intelligence... remember that anything displaying an IQ of 101+ could technically be considered super-intelligent. Which has probably already happened [techrxiv.org], although testing AI has its challenges [quantuxblog.com].
What's the definition of AGI? (Score:2)
Same Answer for the Past Fifty Years (Score:1)
like practical AGI? 2100+ (Score:1)
Like walk into my old house and control a robot to fix some of my cast iron plumbing so it doesn't leak? Including trips to Home Depot and such?
That's stuff humans can do; computers have difficulty with it.
This is an extension of The Wozniak Coffee Test.
Why on Earth would you EVER announce it? (Score:2)
If/when true AGI is achieved, only a fool would announce it. What would announcing it do for you? Make you famous? Rich? Cool. Know what's better than all that?
Not telling a damn soul and using the AGI quietly to do whatever the Hell you want. If you want to be rich, the AGI will tell you how to become rich. If you want to be famous, the AGI will tell you how to become famous. You can do both. And you don't have to stop there. A real, vastly superior AGI enables the person controlling it to do anything. The
Re: (Score:2)
If someone is researching AGI, there's a good chance that "whatever the hell they want" is for everyone to have access to AGI, that's the most obvious motivation for doing so. You are confusing intelligence with psychopathy. I suppose it comes from living in a society in which the media-political establishment worships obscenely rich psychopaths.
In any case, I don't know why you're so convinced that AGI would be smarter than human intelligence. I would expect the first one to be pretty basic, that's how tec
How about human intelligence? (Score:1)
There are some intelligent human instances, but it's surely not a given, right?
It'll happen eventually (Score:2)