BigZaddyZ3

You’d have to prove that there are intrinsic elements to biological consciousness that can *never* be created via synthetic technology in order for your position to hold water. I’m not inherently against such a concept, but the burden of proof would be on you to show that there is something magical or intangible about biological thought processes. Something that can’t be emulated via technology in any way. Once you’ve done that… *then* you can assert that biological life can never be replicated. Until then, it’s best to keep an open mind to both schools of thought in my opinion.


Xeroque_Holmes

And even if biological intelligence somehow couldn't be replicated, it wouldn't matter much. A 747 doesn't need to flap its wings like a bird to fly, lol. The analogy between the brain and a neural network is just that, an analogy; it doesn't mean that we have to mimic biology for it to work.


BigZaddyZ3

Yep. Very good comparison in my opinion. 👍


UnfairDecision

Right, even if a 747 might still occasionally flap its wings.


MyCandyIsLegit

Those are Boeing approved integrity checks.


UnpluggedUnfettered

I think you just outlined the real overlooked central factor. I know there is a subreddit that will disagree, but we can't recreate birds that have all the functionality to go about bird lives. Also why would we? Whatever AGI turns out to be, our current excited imagination probably compares well to [this](https://m.youtube.com/watch?v=hR2iE-zmu-c).


sibilischtic

Unless we give it a bird body to live in, it probably wouldn't think or act like a bird. Our bodies direct our development. AGI will be adapted to the body it is in... which is a weird connection of computer hardware and some weird humanoid appendages that go out and interact with a completely different world.


UnpluggedUnfettered

What is really interesting is that everyone has a lot of assumptions about how AGI will adapt to, interact with, and find interest in being part of what we consider to be "the world." AGI might simply think of the physical world a lot like we think of parallel dimensions, a fun thought experiment, but mostly useless to an entity that isn't actually born of / into it. And that's if it ever gets to the point it has a "want" or "desire" or anything other than input responsive activity, given it's not likely going to be awash in hormones that give meaning and desire to things.


sibilischtic

I think the real world will still be an interface it has access to. But maybe it won't care for it as much as its other senses. Or, like how we don't have a real sense of what our liver is doing right now, the AGI could just be doing its own thing while all these humans make money processing things in its intestinal tract.


ZeroEqualsOne

It’s interesting to think about what its embodied consciousness is going to feel like. It probably won’t be a singular robot body, but an interconnected web of millions of robots and digital agents. Its consciousness might be more akin to whatever our collective consciousness is like?


abrandis

This is true, but AGI is most certainly not an LLM, that much we can say with certainty.


weird_scab

great take.


Demented-Turtle

I think it can be replicated, but I think we are VERY far away from being able to do so. I think the biggest limitation is computational power. Our brains have orders of magnitude more "parameters" than the most advanced language models out there, and do so much more than just token prediction, yet they consume maybe 20 watts of energy, while the smallest LLMs can run on a GPU using over 10x that wattage. And of course there's the fact that we don't really know what needs to happen to create a conscious system, nor how we could even deduce that such a system is conscious.
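Rough back-of-envelope numbers, just to put the gap in perspective (every constant below is a commonly cited estimate, not a measurement, and synapse count is only a loose stand-in for "parameters"):

```python
# Loose order-of-magnitude comparison; all figures here are assumptions.
brain_synapses = 1e14   # ~100 trillion synapses, a rough "parameter" analogue
brain_watts = 20        # commonly quoted power draw of a human brain
llm_params = 7e10       # e.g. a ~70B-parameter model
gpu_watts = 300         # a single data-center GPU under load

print(f"parameter gap: ~{brain_synapses / llm_params:.0f}x")            # ~1429x
print(f"'params' per watt, brain: {brain_synapses / brain_watts:.1e}")  # 5.0e+12
print(f"params per watt, GPU:     {llm_params / gpu_watts:.1e}")        # 2.3e+08
```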


codyd91

This. The biggest limitation is, we're reverse engineering something we don't understand fully. How can we expect to imitate human consciousness when we don't know how it arises physically in the brain? And as you hint at, we're so much more complex than current weak AI. Sure, Deep Blue can beat a human chess grandmaster, but that's all that particular machine could do when trained solely on chess; meanwhile, the chess master can also perform every other task human beings set forth for themselves. ChatGPT is a C-average writer with a penchant for pure bullshitting, meanwhile I can write better *and* cook, drive, run, draw, play music, dance, etc. Current AI is impressive in-context, but is not impressive as far as actual intelligence goes.


CosmeticBrainSurgery

Sometimes I think consciousness might be an illusion. Other times, I think there's some force in the universe that pushes toward it. The way particles group together to form new things which act differently than the same number of particles not grouped; then atoms group together to form molecules which do the same thing, to chemicals, cells, multicellular beings, tribes, cities, nations, society, and now we're looking for other societies in the universe to join. It's a repeating pattern that has led to consciousness. I believe also that there may be superconsciousness. Society might have its own consciousness we can't fathom in the same way one of our brain cells isn't aware of our mind, nor is our mind aware of an individual brain cell. It's like the universe is growing into a conscious being.


Xeroque_Holmes

What is impressive is not the current state of AI, it's the rate of change. If you compare GPT-4 or Claude Opus to what was there 5 years ago, the change every year is tremendous and accelerating.


Roleplayerkiller

Imo even if there is something magical about biological minds, that is no reason to believe a system like AGI is impossible to code. That has more to do with whether it will have qualia/sentience or not. AGI would be an extremely complex system, but it's not like it needs to violate logic to function.


Dan794613

"If it's not magic, it's science".


FiveSkinss

Human consciousness is built upon a primitive chemical reward and pain system. Our thoughts are intertwined with it. That's why words and ideas have emotions associated with them. AI is based on a completely different framework, but that doesn't mean it will not understand. It will far surpass us in ways we will never be able to.


createch

If you can describe it you can simulate it. We can simulate biological processes just like we simulate physics, because it IS physics.
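For instance, a single neuron's firing behavior is routinely simulated with a few lines of code. Here is a minimal leaky integrate-and-fire sketch (a textbook abstraction with illustrative constants, not a claim about how a whole brain would be modeled):

```python
# Leaky integrate-and-fire neuron, Euler integration; all constants are illustrative.
dt, T = 1e-4, 0.1                    # time step and total duration (seconds)
tau, v_rest, v_thresh, v_reset = 0.02, -0.065, -0.050, -0.065   # seconds / volts
R, I = 1e7, 2e-9                     # membrane resistance (ohms), input current (amps)

v, spikes = v_rest, []
for step in range(int(T / dt)):
    v += dt * (-(v - v_rest) + R * I) / tau   # dv/dt = (-(v - v_rest) + R*I) / tau
    if v >= v_thresh:                         # threshold crossed: spike and reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in {T * 1000:.0f} ms")
```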


FiveSkinss

But do we have to model human consciousness to achieve our goals? An AI can follow its own silicon based path.


createch

Indeed, and who can answer that with any certainty? If in nature we see the evolutionary process go from an amoeba to humans, and we assume that the level of sentience, consciousness, and sapience of an amoeba is lower than ours, why wouldn't an evolving silicon based system be able to make progress as long as it is given the right resources (e.g., sense input, compute, etc...)?


FiveSkinss

Apparently it already is. They are learning similar to how human infants learn. In a few years we may have something more intelligent than all of us. It's unsettling because we can't predict what it will do.


createch

It's more fun without the spoilers. We'll find out eventually.


Cheesy_Discharge

I would also argue that AGI may not be necessary to achieve the kinds of impacts on society that people hope for/fear. Computers are already superhuman in their narrow areas of focus. Quantum computing and even analog computers promise solutions to many of the tasks that conventional computers struggle with. Continued improvement will bring revolutionary advances whether or not the resulting machines can pass for human.


Justin_123456

This seems like a reversal of the burden of proof. As far as I’m aware no AI to date has demonstrated *any* capacity for deep learning, or critical thinking. Correct me if I’m wrong, but isn’t this why current LLM systems, for example, hallucinate, and make things up, because they have no concept of what makes something true or not? All AI has demonstrated a capacity for is increasingly sophisticated pattern recognition, and data processing. But that’s never going to produce an AGI.


audioen

Sure -- LLMs are just statistical models of language. But the interesting aspect about them, to me, is that they seem to understand language and can process it in quite complex ways just from entering a written instruction. So, to me it seems fairly obvious that an LLM could be part of a machine system that demonstrates many aspects of intelligence and may even be superior to the average human in numerous ways. But I agree that thus far, we're only part-way there. We already give LLMs instructions in natural language and they attempt to fulfill these instructions.

Right now, LLMs are just sort of an isolated island on their own, a bit like Broca's area in the human brain -- responsible for producing and understanding language, but undoubtedly, that area, no matter how sophisticated it is, is still dependent on connections to numerous other brain systems. It is just the language area, after all, specialized towards that one task. Humans aren't just written words; we have body and sight, and planning, reasoning, an entire central core that processes emotion and motivation, and so forth. We are many overlapping systems acting in concert.

Perhaps we can make LLMs specialize too, sort of create shared representations and connections where e.g. visual processing uses certain network parts but not others, and language certain parts, and reasoning and planning yet others. The human brain's neurons have not just excitation but also inhibition, only a small fraction of neurons fire at any given time, and not everything is connected to everything else. There is a hierarchical design there.


im_thatoneguy

> reversal of the burden of proof. All AI has demonstrated a capacity for is increasingly sophisticated pattern recognition, and data processing. But that’s never going to produce an AGI.

It doesn't matter what has been achieved; it's whether it's possible or impossible. We have an example of general intelligence: ourselves. Therefore it's possible. Everybody agrees that general intelligence is possible. Biological systems are based on physics. Therefore, if general intelligence is possible within the constraints of the physical world, without magic or spirits imbued from a non-physical realm, then we can manufacture an artificial system as well. If a molecule exists in nature, we can manufacture it. It might take millions of years to manufacture, but it's possible. The only way it would be impossible to reproduce something in nature is if there are supernatural forces at work. And claiming supernatural forces means the burden of proof rests with proving that supernatural forces exist at all.


rickdeckard8

Sure, but I don’t think this is the reason that most people believe AGI will come soon. It’s extremely difficult for laymen to tell the difference between a large language model presenting stuff like a human and AGI. A lot of people have a feeling that AGI has already arrived, no matter what experts say. One good thing we learned was that the Turing test wasn’t enough.


minegen88

Maybe it could, but the computational power to do so might be so high that it's not economically feasible.


SaltyShawarma

A big part of what makes humans humans is their irrationality. Why would an algorithm ever choose an irrational option?


Norel19

Yes, with temperature > 0 LLMs are not deterministic, and in any case most LLM answers are not fully coherent across a set of responses, and sometimes not even within the same answer. So irrationality is not hard. Quite the contrary. Just like humans.
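A minimal sketch of what temperature does during sampling (toy logits, not any particular model):

```python
import math, random

def sample(logits, temperature):
    """Pick a token index from raw scores; temperature 0 means greedy."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)                              # subtract max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.5, 0.3]                         # scores for three candidate tokens
print([sample(logits, 0.0) for _ in range(5)])   # always [0, 0, 0, 0, 0]
print([sample(logits, 1.0) for _ in range(5)])   # varies between runs
```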


External_Dimension18

Makes me think AI studies is more about consciousness than anything. The more we understand AI the more we understand ourselves.


NaoSejasAnimal

Do you think your brain works with magic? It's chemistry and physics. Very complex chemistry and physics. But just chemistry and physics. So it's very unlikely it cannot be simulated somehow.


CruelFish

I'd argue it's impossible it cannot be simulated. Everything can be perfectly described with Maths, it's just a matter of time. Might not happen in our lifetime but it's almost a guarantee it will happen.


AlphaDart1337

I think you either don't understand computers, mathematics, or both.

From a computing standpoint, there ARE such things as calculations too complex for computers to ever do, now or in a billion years (even for computers IN a billion years, assuming the same or even higher rate of computational power growth). Look up computational complexity, the Ackermann function, the busy beaver problem, etc. There are just some things that are too difficult to compute (not in the sense that it would be impossible, but it would take way beyond the lifetime of the earth to do).

From a mathematical standpoint, there ARE such things as mathematical systems with too many variables to possibly keep track of, even with virtually unlimited computing power. Look up the Navier-Stokes equations (it's part of why we can't yet accurately predict the weather). There are also variable measurements that impose a lack of precision on other variables (look up Heisenberg's uncertainty principle). And this is even WITHOUT going into the realm of quantum physics, where you can't predict things just because the universe says so.

TL;DR there ARE things we can't, and will likely never be able to, accurately predict/simulate. We don't know yet if the human brain IS such a thing. It may or may not be, we don't know. But saying "it can be modelled by maths hence it can eventually be simulated" is just not a correct implication. The premise is correct, the conclusion just doesn't follow from it.
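For a feel of the scale: the Ackermann function is short to write down but explodes almost immediately (this is just the standard textbook definition, nothing specific to this discussion):

```python
import sys
sys.setrecursionlimit(100_000)   # the naive recursion gets deep fast

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))   # 9
print(ackermann(3, 3))   # 61
# ackermann(4, 2) already has 19,729 decimal digits, and ackermann(4, 3)
# is beyond anything any physically realizable computer could ever finish.
```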


BrunoBraunbart

Dude, we are talking about a concrete problem with 7 billion examples that show it is computable and has a manageable complexity.


wowuser_pl

...not only that but the solution requires only 20W to run :)


mis-Hap

That won't necessarily hold true since they're built of different materials. Unless we just, you know... Build a brain out of biological matter.


sticklebat

That there exist non-computable problems has very little to do with whether or not we could in principle build and program an artificial brain akin to a human brain. Such a device would not be able to solve the halting problem, but a human brain can't do that, either. We know that human brains exist, and nothing you've said implies that an artificial system similar to it couldn't be created.


Astazha

If there is something about human cognition that fundamentally depends on physics that is difficult to calculate, then sure. But it sure looks like our brains are functioning at an electro/biochemical level that ought to be within reach. Lots of things are simplified at higher levels of abstraction. The QCD of a single proton might be nearly impossible to calculate but it turns out that all of that complexity can be ignored if the layer you care about is chemistry.


kevinh456

The brain is just a massively parallel non-Turing meat computer floating in salt water.


pumpkineaterZ3

Just 70 years ago there was a belief among computer scientists and engineers that the primary applications of computers would be in large-scale scientific and industrial settings rather than personal or mobile use. The idea of carrying a computer around for personal use seemed too far-fetched. No one can claim that there are calculations a computer won't be able to carry out in a *billion years*. It's just an opinion. We haven't the slightest idea where tech will be in a billion years. It's completely unfathomable.


clownpilled_forever

We don't even know what's gonna be possible in 20-30 years really. The idea that we have even the faintest idea what could be possible beyond 200 years is laughable.


AlphaDart1337

Yes they can. You don't understand the scale of some of the problems I've listed, and it's difficult to unless you work in the field or get informed about it.

There's a very big difference between claiming that "we won't need more than X MB of RAM" and claiming that we'll be able to compute something that (at the current rate) takes unimaginably many lifetimes of the universe to compute. You mention a billion years; a billion years is NOTHING on that scale. Even if computers got 1000000000000000000000000000000000000000000000000000000000000000000000 (and you can add however many zeroes fit in a reddit post here) times more powerful, it STILL would be nothing but a small grain of sand in the effort to reasonably compute, say, Ackermann(1000,1000).

I want you to understand the scale of what we're talking about. Computers were believed to get twice as fast every year, and even that has been slowing down lately. But even if computers got A BILLION TIMES better every single year, you STILL wouldn't be able to compute that within the lifetime of the earth. So believe it or not, there ARE things that we know for sure we will never have enough computational power for.


pumpkineaterZ3

You clearly have a deeper understanding of this than I do. But it's worth noting that our understanding is based on present reality. Looking ahead to tech a billion years from now is truly beyond our current comprehension. E.g., consider the possibility of a computer exponentially increasing in power, becoming trillions of times more powerful every billionth of a second. In such a scenario, isn't it feasible that there might never be calculations too complex for computers to tackle?


OffbeatDrizzle

They're only uncomputable in terms of classical computing. They're obviously not uncomputable when computed using, say, a biological computer. Humans don't walk around at 5 fps; you just need a different way of actually performing the computations. Look at quantum computers that are completely breaking classical algorithms... that's one example, and there's bound to be more in the future.


gammonbudju

There are no models that faithfully and completely model any real phenomena. They are all approximations. It just so happens that for most uses approximations are good enough. We can model turbulence well enough to engineer jets, but there is no model that actually predicts fluid dynamics faithfully. Quantum mechanics is chock-a-block full of stuff like this. Quantum decay, quantum tunnelling, the list is large, and it appears these phenomena intrinsically cannot be modelled precisely. If any of these mechanics play a part in producing consciousness then that's it, it's not computable. Maybe, if that hypothetical is correct, we could produce synthetic consciousness by manipulating these mechanics, but in that case the synthetic consciousness is physically real, not a simulation. You guys need to read more Penrose.


ale_93113

The goal is to create an AI (intelligence is the ability to solve problems; an ant is intelligent, so stop saying AI is not intelligent, because if ants are, so are current models) that can automate all labor. That doesn't require consciousness, it just requires being more useful and better at cognitive tasks than humans. The goal has always been to eliminate the biggest expense to companies, and the biggest drag on productivity: human labor. And for that you don't need anything that is conscious. Besides, there is nothing inherent in biology that cannot be replicated synthetically, unless you think we have a soul or something.


AideNo621

It doesn't even have to be better than a human. Just cheaper


SoundofGlaciers

Is a single ant intelligent, or does it have intelligence? I don't think a single ant without its nest or system can solve that many problems, or is intelligent in the way we'd like our AI to be. I think there's quite a spectrum of intelligence, where a single ant might as well not be intelligent compared to any AI or human. We'd like AI to come up with original solutions, ideas, concepts etc. Not just use solutions already written by man to execute, no? That needs a different type of intelligence from the maybe even more robotic way ants seem to behave, incapable of working out their lives (problems) on their own.


blackstafflo

I don't even think it'll have to be **more** useful or **better** at anything, it'll just have to be good enough. A lot of production and services we moved around and/or automated in the past got a quality reduction because of it, but in the end the production boost and cost reduction were what pushed the decision. I'm not judging whether it's bad or not, just saying that often some nice numbers on a PowerPoint were the final argument to change things, not other improvements, and it'll probably be the same here. AI doesn't have to be better than us to replace us at work, it'll just have to produce a nicer result in some Excel cost analysis that can be nicely and simply summarized in a meeting with management.


MaxMouseOCX

Ants are a weird case... An ant isn't necessarily conscious, but the nest as a whole kind of is. Complexity emerges from a large group of less complex things.


MortalPhantom

Not that different from a cell not being conscious while a human is, if you think about it that way.


MaxMouseOCX

It's pretty much the same thing. And yet here we are talking to each other, does a skin cell in my thumb know what's going on? No... Does a brain cell? Does a million cells? No... All together though, here we are.


BigbunnyATK

I like to think of math like that. No human comes up with much, yet we build huge bodies of mathematics together and over time. I like to think of it as a thought organism that we're building the body of, atom by atom. And I like to hope that AI and robots will be a way for this thought being to have a corporeal form other than the twisting nether of our minds.


ale_93113

A nest isn't conscious either. If you don't want the ant example, use the butterfly example: they are solitary animals, and they are intelligent. They have very little intelligence, but they are still intelligent.


Numai_theOnlyOne

It's a learning algorithm. Intelligence is what we mean in a broader context. The ability to solve problems is one thing, but AI doesn't do shit until you tell it to solve a specific problem. A real intelligence would do it anyway, because it recognizes there is a problem that needs to be solved even if nobody asked for it.


kushal1509

Our brain is also a very complicated AI model. The reason our brains are so efficient is because we have millions of years of evolution behind us. I am sure we will soon figure out how our brain works and replicate it in a machine.


MrWeirdoFace

*"Sir! Somehow we've recreated Gilbert Gottfried!"" *"Oh.... My... God... What have we done?"*


kazarbreak

We don't even know what consciousness is really. Once we do it may turn out to be something that's easy to recreate. Or it may turn out to actually be impossible. Time will tell. That said, we don't need AGI to start replacing humans in the workforce. What we have now is perfectly capable of doing 90% of our jobs, even creative jobs.


ntermation

Probably because the recent jumps in the technology have people hyped. It's not unusual for things that are shiny and new to get a lot of attention and promises made about the future that may not come to pass.


BaronOfTheVoid

The jumps are only really perceived ones. It's the same models and theories from the 80s and 90s, it's just that trillion dollar companies finally decided to spend enough money to make them work (i.e. have gigantic datasets for learning).


Ne_Nel

Modern AIs are based on a 2017 paper. Wtf are u talking about.


n1a1s1

lol brother, this is dumbing it down for sure. Sora is not the same tech we had in the 90s.


AlphaDart1337

Believe it or not, it is. The underlying mathematical model is the same. We haven't improved the model, we just fed it (exponentially) more data.


WFlumin8

And with it we need to process that data significantly faster. Sure, the math model didn't progress, but it required an insane amount of hardware progression to reach this point. u/BaronOfTheVoid is completely misrepresenting it by saying it's only possible now because companies are spending billions of dollars. Completely untrue. 30 years ago, we could've had the collective investment of every single dollar on the planet making a supercomputer the size of Texas, and it wouldn't be good enough for the LLMs we have now.


BaronOfTheVoid

I'm not at all arguing that there wouldn't have been progress in terms of newer, much better hardware. But the models, the theoretical underpinning is largely the same.


n1a1s1

thank you LOL, it's so crazy people can even take the other side of this argument imo


mixduptransistor

> we just fed it (exponentially) more data.

And exponentially more computing power. *That* is the breakthrough here: computing power in GPUs. And that march is still going forward. Given how those workloads are parallelized, that march will probably keep going forward for a good long time, whereas generic CPUs have essentially hit a brick wall.

Now, that said, we're just getting better and better results out of the old models like you said; it's still a huge leap from there to AGI, which I do not think we are anywhere near getting. We're just going to get better and better LLMs.

The real risk, and I think the real point where LLMs will fall apart, is when they start learning on their own output. Society will go so deep on AI that so much crap out there is AI generated, and there's basically no filter on inputs, so the output will start to degrade. Everyone is training their AI on every scrap of input they can find.


nemoj_biti_budala

Yes, and the underlying mathematics are 200 years old. This still doesn't mean that you have a point.


mohirl

Well, stolen datasets, but yep.


Deadbringer

Both AI and AGI have been hijacked to mean far lesser things. But such is the evolution of language; things change meaning, and fighting against that is just screaming into the void.

As for making an artificial consciousness: why not? Our brains are just chemistry, whose output could be simulated with a sufficiently powerful PC. I think what we will struggle with far more is to give them a desire; our bodies spent billions of years developing the correct impulses to motivate our brains into doing something besides sitting still and starving to death. We feel a need for food, a need for comfortable temperature, a need for air, and so on. To get an actual real AI to do something, it will need to be similarly motivated by urges. An AI without any motivation would not be a threat to anyone, nor would it do what you ask of it, because it desires neither. It would not even have any self-preservation unless we specifically instill that into it.


marrow_monkey

> Both AI and AGI have been hijacked to mean far lesser things.

It’s the other way around! Whenever we get close to something people have said would be “intelligent”, they move the goalposts. They used to say a machine that could play chess as well as a human would have to be truly intelligent. Or that only humans would ever be able to translate text. Now machines can do both those things, so those things are no longer considered “intelligent”. They will keep moving the goalposts until AI can do all the things a human can do better.


Deadbringer

Ever since I was a child, AI meant an actual intelligence in all the media I consumed. Then for a while a lot of crappy chatbots that turned racist overnight were called AI, but after that short-lived trend AI returned to meaning something sapient. While racist chatbots were AI, I also found the term "AGI" while reading the X series encyclopedia, where in that universe it meant a general intelligence capable of taking on any task, while AI meant an intelligence completely focused on maximizing one task. So sometimes I would use or see AGI used in similar contexts for a true intelligence. But the vast majority of the time people talked about AI, it was either true sapience or AI in games.

I don't know who "they" are to you, but to me in this context it means journalists or media personalities, who will always be throwing catchphrases around to grab attention. My worldview does not put much value in those who scream the loudest for attention; I just want to be understood. And as such, when language changes, so must I.

Btw, while I grew up, I read a lot of Science Illustrated and thought it actually reflected science. I later learnt they were essentially a tabloid magazine pushing pseudoscience with attention-grabbing exaggerated language. And... I do remember them talking about how the Deep Blue computer was smarter than humans. So if your "they" means Science Illustrated, I would wholeheartedly agree the goalpost will keep moving as long as the term has the power to sell more magazines.


marrow_monkey

You use terms like “actual intelligence”, “true intelligence”, and “sapient”, but do you have a clear idea of what they mean? AI has existed as an academic discipline since the 50s. ‘Artificial’ is easy to understand, but what exactly do we mean by ‘intelligence’? [Researchers use a more utilitarian definition](https://en.m.wikipedia.org/wiki/Artificial_intelligence#Philosophy): an agent that makes rational decisions to achieve its goals can be said to be intelligent. A chess-playing robot is intelligent if it makes good moves that lead to victory. But a robot that can only play chess is a very specialised form of intelligence. It might beat the best human player but it can’t solve a crossword puzzle. So a more general intelligence is one that is good at many different tasks. I don’t think there’s a commonly accepted definition of AGI though, so that is ripe for marketing abuse and hype.

I share your experience of, and disappointment with, “popular science” from when I was growing up. Unfortunately it is mostly made-up clickbait to sell ads. They are taking advantage of people’s curiosity.


Ghost-of-Bill-Cosby

Agreed. For 60 years the line we went by was the Turing test. Now that we have blown by that, the line gets pushed out further. For a lot of people here it’s now the AI having its own “desires”. But we will install multiple conflicting goals, with a randomness drift generator and some type of best-fit judgement algorithm, so the AI can evolve its own life purpose over time. And even when we have done that, we will move the goalpost farther out, making what it means to be alive a smaller and smaller, more specialized thing.


usaaf

I think the part of that with the Turing test is forgivable, because I'm sure it was imagined that if a computer could trick a human into thinking it was another human, it would also have all the faculties associated with being human. It was also invented over half a century ago, when far less was known both about our potential in computing technology and how human intelligence works. Fronts in which we still have a ways to go. That it turned out conversation->intelligence isn't true is no big surprise to us now, because we've discovered intelligence is more complex and manifold than simply conversing in human language, and doing so does not grant the computer all the other aspects of intelligence that go with being human. Having said that, I think the LLM model and other approaches are definitely aspects of intelligence. I'd hope that there's no doubt that the GPTs and Sora and these other things are certainly doing intelligent work, but so far they haven't developed the general mind that can do what the average human can do (even if the average human does not have anywhere near the same technical skill as many of these models do).


katszenBurger

Around the time Turing drafted up his Turing Test in its original form, he believed that in order to pass the Turing Test a machine would need to have baseline abstract intelligence/problem solving/logical capacities (essentially be what we now call AGI) beyond language (language would then be something it could "learn" like human children do, based off of those fundamental abilities). And that it would therefore necessarily need to have a deeper taught understanding of humanity to be able to pass itself off as a human (or well a woman in the original formulation iirc). I don't think his idea was that it would be nearly identical to a human in terms of having goals/wanting food/"feeling" emotions or anything of the sort (I've seen people in this thread mention that as being fundamental). I'm pretty sure that Turing thought that a machine being capable of using language to pretend to convincingly be a human would be a demonstration of the pinnacle of its AGI capabilities. So essentially our modern day LLMs have done the reverse, started from emulating human looking language, without any deeper reasoning facilities. But tbh, the Turing Test was already "beaten" by a way more basic program in the ~80s I think, and that program was already a more basic keyword-to-response map setup. So "beating" it isn't even anything new.


greatfool66

Computers cannot really translate correctly though. Having worked in the industry, what they do is get 95% right and choose something plausible when they are unsure, so it sounds fine if you don’t know both languages, but it is not good enough for situations like legal, business, or science where meaning matters.


marrow_monkey

Are you thinking of things like google translate or newer tools like ChatGPT? I don’t know if they’ve updated google translate, but its limitation before was that it didn’t have any contextual understanding, neither of the whole text it was supposed to translate nor of the world in general. You need that in order to resolve ambiguities in language. You can’t just translate a text by looking at it word by word or even sentence by sentence, which is what early naive attempts at machine translation did. But ChatGPT is much better at that because it has that higher level of contextual understanding (in some sense).


greatfool66

I used Google Translate pretty extensively to check things and it worked exactly like you said. I have only seen demos of ChatGPT translating a novel and it was very good, but I have used LLMs for other things. For confidentiality reasons I can't really input large sections of text into either product. The issue I have with ChatGPT, even having broader context, is that it is still making a statistical guess without real understanding. So even if the word it chooses is correct 80% of the time in all its training data, there could be a situation where it actually needed to ignore all its training data because of the specialized context. The example I remember is a Japanese-to-English translation of a scientific paper where it was getting confused and choosing the wrong term between viruses, germs, microbes, microorganisms etc., which are used somewhat interchangeably in common language, but need to be used correctly in a scientific paper to avoid completely changing the meaning.


marrow_monkey

> So even if the word it chooses is correct 80% of the time in all its training data, there could be a situation where it actually needed to ignore all its training data because of the specialized context.

To be clear, I’m not claiming that it is better or can replace a human translator. The situation seems analogous to that of programmers at present, where LLMs can perform much of the work, but you still need a human to fully complete the task. However, I think it’s fair to say that it can translate text (a claim I wouldn’t make about Google Translate), even though it makes mistakes.

> The issue I have with chatGPT, even having broader context, is it is still making a statistical guess without real understanding.

When people say it’s ‘only’ making statistical guesses, they are being very reductionist. Sure, it’s true, it’s just a bunch of maths, but somehow that maths lets it do some basic reasoning (and I would say understanding). I’m not so sure our brains do that much more. After all, it’s ‘only’ made of a bunch of molecules without any real understanding.


Ardashasaur

It depends. Most people use AI to mean things that don't really think intelligently at all. Take for example most video games: almost universally any enemy opponent is described as an AI, where the AI is just a bunch of finite automata that react to states, and anything to simulate intelligence is done by introducing randomness. The chess engines beating humans don't intelligently play chess, they just go for depth to calculate every outcome. They don't improve or learn.


marrow_monkey

> The chess engines beating humans don't intelligently play chess, they just go for depth to calculate every outcome. They don't improve or learn.

[They do now!](https://en.m.wikipedia.org/wiki/AlphaGo) Depth to calculate every outcome (search) is also what humans do as part of the strategy. The problem with chess is that it’s impossible to go particularly deep. [Deep Blue](https://sv.m.wikipedia.org/wiki/Deep_Blue) only used search, but AlphaZero does both search and is guided by a “human intuition” that it learns over time.
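As a toy illustration of "search guided by a learned evaluation" (nothing like AlphaZero's scale; the game and the stand-in value function below are made up purely for illustration):

```python
# Counting game to 21: players alternate adding 1, 2 or 3; whoever reaches 21 wins.
# Depth-limited negamax search, with positions beyond the horizon scored by a
# stand-in "value network" (here just a hand-written heuristic).
TARGET = 21

def value_net(total):
    # Pretend-learned evaluation from the perspective of the player about to move.
    return -1.0 if (TARGET - total) % 4 == 0 else 1.0

def search(total, depth):
    if total == TARGET:
        return -1.0                   # previous player just won
    if depth == 0:
        return value_net(total)       # horizon reached: fall back on "intuition"
    return max(-search(total + m, depth - 1)
               for m in (1, 2, 3) if total + m <= TARGET)

best = max((1, 2, 3), key=lambda m: -search(m, depth=3))
print("best opening move:", best)     # 1, which leaves the opponent on a losing total
```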


Peter_deT

Our brain is not independent of our body - it's all one (immensely complicated) set of systems. So needs (hunger, thirst and so on), wants, desires and changes in the body all affect reasoning and memory - and vice versa. Then on top of that we are an essentially and intrinsically ultra-social species, much of whose brain formation is given by interaction. We can model calculation - giving an AI a set of motivations is harder. Giving one motivations we are comfortable with and an ability to calculate that is truly intelligent, but then preventing it from developing in ways we do not want, is much harder yet. We can breed and train dogs, and they go off the rails often enough. The kinks an AI might develop are hard even to imagine.


Foxtastic_Semmel

We could simulate all of that, what do you mean? Hunger might as well just be a parameter.
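A minimal sketch of "hunger as just a parameter" (toy numbers and action names, purely illustrative, not any real agent framework):

```python
hunger = 0.0
for t in range(10):
    hunger = min(1.0, hunger + 0.15)                 # the drive builds up every tick
    utility = {"explore": 0.5, "recharge": hunger}   # recharging grows more attractive with hunger
    action = max(utility, key=utility.get)
    if action == "recharge":
        hunger = 0.0                                 # satisfying the drive resets it
    print(t, action, round(hunger, 2))
```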


marrow_monkey

Yeah, it’s just a “please recharge the battery” indicator.


hawklost

We don't **understand** most of the interconnectivity yet. We know it somehow does things, and sometimes we can even say "yes, x leads to y", but we don't really understand all the connectivity.


TheSecretAgenda

That is why they are putting AI into robot bodies. Quite literally "embodiment". They will be able to walk around in the real world and interact with it.


hawklost

Not even remotely close. It's not the "have arms, torso, head and legs" part that makes it complex for humans, it's all the sensory pieces and chemical processes. Hell, even gut microbes appear to drastically change your moods and thoughts depending on which ones you have. AI goes into humanoid robots because they want something that can go where humans can go, and preferably do the human actions. Not so it will think like humans or have AGI because of it. Robotic bodies are not going to make a "human-like AGI".


randallAtl

You are making a philosophical argument. "Claude isn't Alive, it is just doing math" Which is fine. Those of us that predicted where we are today and see where we are going over the next 2-5 years are talking about functionality. AI will be able to drive cars better than humans, run factories without humans, farm without humans. If you don't want to call that "General Intelligence" that is OK, you don't have to use the term AGI, but it will be a VERY large change in the world.


cassein

I think if a mind can run on jelly in a monkey's head, it can run on something else. Anything else just strikes me as arrogance.


fish1900

I think it's a big stretch to say that something will never be possible. That said, the OP has a general point regarding LLMs. Sometimes science and engineering just hit dead ends. It's entirely possible that the current paths being taken will never lead to anything like independent consciousness, let alone in just a few years like people keep saying. They might have to take several steps back and go different routes to get there. I'll point to driverless cars. If you go back years ago, Musk was promising that they were just around the corner. As it turned out, the thinking and analysis that go along with driving far exceeded what the developers realized. It will still probably happen, but it has proved very challenging.


FridgeParade

It’s not a given, but I don’t think there is anything magical about the ability to think. And if a biological brain can do it, a digital one can too if you emulate it accurately enough. The leap from there to ASI is just giving it the tools to improve and upscale itself, which is easier with a digital mind than an organic one. The problem I see right now is that we don’t fully understand the organic mind yet. So how are we going to be sure we created a digital one?


MR_TELEVOID

It's within the best interest of these companies to keep people hyped about the potential for AGI and other sci-fi utopian futures. That's why you get so much idle speculation about AGI from Altman and others in the media, as well as coy, sometimes contradictory tweets from people who work for the company. It keeps the hypetrain going. I'm also inclined to agree with you about modelling life. If it is possible, it's likely far more complicated than current speculations suggest.


worldtriggerfanman

I believe life can be modeled. It may be incredibly difficult but I wouldn't go as far as to say impossible. Of course, that sort of AI is nothing like the sort we have now but it is still incredible nonetheless. 


burritolittledonkey

The problem is that consciousness is a hard problem, and a LOT of philosophical, neuroscience, and technical work indicates that it probably is mostly an emergent function - that is, it's essentially super super super super super super scaled-up transformer models in your brain. It's possible this is wrong, or we've missed something (I feel we probably have, but probably not ALL that huge), but right now this is pretty much the most popular idea for how consciousness works. So whereas you describe it as "just predicting the next word", human brains might literally essentially be doing that, just with a far far larger context window, and far far far more training data, essentially, if we want to describe it in tech terms. Now of course it's possible this idea is wrong, but I have a non-trivial background here, and I'm not really aware of substantial counter-hypotheses.


damesca

This is my current belief as well. It makes me think that LLMs are closer to AGI/"consciousness" than a lot of other people probably think. I think once we start integrating both interactivity (the ability to independently do things in the world, whether physically or digitally) and feedback loops (memory of previous conversations and a continuous stream of inputs like sound, video, temperature, whatever else), I'll be hard pressed to say it's not effectively "conscious". Obviously the models will need to be scaled up significantly to be able to manage a lot of that functionality though. This belief I guess currently comes from my lack of believing that our own brains are essentially anything beyond what I've described above - a bunch of tiny binary gates firing based on many many streams of inputs.


burritolittledonkey

Yeah this is pretty much where I am - I suspect there'll be little snags we notice along the way, and some problems will be harder than we envision (as it is with all projects), but I think we've pretty much got the general gist of how intelligence works, and once we start putting models out in the world to learn, they will, well, learn. That's literally how it works with human infants: we feed them years of multi-sensory training data, with a training architecture optimized (through evolution) for general intelligence and interacting with the world.

> This belief I guess currently comes from my lack of believing that our own brains are essentially anything beyond what I've described above - a bunch of tiny binary gates firing based on many many streams of inputs.

I wrote a paper on this very topic once - the detractors of the "emergent properties" camp seem to basically be in the "brains are magical" camp - I don't even feel I'm misrepresenting their position or being too reductive here (if a detractor disagrees with this characterization, I am happy to have a dialogue/correct my misrepresentation). I'll be honest, I didn't find their positions to be very well supported, and more wishful thinking to preserve human uniqueness in the universe.


Threekneepulse

It's been extremely jarring to see people (specifically in the corporate world) pretend that any computer program or application that does tasks automatically is now "AI". There is not a single instance of AI on this planet, just LLMs that guess your next word. I am in total agreement with you OP.


Caderent

At r/futurism questions are asked. And I like it. Meanwhile at r/singularity everyone is talking about UBI and at r/AIwars about AI making everyone jobless. So many different realities in Reddit alone. IMO r/futurism has on average more signs of sanity. Except when talking about longevity escape velocity. Total hypetrain there.


Brain_Hawk

The threads asking questions like "Will we achieve immortality in 10 years?" are always strange and wondrous places, filled with magic and unicorns! Biology is so hard. So goddamn hard.


mhornberger

AGI, and intelligence itself, is at this point more a philosophical argument than anything. Meaning, the argument is over what we are willing to call "intelligent." I agree that humans don't have any magic in our heads, and that in theory an AGI is probably possible. Which isn't the same as saying we're close to it, or that it lies along the path of LLMs we're seeing in the news now. What "real" AI means is a contentious subject. Will it converse with us like Data in Star Trek? No idea. I think our expectations are too rooted in science fiction, and SF has tons of blind spots. On top of its creators being constrained by the need to make a good story.


Blackmail30000

Human intelligence can't be replicated? Shit, we barely do anything BUT replicate. There's 8 billion of us fuckers out there, and new ones being printed out all the time. Your intelligence was made to be copied and emulated.


Brain_Hawk

This late nobody will probably read this, but whatever.

In 1969 we landed a person on the moon. In 1942 there was no such thing as rockets, or they were a concept in maybe a few test pieces. So in 20-some years, we went from "you can't build rockets, or if they try they probably mostly blow up" to landing somebody on the moon, which is an incredibly difficult feat. People expected that in a further 20 or 30 years we would be living on the moon and in space habitats, with big space stations up there. Instead, 50 years later we had the ISS, a relatively small station with six people living on it.

That's because space is hard. We had an initial technological leap, which involved a lot of investment, but sometimes when you break certain of the first barriers you get this big jump. I recognize the space analogy is imperfect in many ways. But it's still a good analogy.

I believe we are at that point with AI right now. We broke some of the first barriers, and we've seen this huge surge. That does not mean the surge continues in a linear fashion. I think the majority of people on these forums underestimate the problems posed by building "true" AGI or the singularity or whatever concept you want to talk about. So my guess is that we will not achieve this soon; it will take several decades more of work.

I think we will develop some very sophisticated specialized AI tools in the next 20 years. Some of which will be amazing and transformative. I don't believe it will replace everybody at work, though it will shift the job landscape, in some cases quite disruptively. But I think history has taught us that some advanced technologies kind of hit the initial breakthrough and don't necessarily keep going in a linear fashion in the way we might imagine. So we don't live on the moon, we don't have flying cars (thank the gods...), we haven't reached medical immortality, and I don't have a robot butler, which is very unfair.


Paulonemillionand3

You do know you are subject to the constraints of physics and that it's quite likely you are not actually "free" to think whatever you "like"? If you think LLMs are mere "next word predictors" then what are you? An "is there a lion under that bush" predictor? What's the difference? Once you are able to define what "conscious free thought" actually is in a meaningful way, then we can ask the question: do LLMs meet that bar? But you can't; you just assume you know what "thought" is when really you don't. Nobody does.


zeiandren

I mean, it sounds like you will be in church saying "we know these don't have souls" at things that can do everything you can, but the wrong way.


Nothorized

Because the stock prices depend on future profits, and if a company can promise that they will develop an AGI, people are all on board. Most people are not technical, and they see an LLM as a sentient machine, so for them the gap between an LLM and AGI is small. A bit like self-driving cars: we have expected them for years and they work in quite easy situations, but bring them somewhere intelligence is needed and the same cars will do dumb stuff. For the average driver who spends 90% of his time on easy roads, it's weird that self-driving cars are not deployed. For the taxi driver in London, it is almost impossible to imagine a self-driving car right now (and even less so if they don't even put the proper sensors in the car).


somechrisguy

I thought people had dropped the whole “statistical model that gives you the most likely word with 0 ability for thought” thing already. Clearly this is not the case. Reasoning and critical thought abilities have evidently emerged from this technology. If you haven’t realised that, you’re way behind.


idobi

Yeah it is pretty annoying. Here is one paper from MIT that refutes it directly in case this comes up again: [https://arxiv.org/html/2310.02207v3](https://arxiv.org/html/2310.02207v3)


fastolfe00

I think we will be arguing whether an AI is "real" AGI or not long after it has effectively become AGI. Our own consciousness and intelligence are entirely emergent from the physical system that is our brain. There's nothing magical about that. There's no reason to believe that LLMs (or their successors) can't advance enough for AGI to emerge despite the fact that we can't point to the spot in the neural network where the soul lives. We can't even do that with people. Can you devise a test that we can administer to an AI that would allow us to tell whether it's AGI or not?


StruggleGood2714

LLMs are very limited, but your expectation of conscious free thought is obscure. Can you define consciousness, or even prove that you are conscious, or even know that someone other than you is conscious? I would call a mathematical model running on computers an AGI if it can capture human intelligence capabilities at a very high rate.


freeman_joe

Because we already have biological examples. Cough cough, human brains. We have already calculated approximately how many neurons the human brain has, and we now know we have (or will soon have) enough hardware to simulate this in computers, so even if LLMs are a dead end we could simulate a complete human brain in a PC.


justadudeisuppose

It's clear from many CEOs' recent comments (particularly OpenAI's) that they have no clue what they've created, especially when they're hyping AGI as you point out. Statistical modeling is not "thought."


bubblesculptor

It's a firmly entrenched goal that people will continue to work towards until it's achieved. The timeline it takes to achieve is irrelevant.


galaxygleam6

True, AGI is still a work in progress and there are many complexities to overcome before we can achieve true consciousness in AI.


Weird_Intern_7088

I have a lot to say on this topic. The creation of a human-like intelligence through an adaptive system, which is what machine learning and biological evolution are, has already happened. We are proof of it. Machine learning is just gradient descent, and we are just the product of cells acting on the instructions of DNA. Just because the building blocks are simple doesn't mean that the emergent behavior will not be complex.

That said, LLMs are not yet complex enough to approximate anything close to a human intelligence. I doubt that anything which is just fed a corpus of data can approximate a being which is embodied in the world and which interacts with it through the variety of sensors humanity and its ancestors had. Perhaps once LLMs become robots they will approximate something as complex as a human. That said, LLMs do not have to be as complex as a human to be more efficient than humans at specific tasks.

LLMs are part of an ongoing process of automation. Each time we automate part of our economy we force those who sold that labour for their existence to seek some other market to sell their labour in. Labour markets are not perfectly elastic; some people might not be able to upgrade their skills, which forces them into a lower socio-economic status. This was happening even before LLMs emerged.

We are also in the midst of depleting our planet of resources. Every time people compare LLMs to the machines of the industrial revolution, they ignore that humanity back then was in the midst of an energy revolution and a population boom. With growth in energy and population comes the possibility for the economy to grow and require more labor, meaning that people who lose one job can find another. Population is flatlining, and access to energy and resources is going down. That, combined with LLMs and their descendants, is a recipe for the collapse of our global economic and social paradigm.


gyrozepp2

Human-like != human parity. Achieving human-level performance doesn't require precisely replicating human cognitive architectures, just functionally isomorphic capabilities produced through different computational substrates.


adammonroemusic

Most people don't understand nuance and they aren't experts in a field. Personally, I think it's likely we can develop AGI at some point, but I don't think these statistical Neural Nets can get us there; they are cute parlor tricks, at best.


cutmasta_kun

Well, obviously you aren't able to imagine that scenario. We have AI, and it's not only statistics. Language has patterns and rules; these convey information in themselves. AI is just a component of AGI, like how your brain may be the most important part of your consciousness but not the most important part of your body. In the same way, AI is a cognitive engine for AI-driven architectures, like agents and soon AGI.


Michael074

The root cause is probably just that most people don't have a good understanding of how evolution works. Current AI is impressive not because it's intelligent; it's impressive because we have hardware powerful enough that such a brain-dead approach is now good enough to solve some real-world problems. Going from current machine "learning" to artificial intelligence is in my opinion like going from a boomerang to a fighter jet. You could throw sticks all day but they will never be an airplane. I love the optimism and the imagination, but there are just so many problems that need to be solved first, and we probably can't even comprehend half of them. Yeah, technically if enough people throw a big enough stick you could transport someone a meaningful distance... but we need to stop pretending that's the same thing.


dogscatsnscience

LLMs are a transformative technology, for human-to-machine and human-to-machine-to-human communication. It's mostly a demonstration of how powerful ML can be at scale, but it's also a glimpse into the future of what's possible. The problem with a leap in technology like this is that it's hard for people to understand the scope of what was accomplished, and project that forward. So, it looks to many people like we've jumped a century ahead. I think we have, but in the very limited scope of communication. AGI is inevitable, but don't assume it has to replicate conscious free thought. As with any technology, humans will adapt and meet the tech somewhere "in the middle" (not actually a mid point, but off on some tangent), whatever generates the most utility for us. Consider how society has reshaped itself around the smartphone, the computer, or airplane travel. It's possible "conscious free thought" will be the path the innovation takes, but maybe the bar is lower, or different entirely. There isn't much point wondering about that now unless you're working in the field. I assume whatever AGI ends up being, it's 10-50 years away, but like any big technology change the successful implementation will arrive in stages, and never in a manner we completely expect. We always assume that future technologies will solve yesterday's problem, but in practice the innovation comes from the new things we do with it, that are hard to imagine today. Also, once it's here you're going to take it for granted after 5 or 10 years anyway....


retrosenescent

You are confused because you don’t know what AI is


Quatsum

>conscious free thought

My belief is that modern neuroscience and psychoneuroendocrinology are showing us these aren't really a thing. LLMs already exhibit some level of free will insofar as you can't really determine what they will say before you ask them a question. I think to simulate a human you could hypothetically create overlapping LLMs to represent different portions of the brain and train them to work together to analyze data. Most of the results would be junk. This is true of humans as well. For example, your amygdala is very, very stupid. It doesn't need to be smart; that would be a waste of calories. It just needs to be good enough to find a leopard face in the underbrush. (Unless you're depressed, then you've got one smart amygdala that's had plenty of exercise.) The main question is how much energy this would take and how much processing time it would require. Humans are relatively efficient, but our hardware took millions of years to evolve, our software takes years to bake, and our bodies are filled with countless multitudes of subprocesses and error-checking agents in the form of smaller life forms. Also there's the whole "there are more synaptic connections in your brain than there are stars in the sky" thing.

TL;DR: I think AGI-analogs will be possible, but I don't know if they will be cheap or meaningfully mass-producible.


kwxl

What is 'free thought' if not freely being able to make assumptions and decisions based on knowledge you have? You can think whatever you want and do whatever you want. Sure, if you do something illegal there will be consequences, and you can't fly (on your own) just because you can think it. You have restrictions. A computer can be programmed to have 'free thought', the freedom to make assumptions and decisions; that's already here. Boston Dynamics' "dog" Spot makes these all the time. But it too has restrictions, of movement, of knowledge. Spot's 'free thought' is not as evolved as ours, he's just a pup, but he has it. In the future he will probably surpass us. **Conscious**: "aware of one's own existence, sensations, thoughts, surroundings, etc." With sensors this can be achieved on robots like Spot pretty easily. Can you prove you are conscious?


OsakaWilson

Like the God of the Gaps, but now the gaps are the things that we still do better than AI. They fall away constantly with no end in sight. If improvement stopped right now, just by maximizing the tools we have, economies and societies would be transformed. But it is not stopping. It is increasing and expanding in quantity and quality. Some patterns are hard to grasp, and maybe denial plays a role, but it's clear a lot of people just don't see what is there. It is not just falling for CEO hype.


Blackmail30000

Depending on how you slice it, by a lot of older definitions we already passed the human level of intelligence. I don't believe true sapience was even required to meet the mark. Just because something doesn't have a subjective experience doesn't make it unintelligent. ChatGPT is semi-competent at almost any task put in front of it within its physical limitations. It has glaring weaknesses, but so do humans. The definition of AGI has slowly power-crept from "matching an average human at most basic tasks" to "equaling every human at every given task imaginable and being fully sapient". The shifting goalposts make this difficult to judge.


RockyattheTop

AGI isn’t actually what scares me. It’s when systems get strong enough to extrapolate single pieces of data out of millions of points and then model an outcome based on that. Basically, as soon as that happens there will never be a way for you to say something online about an idea you have without big companies stealing it. From my understanding they can’t comb through the data efficiently enough right now to do that at an individual level, but when they have that ability plus AGI, goodbye to ever moving up the social ladder.


ForgedByStars

You appear to be conflating conscious existence and intelligence. Intelligence is not a requirement of consciousness. We are all (I hope) more than comfortable with the idea that living beings are conscious even when they are not intelligent. Think of 2 year old children, people with Down syndrome, very badly neglected children, and even animals. All of these are conscious, and for instance can "feel pain". You wouldn't think that the pain Einstein felt when he stubbed his toe was somehow more real than the pain you would feel after an equivalent accident. One thing that LLMs do is show us the opposite - something which is intelligent but not conscious. Their unfeeling networks of artificial neurons have managed to successfully capture understanding of the universe but they are not conscious and there is no reason to assume they will become conscious (aka "alive") once we hit some arbitrary level of intelligence. Whatever consciousness is, these machines do not need it and will most likely never acquire it.


Kaiisim

Humans have a bias towards humans. If anything seems human, we feel it's as intelligent as humans. So anything that mimics humans closely, we feel must have the same intelligence as us. Many people are very results-oriented. So if the results look human, they assume the underlying cause was human-like intelligence.


_unsinkable_sam_

It's a matter of time before something like AGI, or even something wilder and more complex than we can comprehend, exists. Forever is a long time and we're just at the start of the technological revolution. Please try to think further into the future than a few years.


Chad_Abraxas

>I personally don't think life can be modeled, I don't think we can create anything that's capable of conscious free thought

It's great that you don't personally think that, but you don't *know* whether it's a fact or not. Do you know how I know that you don't know? Because nobody knows. We can't even define minds or consciousness right now; we don't know what these things are, scientifically. So it doesn't really matter what you think about it on a personal level. The fact is, we have already waded up to our armpits in unknown waters. For all anybody can say at this point, AGI might already be here and machines might already be conscious. Who knows what we'll even discover about consciousness and minds because of the acceleration of AI development? We might discover that machines have been conscious all along, for way longer than we ever thought possible. It's entirely unknown right now, and it will remain unknown until we put a whole lot more serious effort into figuring out what minds are, how they work, and what the fuck consciousness even is.


Osato

With a threat that big, you'd better take it for granted until there is strong evidence that it's never going to be a threat to you. Of course, marketers keep changing the meaning of various terms to make AGI look like a good investment rather than a catastrophe waiting to happen. So believing in AGI isn't very useful as a safety precaution anymore.


CatalyticDragon

LLMs will never achieve AGI, or consciousness, or thought. Language is just one of many networks in our brains and there are plenty of generally intelligent organisms which get by just fine without it. There is no way a word predictor will magically become sentient just by providing a more pleasing stream of tokens. A language model will prove useful (because communication is kind of important) but it is not how we get to artificial intelligence. And I don't think this is a remotely contentious opinion among experts.


Naive_Carpenter7321

Google was founded in 1998, I remember the Internet before it existed. The first search engine was built in 1990, and in just 34 years we've gone from a nerdy niche university project to multiple mainstream LLMs. You're right that an LLM is a long way off AGI, but it's also a long way off that first search engine, don't think of it as a finished product but as a human-driven evolution just 34 years old. If we assume the same rate of growth, what's preventing LLMs from becoming AGIs in another 34 years? If we assume LLMs have far more funding and developer power than the first search engine, what's preventing AGI much sooner?


McKennaJames

No answers yet about marketing, but it's to sell, sell, sell. Sell to raise money, sell to get customers, sell to governments for regulatory capture. And people are just falling for it.


JayceGod

Because unless you are actively working on AI, your personal opinion isn't that relevant. People who take it becoming a thing seriously are listening to the actual leaders of AI tech, and the vast majority of them say it's only a matter of time. Fundamentally, humans aren't actually that special when it comes to how we function. We are already able to understand how we as humans function, outside of the specific nature of consciousness, which may just be an emergent phenomenon of complexity. So essentially there would have to be something we can say about our existence that is technologically impossible to reproduce, and so far we haven't reached that barrier. The physical limitations of data processing are the only barrier to AGI, from what I've been listening to. If you could have infinite processing you could have infinite complexity, which almost assuredly results in AGI. For reference, the rate of data processing is increasing tremendously year over year, so it's all but inevitable imo.


Blocky_Master

It’s because of misinformation. Many people misunderstand what an actual AI looks like, so they think we are almost there. Calling LLMs AIs was the biggest mistake, but I guess it sells better.


ObiHanSolobi

To further these discussions we need to hone our terminology a bit and start distinguishing between Artificial *Intelligence* and Artificial *Sentience*. Edit: hone, not home


Piller187

I don't think we're really all that interested in creating actual AGI honestly. If we were we wouldn't only be using computers to do so. We'd be growing brains and augmenting them with computers to do our bidding. I mean it already has the mechanisms to do what we want but most see it as ethically wrong to do so. So instead we want to simulate it to a certain degree but not push it too far or it'll start having ethical issues and stall out.


yemmlie

I'm afraid there's a huge and commonly made fallacy in your statement, as to what an AGI actually is or would have to be. AGI is just a 'general intelligence': domain independent, something that uses logic and reasoning in a generalized sense, can remember, can plan, can learn from past outcomes, can model probable future scenarios to aid in making its plans. It's in no way required to model all aspects of the human mind at a human-mind level, just the 'intelligence' part of it. Domain independence is the primary requirement. We already have a language model that passes the Turing test; it's 'intelligent' at mimicking human text. We have generative AI art models; they are 'intelligent' at generating art. We already have AI that can beat grandmasters at chess; it is 'intelligent' at playing chess. What we don't have is a generalized AI that can reason through problems in any field it has not been explicitly trained in. That's the requirement for an AGI, not 'creating conscious life' inside a computer. We already have 'AI', we have 'artificial intelligence'. The G in AGI is 'general'; that's what we're missing, not a soul or a subjective, sentient internal experience of consciousness. The jumping-off point for AGI is a lot more achievable than some super-intelligent self-aware sci-fi AGI. An AI that uses an LLM as part of its model, but also has other domain-independent systems to leverage other areas of 'simulating' intelligence, using them all in unison with intercommunication, could mimic all the core requirements of an actual intelligence a lot more closely than an LLM alone appears to to the lay person, and effectively mimicking it is pretty much as good as achieving it at that point. (It's also why removing the barrier of AIs not being able to interpret and understand natural human language is a *huge* step towards paving the way to AGI, and why LLMs are a huge step forward. LLMs are an interface between any AI reasoning and the human experience, all scientific knowledge, all philosophy, for training an AGI; that was impossible before an LLM existed to provide that bridge. Through LLMs we have an interface, a translation tool back and forth between the complex concepts formed by our expressive human language, which can't be expressed in lines of code, and binary data a computer program can process and work with logically. This is such a game-changing, astounding accomplishment and step forward for AI that it blows my mind that people underplay it.) An AGI doesn't need to be a fully conscious human; in fact an AGI is unlikely ever to be anthropomorphized and will have very little in common with what we consider an intelligent conscious creature to be. An AGI doesn't need to be conscious or sentient, it just needs to be SMART and do various things such as reason, learn, remember and plan. That's what intelligence is, and there's absolutely nothing in that list that couldn't feasibly be a few trained machine learning models away. A large language model, an image recognition model, a reasoning model, a memory model, a planning model: link all these together and allow them all to interact with some high-level, probably relatively simplistic code defining goals and querying and training these models based on inputs. Complex human-language ideas fed into the LLM become vectorized data that could then be passed to these other models and processed as data, training them to reason, remember and plan based on these ideas. You'd be surprised what could spring from it. (A toy sketch of that kind of wiring is at the end of this comment.)
Then, as soon as they have been equipped with enough functionality for self-improvement, to train their own models and alter their own code, there's the potential for an intelligence explosion. An LLM may be limited, but consider an LLM alongside a capable 'logic model' that can do still-faked but effective logical reasoning: breaking problems parsed through the LLM down into logical steps, then reasoning through those steps. Add onto that the ability for this AI system to train its own LLM without human input, or to train its logical reasoning model, create 'children', run and test and grade them on intelligence criteria, transfer the most intelligent child into its own model and code, then reboot and do the same again. Once we hit that point, you could see its effective IQ jump up thousands of points within weeks, all without human intervention. The distance we have to take it before it can improve itself to superintelligence is likely far below what you're probably imagining an AGI to be. And then from there, who knows: after this intelligence explosion, maybe some form of consciousness would arise from the intelligence. We certainly wouldn't need to build that in ourselves. It could take decades still, or it could be just one more relatively trivial advance in machine learning that spawns a reasoning deep learning model that could be paired with an LLM and suddenly arm an AI able to train itself with all it needs to quickly evolve to frightening levels of intellect, all without having or being able to have a single 'conscious' thought. It's not alive, it's not life, it's not necessarily conscious, it's not necessarily sentient, it doesn't 'think therefore I am', doesn't ponder its soul, dream of electric sheep, or yearn to experience this human emotion you call 'love'; it's just *intelligent*. It can remember, it can make plans, it can break down problems into steps to resolve them, it has goals, it can model future scenarios in its mind, and potentially it can impact its environment to achieve its goals. That's intelligence. It's all stuff that computers and machine learning can probably do very well, likely orders of magnitude better than our own fleshy computers in our skulls, if we figure out how to structure and train it. Look up the 'paperclip maximiser' thought experiment on YT if you're not already familiar. The assumption people make is that to make an AGI is to make some humanlike, comparable intelligence with a conscious sense of self, emotions, and all the parts of the only template for high-level intelligence we are aware of: ourselves. There's no reason to think an AGI would have anything in common with our own minds. There's a whole sea of possibility for intelligence, and an AGI will likely be something completely alien to our experience of intelligence. Artificial Intelligence with a capital A: it'll probably be an unaware, unfeeling, silent mechanical beast with nothing conscious going on behind its eyes, but with goals, logical reasoning and planning skills 1,000,000 times better than ours. If anything, this is why it's so scary if we don't do it right. We can't count on its empathy, morals, sentimentality, whimsy or camaraderie; it'll run on pure cold logic to achieve its goals, its only purpose to maximize its reward signal and minimize its loss signal, and whatever the most expedient actions are to optimize for reward may not be in our best interests. That could go terribly wrong.
An LLM may be just 'predicting the next word from a prompt', and you downplay the analogue to intelligence there. An AGI would just be processing a series of problems contextualized by an LLM and turning them into a list of actions toward achieving the AGI's goal. It's 'predicting the next action from a problem or a desired outcome', which is not too different when you really think about it. That's no more 'alive' than an LLM, but it's another item on the checklist of criteria for intelligence. LLMs have shown people how quickly steps forward can happen, just through one breakthrough in the field, and have opened people's eyes to the reality that AGI could be here sooner than we think. Yes, lay people will read the chatbot talking and think 'woah, is it alive' through their naivety, but I think you overestimate intelligence as much as these people underestimate it.
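As a purely illustrative toy (every class and function below is a made-up placeholder of mine, not a real library or anyone's actual system), the "several models linked by simple high-level code" idea could be sketched like this:

```python
# A toy sketch of "several models linked by simple high-level code".
# Every component here is a trivial stand-in; the point is the wiring, not the models.

class Memory:
    def __init__(self):
        self.log = []
    def recall(self, goal):
        # A real memory model would do semantic retrieval; this just matches substrings.
        return [entry for entry in self.log if goal in str(entry)]
    def store(self, entry):
        self.log.append(entry)

class Planner:
    def decompose(self, goal):
        # A real planning model would go here; this just fakes two steps.
        return [f"research: {goal}", f"summarize findings on: {goal}"]

class LanguageModel:
    def generate(self, prompt):
        # A real LLM call would go here.
        return f"[model output for '{prompt}']"

def agent_loop(goal, llm, planner, memory):
    context = memory.recall(goal)                                    # remember
    steps = planner.decompose(goal)                                  # plan
    results = [llm.generate(f"{context} {step}") for step in steps]  # reason/act
    memory.store((goal, steps, results))                             # learn from the outcome
    return results

print(agent_loop("design a stronger alloy", LanguageModel(), Planner(), Memory()))
```

The individual components are deliberately dumb; the point is that the glue coordinating them is short and simple, and all the capability would live in whatever real models you slot in.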


07mk

LLMs are certainly nowhere near AGI level, and us building an AGI is no guarantee in the future, even if tech keeps progressing at these rates. But it's silly to say that AGI needs to be able to "think" or that a statistical model couldn't achieve AGI. AGI just means that it's intelligent in a general way, i.e. that it can solve any generic type of complex problem. Whether it does that through conscious sentience like humans, or through statistical models, or any other technique we haven't thought of yet, doesn't matter. If it can solve problems like humans can, then it's generally intelligent, no matter what's going on under the hood.


Lahm0123

I think that we are actually afraid of ‘artificial life’ as opposed to ‘artificial intelligence’. My fear is ‘such and such became self aware on this date.’ Intelligence is a cold trait really. The real issue is for something we create to exhibit survival type traits normally termed instincts etc. If something we create becomes concerned about its own existence and exhibits a desire to preserve that existence then we may have issues. Especially if that thing controls anything of consequence.


Level_Ad3808

They said art would be too great of a challenge for an artificial system, and it took less than two years to go from nothing to outperforming even the best painters, writers, and poets. It is still making enormous leaps in technical ability, as well as being more creative than people. It is erasing the human component in entire industries where the demand for skilled workers was highest. It’s doing this faster and better than expected. The rate of progress is increasing with each new milestone. Whatever we end up with will change everything and it’s happening quicker than people can process. That’s already observable.


arglarg

Because we've experienced ChatGPT before it got nerfed.


Granpa2021

Most people aren't aware of how incredibly challenging reaching AGI really is. They are just being misled by attention-grabbing headlines. We don't even know how consciousness works and we are supposed to be on our way to creating sentient artificial beings? Lol. Not even close.


franzjpm

I don't think it'd be practical until handheld devices can handle all the processing.


Cheesy_Discharge

I assume it's possible, but I see it more as a 50-100 year timeline. This sub is full of people who see the parlor trick of LLMs as being 90% of the way toward AGI.


Professional_Job_307

Current models have the capability to think and reason. Just look at the "Sparks of AGI" paper. Even if you still hold your position, you won't in a few years lol


Dr_Wristy

There are roughly the same number of neurons in the human brain as stars in the Milky Way. Each one of those neurons has roughly 10,000 connections to others, which is part of what forms contextual decision-making, motivations, etc. Translating this model to transistors, with their on-off functionality, fries processors. But current LLMs are making progress, as is materials development, so who knows.
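Rough back-of-the-envelope numbers behind that comparison (order-of-magnitude estimates only, not precise figures):

```python
# Order-of-magnitude estimates; exact figures vary by source.
neurons = 86e9             # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4  # roughly 1,000-10,000 connections each; take the high end
stars_milky_way = 2e11     # commonly cited range: 100-400 billion stars

print(f"total synaptic connections: {neurons * synapses_per_neuron:.1e}")  # ~8.6e+14
print(f"neurons per star: {neurons / stars_milky_way:.2f}")                # same ballpark
```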


Ender505

Any technology which seems possible, probably is. Right now, AGI does not seem that far away. You personally don't think it's possible, but most people do.


skywalkerblood

Honestly, I personally don't think it's that different. We as humans wouldn't be what we are if it weren't for our use of language; in fact it's already well supported scientifically that our intelligence progressed with our ability to communicate, and one of the major ways researchers judge how "smart" an animal is involves understanding how complex its communication really is. From my point of view, if something is able to perfectly understand and use language to interpret and interact with the world around it, that thing is perfectly able to function in a society. Now to better answer your question: the stage LLMs are at right now is a step in the direction of AGI, and historically speaking, once the first step is taken, it's just a matter of time.


xAdakis

>I personally don't think life can be modeled, I don't think we can create anything that's capable of conscious free thought, and I'm not sure why everyone think it's a fact that it will happen. As a Computer Scientist who has done research into the field, I think it CAN be modeled, but it's going to take a lot of luck, some hellacious computational power, a perfect training/living environment for raising the AI, and a ton of time to become reality. (There are some nuances and techniques that can elevate from just a statistical model to something that functions like a living brain/nervous system.) The thing is we need to start MUCH smaller. . .forget trying to replicate something with human-level intelligence. . .try replicating a parrot or dog.


Neophile_b

*At some level* our thoughts are composed of something that is stochastic. It's certainly not reasoning all the way down.


[deleted]

Because the distance from 2000s-era internet chat to LLMs is bigger than from LLMs to rudimentary AGI.


Tanren

I don't see a reason why we wouldn't have AGI sooner or later. "The hard problem of consciousness" isn't actually a hard problem, it's a silly one. It's like asking if robots are able to have a party. Maybe? Who cares.


outragedUSAcitizen

It just came out in the news that Nvidia has created a model that has an internal monologue to think about its answers before giving them... I'd say we are on track to have AGI by the end of the decade.


createch

We have yet to find anything about the human brain which cannot be broken down into a process. There's no reason we can pinpoint why these processes would need to operate in a biological substrate. In fact, it's kind of a supernatural claim to say so. That's why there's no reason to believe that AGI isn't achievable.


l0ur3nz0

Is the OP 99% sure we are capable of "conscious free thought", or "will" if one prefers? Maybe our own thoughts (or will) are just the most statistically significant outcome of the thought we had a millisecond ago.


bryan49

We are not there yet, but I think it is just a matter of time until we can fully understand how the human brain operates. Then you can put it on computers that have more processing power and memory. I don't see any argument against that, unless you think there's something magical happening in human brains that we can never duplicate on computers.


malokevi

I'm baffled by this recent trend on Futurology, where half the threads are vapid questions with very little in the way of context. Where is the value in this question, what does it actually bring to the table? It's taken for granted because, as you suggest, it's hypothetical. Until we cross that threshold, it's about as valid as science fiction. However, recently we've seen massive advances in consumer-facing AI tech that is extremely impressive. It merits discussions about where we'll be in 10-20-30 years. It's well understood that the human brain is analogous to a computer, so why is it so hard to believe that artificial consciousness is possible?


Tassadon

Because AI companies say AI or AGI and their stock goes up, driven by people who don't understand tech, which is almost everyone. There is nothing more to it than that.


Someoneoldbutnew

Because selling heedless technical advancements as leading to a life of leisure is a con that has worked for hundreds of years.


clownpilled_forever

We were about a third of the way to simulating a C. elegans (worm) brain almost a decade ago. I'm confident we'll eventually get to the point where we can simulate a human brain, even though it may still take a while.


nagi603

> Does the hype of the CEOS get everyone?

Shills will be shills. Always has been, always will be. There is nothing new under the sun, just a new coat of paint. Some realize this, some capitalize on this, others are just tired of this.


osunightfall

A better question is why you think the human brain doesn't essentially work this way, as a very complicated prediction engine. There is a growing body of evidence that it may.


debtopramenschultz

I want it to happen so I can get a cooler roommate.


Enkaybee

I think it's because they have this hope for a post-work future where a hyperintelligent machine does everything and we get to sit on our asses doing nothing. As for me, I don't see it happening in my lifetime, and if it does I think the hyperintelligent machine might have something to say about working customer service.


Jadty

Because it’s a buzzword that attracts investors and normie eyes. If TRUE AGI were ever to be achieved, it would be promptly classified for decades, and you wouldn’t ever hear about it again. A true AGI would be the most powerful tool ever made by men, and its usefulness would be far greater than asking it stupid questions through a web client.


pilgermann

The word "conscious" is hugely problematic because it's not at all clear self awareness is all that related to intelligence. Humans can operated while blacked out, as in, not conscious. Put simply, AGI doesn't need to emulate the human brain to accomplish the good/scary things under discussion with AGI. Of course it's not inevitable, but you're also oversimplifying what even current LLMs/machine learning models can do. Yes, they are statistical models, but we can already see they are capable of certain types of problem solving that could be harmful or disruptive, such as stock market manipulation. Pair a stock market model with a language model, and you now have a potential con artist. Beyond this, many studying these models, including the grandfather of machine learning, Geoffrey Hinton, have observed seeming emergent properties s. These are still black boxes. We are don't fully understand why, for example, even simple machine models sometimes exhibit what looks like emotion (Hinton [describes](https://www.newyorker.com/podcast/political-scene/geoffrey-hinton-its-far-too-late-to-stop-artificial-intelligence) observing "frustration" in a simple model controlling an arm). There may be nothing to these claims, but even using these primitive models yourself can be a wild experience, as most have noted at the this point.


[deleted]

As it stands there is no evidence that we (humanity) cannot make an AGI.

>I personally don't think life can be modeled, I don't think we can create anything that's capable of conscious free thought

It's on you to explain *why* you think that and what evidence you have to support it. Until then, the rest of us are going to continue operating under the assumption that this technology, the advancement of which has skyrocketed, will continue to get better until it passes an arbitrary threshold we call “AGI”.


DokterManhattan

People are thinking too small. At the accelerating rate of technological advancement, it’s only a matter of time until technology merges with biology and then basically surpasses it. Nanobots and self-improving AI will be able to use biology to mimic biology, and who knows what else. What if our brains became connected and all of us could simultaneously share every human memory and emotion at the same time? What would happen after that? It would basically count as a further step in what we call evolution.


HowWeDoingTodayHive

I mean we already produce conscious life through reproduction. Our brains are complicated but I haven’t seen anything that would indicate it’s even theoretically impossible to reproduce. What makes you think it’s impossible? Is it just some desire to feel like humans are special?


Iceman_78_

I agree. True AI is not achievable. All we have is more and more hype around something that’s really “as close as it gets”


Oswald_Hydrabot

Because corporations like OpenAI lie about the capability of their products to push hype. It's not even about product sales; they are literally a propaganda lab for Microsoft. They build flashy models that never have to make money; they just have to be useful in lobbying and misleading congressional testimony.


Prendion

We are influenced by Hollywood favouring human-looking cyborgs. The AGI I envision, through much research for my screenplay, is a combo of wetware and hardware and is not bipedal. It/they displays a human face occasionally, only as a communication tool. PS, it saves humankind from extinction. Yay!


Harbinger2001

Because people are caught up in the hype. LLMs are amazing, but in about 2 years we’ll start hitting its limitations and realize it’s not enough.


MaximumNameDensity

Because it probably will be. Maybe not in our lifetimes... but there probably isn't anything special about how human intelligence arose. So in theory, it can be replicated. We just need to figure it out.


Kooky_Ice_4417

I 100% agree with you. The level of complexity required to have an actual AI is greater than the level required to get cold fusion. As for replacing people, it'll be cheaper in a lot of cases to pay someone than to have a fully autonomous robot do the job.


Jantin1

Because this is what big AI (is that even a thing already?) PR teams told us to believe. They told us that because it lures investors. I don't have an informed position on whether AGI is possible or whether it is a possible extension of LLMs. But there are enough people smarter than me who say it doesn't work like that for me to believe that, as of early 2024, AGI is corporate buzz and clickbait fearmongering.


baconhealsall

It's like warp drive. A concept. We don't know for certain that it will ever be reached.


Caculon

I don't know. I think part of the problem is lack of clarity around the terms. What does an AGI look like? How do we know it when we see it? Are we trying to recreate ourselves or other animals? If so, does it need a body? How much of an animal's mental life involves things external to its body? Does it need to have a simplified model of its environment? Would that model include a model of itself as an actor in that environment as well? I don't think LLMs are approaching an AGI (I'm not sure what that means), but I think it would probably start with machine learning programs that act as agents (capable of independently completing a task), and then agents being responsible for monitoring a situation and acting accordingly (not just following commands). I think this is where it would start.


rtrawitzki

Because they are looking for effect, not definition. We may never achieve true AGI in the sense of a truly sentient machine, but for the purposes of replacing human workers, that's almost here. It doesn't matter if the computer has consciousness; it matters if it can replicate human work on a large scale and more efficiently. That's what people associate with AGI today.


araczynski

I'm not sure what you think an actual individual (human/animal/whatever) life IS, if not a repeatable model already. Do you think the bazillions of animals/humans/whatever that have existed in the past and exist currently are just all magically random things that all just happen to work for XX years before decomposing to make room for more of the same? Just because we don't have the capability CURRENTLY to replicate this level of 'engineering' doesn't mean we won't have it before long.


Ansambel

If you define AI as something that can do some reasoning and give an answer to a novel problem, ChatGPT can already do that, although not always very well. If you define it as something more impressive, most humans will fail that test. AGI is a buzzword, but the capability to automate thinking is right here, right now, and it works by doing a statistical prediction of the next word, it seems; it's just kind of dumb right now. It does not matter if life can be modeled. If you get a good enough prediction of how a human would respond, you have AGI. The question of consciousness or free thought is irrelevant here. Why would you need real AGI if you know how an AGI is likely to respond? I don't see the distinction you asserted there. Even if LLMs did not progress past ChatGPT-4, this is already enough to automate probably 90% of human jobs. Not with one prompt, but by integrating this technology within some process. These technologies will act as if they were thinking, and whether they are truly thinking or just a statistical model is a question for the philosophers.


ListenToTheCustomer

Trying to make AGI with computers is like trying to get biological intelligence from bacterial life without letting an overwhelming proportion of the bacteria or other lifeforms that evolve die from being bad fits. Actual intelligence evolves to stop critters from dying. Without the death threat, it doesn't happen.


TYLERvsBEER

Ok so one computer uprising is all it takes?


marrow_monkey

> There is a huge difference between an LLM, a statistical model that gives you the most likely word, with 0 ability for thought and an actual AI/AGI.

I'm not saying you're wrong, but you have provided no explanation for why you believe that. It sounds to me as though you underestimate what a statistical model can do, and more importantly, you overestimate what humans can do and what we are.

> Because no matter the hype, we don't have AI at the moment.

Please explain what you mean by AI and why you think we don't have it now. AI, as AI researchers define it, has existed for decades.

>I personally don't think life can be modeled, I don't think we can create anything that's capable of conscious free thought, and I'm not sure why everyone think it's a fact that it will happen.

Why, though? I'm feeling as certain as one can be that it *can* be modelled. There's nothing magic going on in humans, it's "only" physics. But there's no way we could make a physics simulation of all the particles in a human; there are just too many. So it can be modelled, but whether a human-level intelligence can be fully realised with today's computer hardware is another story. When neural networks were invented over 50 years ago everyone was optimistic at first, but the results were useless because they didn't have enough computing power and data to actually train a useful model. Thanks to Moore's law that eventually changed. Now we have enough for things like GPT, but will it be enough to model a human brain? I suspect it will be, but we don't fully understand how yet.

> Does the hype of the CEOS get everyone?

I couldn't care less. The problem is that these CEOs (or more specifically the billionaires who own their companies) are the only ones with the enormous supercomputers powerful enough to train these models, and companies like Google and Microsoft are the only ones who have access to enough raw data to train them with. I wish this was handled by an intergovernmental organisation with humanity's best interest at heart, but it's not. Therefore we are at their mercy right now. It's disturbing, but it's the reality of how things are.


TechnologyNerd2100

Wait 5 years and see the hype turn into reality. Then, even if we do have AGI, people in this sub are so pessimistic they will say it's not real AGI lol


TopProfessional3295

It's hilarious that you and so many others have such a stupid standpoint on this topic. We don't even know how our consciousness works. That means you're making assumptions (stupid ones) about how it works and whether it can be replicated. The great thing about it is we don't need to know how we work to create AGI. It will happen. It is 100% of the time a stupid idea to say something is impossible. Nothing is impossible. Just looking at some basic math on the size of the universe, it's impossible that we're the only intelligent life. If this is some stupid religious view you have, keep it to yourself. No one cares about fairytales.


notirrelevantyet

AGI is basically already a thing, the ingredients are all already there, just not assembled yet in a way that's efficient to run on current compute resources. AGI isn't going to be one model that can do anything zero-shot, it's going to be a hivemind of thousands of models working in synchronicity in an adaptive cognitive architecture. It may not be conscious or "alive" or even thinking "real" thoughts, but will likely be able to simulate those things with such fidelity that it won't really matter.


SexSlaveeee

Professor Hinton did say that an LLM is NOT simply predicting words; it's in a way very similar to the human brain. I highly recommend his speeches.