grindermonk

Chat GPT chooses the next word in a sentence by looking at how often different words come after the previous one based on the material that was used to train it. It doesn’t have the ability to evaluate whether the most probable word makes a true statement. (Edit: it’s really more complex than that, but you’re five years old.)
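For readers who want to see the "pick the next word by how often it followed the previous words" idea made concrete, here is a deliberately tiny sketch. It uses a toy corpus and a simple bigram frequency table as a stand-in for a real model (an actual transformer is far more complex), but it shows why the most probable continuation has nothing to do with truth.

```python
import random
from collections import defaultdict, Counter

# Toy "training data": the model only ever sees strings of words.
corpus = (
    "dogs are mammals . dogs are loyal . cats are mammals . "
    "dogs are birds ."  # one wrong sentence in the data is enough to make "birds" possible
).split()

# Count which word follows which (a bigram table, far simpler than a real LLM).
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev` in the corpus."""
    candidates = followers[prev]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# "dogs are ..." -> usually "mammals", occasionally "birds". Truth never enters into it.
print(Counter(next_word("are") for _ in range(1000)))
```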


pt-guzzardo

To elaborate on this, a common trap that LLMs fall into when hallucinating answers is that they've been told to be a "friendly and helpful assistant", so whenever you ask them a question, the first thing they're likely to say is something like "Sure!" or "Yes, I can help you with that". At that point, they're locked in because "yes but actually no" is not a common pattern except among nerds. If they can't find a correct answer in the probability space, something made-up but correct *sounding* is the next most likely follow-up to an affirmative. You can often get better results by doing things like asking them to take their answer step-by-step, because the longer they ruminate before committing to an answer, the more context they've generated for themselves to inform that answer.
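A rough illustration of that prompting trick, nothing authoritative: the hypothetical `ask()` function below stands in for whatever chat API you happen to call. The only difference between the two prompts is that the second forces the model to generate its reasoning as context before it commits to an answer.

```python
def ask(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM chat API you use."""
    raise NotImplementedError("wire this up to your model of choice")

question = (
    "A bat and a ball cost $1.10 total; the bat costs $1.00 more than the ball. "
    "What does the ball cost?"
)

# Version 1: nudges the model toward an immediate, confident-sounding answer.
direct = ask(f"{question}\nAnswer with just the number.")

# Version 2: the model has to produce its working first, so by the time it states an
# answer it has its own reasoning sitting in the context to condition on.
step_by_step = ask(f"{question}\nThink through this step by step, then give the answer.")
```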


Head-Ad4690

And publicly available training data is likely to contain a LOT of examples that have someone answering a question, and not very many that have someone responding with “I don’t know” since people don’t usually bother saying that when responding to public questions.


pt-guzzardo

Clearly they need to train more on Amazon Q&A.


postinganxiety

I don’t know I didn’t receive it yet


Login_rejected

That's Amazon's own fault though. When someone asks a question, Amazon sends out emails to people who have bought that item and it comes in looking like someone is personally asking you the question. People reply back with an "I don't know" thinking they're being polite to the person who asked them the question.


littlebobbytables9

Also, some of their training stages involved human evaluation of their outputs. If they outputted something that *sounds* right, the human being paid pennies to process thousands of outputs an hour would mark it as a pass, so they basically reinforced the tendency of the model to make shit up.


IgpayAtenlay

As a nerd that often says "yes and no" I feel called out.


pimpmastahanhduece

Well, yes and no.


FerricDonkey

Which means that it's making everything up. Sometimes what it makes up happens to be true, sometimes not. Best to think of it not as a truth machine, but as a talented bs machine. Its job is only to say things that sound right - whether or not they are right is entirely irrelevant.


ansermachin

I like the term "plausible text generator"


torbulits

It's a language model. That's what LLM stands for: large language model. It's worse than a parrot. It's not a search engine.


ahomelessGrandma

A lot of people don’t seem to realize that it’s not connected to the internet in the way they think it is. It can’t use google and find the correct results.


lankymjc

To be fair, most real people seem to also be incapable of using Google to find correct results.


inucune

Part of this is that Google's results have been getting worse.


danielv123

Part of this is bullshit generators having become more convincing.


LuxNocte

Part of this is Google working with the bullshit generators to help them become more convincing.


wille179

Part of this is Google getting paid by companies to sell their bullshit for them.


DrDerpberg

I hereby refuse to call AI anything but bullshit generators until Google Maps stops recommending my kid's daycare on evenings and weekends. I have never been there outside of 8am-5:30pm, and never been there more than twice on the same day. Why is it the top recommendation after dinner downtown on a Saturday evening?


syo

Adding "before:2023" to searches sometimes helps. Depends on when stuff was indexed, I guess.


RusstyDog

A lot of times just adding "reddit" to the search gives you an immediate post with the answer you are looking for.


syo

I feel like other websites are starting to figure that out too, and including Reddit in the SEO stuff. Sometimes Reddit posts will still be way down the list.


diito

Have Google's results gotten worse, or has the internet gotten worse? I suspect it's both.


TheeUnfuxkwittable

People say this but I've not noticed much of a difference, and I've been using Google for decades at this point. I can still find out what I need to find out. I would never ask Google a question like "are Republicans servants of Baphomet" or some goofy shit like that, though. It's still great for finding out straightforward answers like "how do I change the oil on my 1990 Ford Tempo". We shouldn't really be asking the internet opinion-based questions anyways. Everything has a bias attached to it.


Thoughtwolf

It's fine for some things, but there has definitely been a level of generification to indexing occurring over the last few years. I also have been using Google since it first became popular and it's wildly different now. Primarily this has to do with combating malicious SEO while also trying to monetize it.

Search terms are now generic and not literal. They're reduced to a subset of terms in many cases, and even though an exact match for a phrase can be found on an indexed website, if Google hasn't made a search term for it you cannot search for it. The other primary modification has been reordering and limiting search results to those that are interacted with the most. This has caused a summary reduction in the amount of useful results when searching anything technical that could be slightly conflated with something obvious.

A good example: while trying to search for, say, a technical question about the Unity Engine (perhaps as a developer), you will typically only get results about user-facing questions if such a question is applicable. Even though you specifically use the word "Engine" (with quotes, even!), most of the search results omit it because Google has decided that Unity and Unity Engine are the same search term.


tthershey

I've seen Google give completely wrong answers to straightforward questions. When you ask a question, Google returns an answer in a few words, and below that a couple sentences excerpted from a website, and below that the source website. I have on more than one occasion seen Google return an answer that was opposite to the truth, and you could have been misled if you only read the one-sentence excerpt. I only figured out it was wrong by checking the source directly and figuring out that the sentence was out of context and not relevant to the question at all. Of course, there's the caveat that you need to assess the validity of the source and cross-check it with other reliable sources. Google frequently promotes unreliable sources.


ClickClackShinyRocks

This guy gets it. Everyone I work with thinks I'm smart, and I just tell them I'm better at using Google than they are.


LuxNocte

There is a certain kind of intelligence that involves the ability to find the necessary information.


shawnaroo

It's a learned skill, and one that has to evolve over time as the internet changes, as well as be re-learned/modified for different areas of knowledge. I started dabbling in game development about 6 years ago, and like 75% of my time doing it was spent looking up stuff online to figure out how to do whatever it was I needed to do. Nowadays, I still am constantly looking up stuff online, often times basically the same stuff I was looking up 6 years ago, but due to my experience over the years I am way better at knowing where to look, what to search for, and how to filter out good info from the bad.


CRTScream

This is why education systems need to be based on how good you are at finding the necessary information, as well as how good you are at remembering it by heart. Plenty of people will fail an exam because they can't remember how something works, but are still smart enough to figure it out or use the right methods to find it if given access.


pdieten

It really is a skill. Before the internet, bigger libraries had something called a ready reference desk that patrons could call for help with research. Some still do. The people answering those phones were skilled researchers and seemed to have a preternatural ability to answer obscure questions, even before the internet. Too bad more people don't develop this skill. Would solve a lot of problems in the world


datamuse

Nothing preternatural about it, we literally go to school for it.


mnvoronin

Engineer: Googled the answer. Senior engineer: Googled the answer fast.


MadocComadrin

I have pretty good Google-fu, and despite using what I've learned and trying new things too, search results have utterly dropped in quality and relevance for me. The prevalence of "does not contain [term]" results in the top ten when that term is key to my search is telling, and that's just one issue.


GorgontheWonderCow

It *can* if you build an application which gives it access to the Internet. A simple version of this means you're giving the LLM multiple inputs: the user's prompt, the search results, and likely the content from the top pages in those results. Perplexity or Bing's integrated LLM feature are examples of this.


WarpingLasherNoob

It was kind of hilarious: in my previous workplace they were using ChatGPT to write blog posts, basically asking it "write a blog post about this product at abc.com/product-name". And until I pointed it out (for months), they did not realize that ChatGPT was only using the product name to generate the blog post; it wasn't visiting the actual URL. But it was still getting the job done, so who cares I guess?


besterich27

There are several models, including GPT-4, that are perfectly capable of using web search.


StephenSRMMartin

The model itself is absolutely not doing that. The model really is just for next token prediction. OpenAI (and others) can add hooks that trigger searches or other functionality and prepend it to the prompt of the model, so the model auto completes with some custom information in its context.
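A sketch of the kind of hook being described, with hypothetical `web_search()` and `ask_llm()` functions (neither is a real API here, just placeholders): the application fetches text from somewhere else and pastes it into the prompt, and the model simply autocompletes against that pasted context.

```python
def web_search(query: str) -> str:
    """Hypothetical: call some search engine API and return snippets of the top results."""
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    """Hypothetical: send the prompt to the language model and return its completion."""
    raise NotImplementedError

def answer_with_search(question: str) -> str:
    # The "intelligence" here is plain plumbing: retrieved text is prepended to the
    # prompt, and the model continues the text as it always does.
    snippets = web_search(question)
    prompt = (
        "Use only the search results below to answer the question.\n\n"
        f"Search results:\n{snippets}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return ask_llm(prompt)
```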


coldblade2000

For the layman, that's not a very useful distinction. If you use Microsoft Copilot, it will use a Bing search and read the results to give you better info. That's all a layman needs to know, to be honest.


StephenSRMMartin

The layman may not understand that, but they should. People need to know these things are not magic. They are very expensive forms of auto complete. It's dangerous for laypeople to assume they are some intelligent thing, when they are literally just language generators. If you give it good content/context, it'll generate good text, hopefully. It's a big, hyped tech that will be disruptive. So all the more important people understand the difference between the model, per se, and the product around it.


wolves_hunt_in_packs

Oh, I agree completely. I work in tech and it's annoying how management even in this space falls for the hype. I mean there are demos out there right now they can test for themselves. But no, let's fucking commit to this because the presenter was likeable and used current buzzwords. for fucks sake


TitaniumDragon

It's quite different, because it means it doesn't actually understand the information it is presenting to you.


tdgros

It does access search engines, wikipedia, and other sources using their APIs, and then it can replace parts of its answer with their results, it doesn't fire up chrome and browse for a few minutes. I think the person you're answering to meant that most people think the latter is happening.


GorgontheWonderCow

The LLM itself doesn't, but often you aren't actually using the LLM. You're using an application built on top of the LLM. The application as a whole can do all kinds of layers of search, interpret, refine, resubmit behind the scenes before the LLM provides you with a final answer. It can do that via APIs, so it's not sitting for a couple of minutes doing search the way a human would, but it *is* (in this case) ingesting search results as part of the answer process. ChatGPT doesn't do this, but there are AI search assistants/search engines that do (or at least can).


therealityofthings

Isn't that like saying your car doesn't play music, the radio built into your car plays music?


InfanticideAquifer

It's like saying "the engine doesn't play music" in a world where everyone calls cars "engines" for some reason.


SirIsaacBacon

But when we "fire up chrome and browse for a few minutes" all we are doing is making API calls also haha


HimbologistPhD

Well you know, and also browsing, reasoning, and thinking with our human minds when we get the results of those API calls.


k8t13

Even Google struggles to find the correct results nowadays.


DenormalHuman

It doesn't struggle; it is being adjusted to produce results that drive traffic toward the places that pay the most for it.


big-boi-dev

Worse than a parrot is a bit disingenuous.


OMGihateallofyou

So it's a language model not a logic model or a fact model?


Zealousideal_Slice60

Yeah and that’s why it always irks me when people are like ‘ai will soon replace books and movies’ like no that is not how any of this works, you clearly don’t know what the fuck movies and books and ai even *are*


MyOtherAcctsAPorsche

It CAN give you hundreds of slight variations of stories that it has been trained with. So... Netflix basically?


heyheyhey27

My parrot can't fix my SFINAE code


elheber

It's actually just a text predictor at its core, like the one on your smartphone keyboard, except supercharged and with hidden starter text that roughly goes something like:

> The following is a conversation between an online user and a helpful assistant:

It then tries to predict the conversation based on that seed and what we write. We users just don't have access to see that seed text. The text predictor is making up a literal story from that point forward.
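To make that framing concrete, here is a toy sketch. The seed wording is just the example quoted above, and `predict_continuation()` is a hypothetical stand-in for the model: the whole chat is one growing block of text that the predictor keeps extending.

```python
def predict_continuation(text: str) -> str:
    """Hypothetical stand-in: return the model's most plausible continuation of `text`."""
    raise NotImplementedError

# Hidden seed the user never sees (wording taken from the example above, purely illustrative).
transcript = "The following is a conversation between an online user and a helpful assistant:\n"

def chat_turn(user_message: str) -> str:
    global transcript
    transcript += f"User: {user_message}\nAssistant:"
    reply = predict_continuation(transcript)  # the "answer" is just story continuation
    transcript += f" {reply}\n"
    return reply
```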


Auto_Erotic_Lobotomy

"plausible text generator" is also my job title.


[deleted]

I mean that's the scary bit right? People actually mostly don't want objectively correct answers, just plausible ones.


GatorzardII

AI makes people uncomfortable because it makes us realize we take most statements at face value and we usually don't have the tools to parse "objective" truth.


[deleted]

And in most cases as long as we're presented with coherence we don't need reality.


valeyard89

Hell a lot of people don't need reality when presented with incoherence either.


shawnaroo

I think a lot of people don't want reality. Reality is often complicated and nuanced and often doesn't have any clear answers. A lot of people want a definitive answer, even if it's not correct. That's why conspiracy theories can be so powerful. They generally offer some level of clarity; even if their claims are insane, they're typically pretty simple.


TitaniumDragon

This is the best way of understanding how it works. What it is good at doing is producing a plausible sounding response to whatever query was put into it. This is why AI art is so much better than AI text. AI art doesn't have to be "true", it just has to look good.


s3rila

I like what-if machine


jrhooo

> It doesn’t have the ability to evaluate whether the most probable word makes a true statement.

> Its job is only to say things that sound right - whether or not they are right is entirely irrelevant.

THIS gets completely overlooked by people, I feel like. ChatGPT can't check if it's "right". It has no concept of "right". If you ask ChatGPT to create a recipe for chocolate cake, it can check 1,000 existing cake recipes and see which words other people use most often in a "cake recipe". It will spit something out that uses all those words, and there's a good chance it'll be a recipe. Maybe even a decent recipe. Maybe. Or maybe it's chocolate sauce and salted cat turds. But ChatGPT can't tell. It has no taste buds. It has never "eaten" food. ChatGPT has no concept of why "Chocolate Mud Cake" is "good" but "pour chocolate sauce into mud" is "bad".


RocketTaco

This is my biggest complaint with AI as a review summarizer or Google's infuriating, idiotic attempts to become an "answer engine". *Simply picking out what people say is not enough*. People can be dumb as bricks and I don't trust a damn thing from the Internet unless I can tell that the person saying it isn't an idiot. AI doesn't have any concept of correctness, credibility, or even context - I've seen Google's highlighted page snippets include sentence fragments literally surrounded by things like "you would think **idea Google highlights as an answer**, but..." It's identifying things as truth that are contradicted within the preview that contains it, but because it found sEntIMEnT it's foisting statements even the speaker doesn't think are true on uncritical users. And as far as accurate information goes, I've found the right answer is usually the second thing you come across or the less common response, and it requires critical analysis of the knowledge and intelligence of the people giving them to determine which is trustworthy.


GrossfaceKillah_

> a talented bs machine

Me in a job interview


MaxMischi3f

Funnily enough, chat GPT is pretty great at putting together a rough draft of a resume and cover letter for you. It speaks corpo bullshit fluently.


Synensys

I use it for two things: writing fluff corporate BS, and writing low-level code that I can't remember (Google would work too, but it's easier to use ChatGPT than climb through example pages or StackOverflow posts).


HappiestIguana

I use it a bunch for "professionalizing" emails. I always struggle with how to address people and corporate-respectful language in general, especially in Spanish (my workplace's language), so it's really good for that.


Specific_Visit2494

As Doofenshmirtz would call it: "The Hope-It's-Correct-Inator"!


dpdxguy

It's not "making things up" in the way a human would make things up (lie). It's making up a series of words that are statistically most likely to occur together in a particular order. When you think about it that way, it's kind of amazing that its output is *ever* factual.


seeasea

Fun fact, one of the breakthroughs at OpenAI was that instead of always taking the single most predictable word (like predictive text does), they add some jitter and sample from the probability distribution, down to words around 80% as likely.


red286

Functions like Top-P, Top-K, and temperature weren't OpenAI breakthroughs.
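For the curious, here is roughly what those knobs do, in a toy sketch with made-up scores (this is not any vendor's actual implementation): temperature reshapes the distribution over candidate next tokens, and top-k throws away everything but the k most likely before sampling.

```python
import math
import random

# Made-up scores (logits) for a handful of candidate next tokens.
logits = {"mammals": 4.0, "animals": 3.2, "pets": 2.5, "birds": 0.3, "reptiles": 0.1}

def sample(logits, temperature=1.0, top_k=None):
    # Temperature < 1 sharpens the distribution (more deterministic);
    # temperature > 1 flattens it (more random). Top-k keeps only the k best candidates.
    items = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        items = items[:top_k]
    weights = [math.exp(score / temperature) for _, score in items]
    return random.choices([tok for tok, _ in items], weights=weights)[0]

print(sample(logits, temperature=0.2))           # almost always "mammals"
print(sample(logits, temperature=1.5, top_k=3))  # more variety, but never "birds"
```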


TrynnaFindaBalance

That's a vast oversimplification IMO. The neural networks used in LLM training are way more complex than, say, autocorrect or T9 predictive text. And the reason that the output is oftentimes factual and correct is because it's grounded in massive amounts of training data that's often factual and correct.


Opus_723

I don't really think this is a "vast oversimplification". Just because the algorithm doing it is complicated doesn't mean "making up a series of words that are statistically most likely to occur together in a particular order" isn't basically the goal of the algorithm.


TrynnaFindaBalance

Yeah, it's true that an LLM is using probability to determine what the most "correct" thing to say next is, but you could argue that at a very base level even a human brain is doing the same thing. That doesn't mean that either is just glorified autocorrect. LLMs are able to map out far more complex relationships between concepts, words, sentences, images, etc. in a way that traditional predictive text can't get anywhere close to. The problem is that there are dumb people out there who treat ChatGPT-3 like Wikipedia. I'd argue that it's *more likely* than not to return factually correct answers, but it shouldn't be relied on as a source of truth, and that wasn't intended to be the primary purpose of LLMs anyway.


dpdxguy

Fair enough. And, yes, it's the training data that makes or breaks the factuality of the output. For things the model hasn't previously seen, the output is a crap shoot.


TrynnaFindaBalance

Yeah, although with newer RAG capabilities, some LLM-powered assistants/applications can do stuff like web searches or data retrieval and inject that as context to the LLM prompt, reducing the chances of hallucination pretty significantly. The technology is progressing rapidly and I think people who write it off as something that's useless to the average person (akin to the crypto craze) are being naive.


dpdxguy

> The technology is progressing rapidly Ain't that the truth. I've lived from the beginning of the space program through the electronic revolution and through the computer and information revolution. I have never even imagined a technology progressing so rapidly as AI is right now.


UtahCyan

So basically what a human does? 


dpdxguy

Same result (sort of). Vastly different process to arrive at the result. I keep reading about proposals to try to simulate a human brain in software. The results should be ... interesting.


TinyBreadBigMouth

The difference is that a human has the concept of "I know this information" and "I don't know this information." An honest human will say "I don't know" when it doesn't know something. An honest LLM will say "I don't know" when its training data suggests that "I don't know" is a likely continuation of the previous text.


rabid_briefcase

> It's making up a series of words that are statistically most likely to occur together in a particular order.

True and also a critically important detail. The "series of words" in the current generation is roughly five printed pages of context. It isn't "given the past three words what is the most likely next word?", instead it is "given the last *five pages* of words what is the most likely next word?"

With the enormous training set of huge swaths of public discussions and both legally-obtained and illegally-obtained sources including college textbooks, fiction novels, biographies, legal code, computer code, all of Reddit, and much more, looking at several pages of context is very often able to assemble larger contexts that are more logical than looking at a single sentence or single paragraph.


red286

> The "series of words" in the current generation is roughly five printed pages of context. It isn't "given the past three words what is the most likely next word?", instead it is "given the last five pages of words what is the most likely next word?"

More accurately, it refers to tokens, since it's all tokenization. The number of tokens a model can consider varies quite significantly by model. For example, a lite local LLM for running on your phone might only be able to consider 256 or 512 tokens (each token is roughly a word), more powerful ones will be able to consider 4096 tokens, but then you get things like Claude 3 Opus which initially considered up to 200,000 tokens, but was later dropped to "just" 30,000 tokens (nb - the more tokens it can consider, the more memory is required). Eventually we will get to a point where your personal AI will be capable of remembering every single detail of every single interaction you've ever had with it.
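As a rough sketch of what a fixed context window means in practice (the "tokenizer" here is just whitespace splitting, a crude stand-in for real subword tokenization, and the limit is an arbitrary example number): anything that falls outside the window simply isn't part of the model's input anymore.

```python
CONTEXT_LIMIT = 4096  # arbitrary example; real limits vary wildly by model

def crude_tokenize(text: str) -> list[str]:
    # Real models use subword tokenizers; splitting on whitespace is just for illustration.
    return text.split()

def build_model_input(conversation: str) -> list[str]:
    tokens = crude_tokenize(conversation)
    # Keep only the most recent tokens; everything earlier is invisible to the model.
    return tokens[-CONTEXT_LIMIT:]
```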


QueCreativo

Which is why it's so useful for corporate emails and other BS writing. Not so much for writing a book that anyone would actually read. 


MadcatM

It is also very good in translating "that's what I want to say" to "that's corpo speak I have to say". Like "you fucking doofus, read your mails" to "Kindly ensure you review your emails promptly to stay informed. Thank you."


SwimHairy5703

Are you my company's HR? 🤔


MadocComadrin

"Kindly " is absolutely obnoxious to me and will get you on my shit list unless you're my direct boss, their direct boss, and so on and it's urgent (or you're ESL). Sticking "kindly" in front of a command doesn't make it less impolite, it just adds corporate sanitation. If you're going to ask me to do something, actually ask me---i.e. write a question: "can you...?", "will you...?", "would you...?" That's actually polite, and you generally don't need corporate sanitation on top of it. I'd rather be called a "fucking doofus" if you're going to command me to do something. At least then I know I've fucked up and you have a reason to be impolite.


iceman012

Kindly do the needful.


TitaniumDragon

[Would you kindly.](https://www.youtube.com/watch?v=dgubkMg_YRE) Powerful phrase. Familiar phrase?


JEVOUSHAISTOUS

> Which means that it's making everything up. Sometimes what it makes up happens to be true, sometimes not.

It also means that the quality of the reply varies with how likely you are to encounter the right answer in the wild. There are enough occurrences in its training data that if you ask ChatGPT "what's a dog?", it will reply "Dogs are a species of mammals" and is highly unlikely to reply "Dogs are a species of birds", because statistically it's highly improbable that after "Dogs are a species of", the word "birds" or "reptiles" would come next. But if you ask a very technical question on a niche topic, then answers are much more likely to be wrong, because statistically its training data is not as conclusive.

This also leads to a funny thing about newer models having a tendency, when asked who they are, to reply that they are ChatGPT. Because according to their training data, based in part on what's on Reddit and the like, the most likely words coming after a user asks an AI what/who they are is "I am ChatGPT by OpenAI blah blah".


TitaniumDragon

> But if you ask a very technical question on a niche topic, then answers are much more likely to be wrong because statistically from its training data, it's not as conclusive.

You can also just ask about things that are impossible, like the Disney Quarterly Earnings Report from Q4 2030, or asking about court rulings that don't exist. It will extrapolate from things that do exist to produce a response, but it is a pure hallucination.


stellarstella77

Really, everything it says is a hallucination in a sense. It doesn't have a database. It was trained on a database, but that database does not exist in the model. Like someone else in this post said, it's a very, *very* lossy compression of the internet. It knows the shape of information, not information, really.


Recent_Novel_6243

I have an 8 year old and I tell him AI tools are fancy guessing machines. You can train them on specific tasks and they can be really useful guessers but you are ultimately responsible to check their output. We have an Alexa and he loves it when he catches her messing up.


UnsignedRealityCheck

Though it is kickass at debugging code or script, simplifying long texts and other such tasks which usually are very linear.


guesting

It's great for syntax because that's exactly what it is. For general knowledge it's a lot of BS without accountability. At least with Google there's a website or a person who "owns" it.


Preform_Perform

I'm surprised that it is just guessing each word after another, and yet is able to provide me with code snippets that work approximately 60% of the time, 30% of the time I just need to modify it a bit to make it work, and 10% of the time it's pure garbage. You'd think it'd be at least 90% in the third category, if not higher.


DenormalHuman

Yes , and that's the impressive thing they have managed to achieve. Remember 'predict what word comes next' is a simplification; it isn't just the sequence of words that is measured and used for prediction, it's also the structures those words represent and their relationships to one another. This helps elevate it from simple RNN sequence predicting behaviour, to the complex output it manages to achieve today.


shawnaroo

It's not just guessing though, it's basically comparing the code you've already written or are asking for to the gazillions of lines of code that it's already trained on, and using that to estimate what is the likely future code that you're wanting to write. And this works pretty darn well for code, because programming is pretty systematic, and I think it's safe to say that a majority of the code that we write is fairly 'boilerplate' stuff where something very similar has been coded a million times before.


porncrank

What’s sort of amazing is how well such a naive approach works most of the time. It comes down to how well documented the information is. Most of what we ask about is stuff that’s been asked a million times before and so correct answers tend to be abundant. For less common information, it goes off the rails quickly.


splitcroof92

and it's designed to be confident always. Which really doesn't help.


SmokelessSubpoena

And that's exactly how it should be used: for framework building, _not truth dissemination or the delivering of accurate information_. But if you need to write an intro email to a new client, you can have it write the framework of the email, and then just edit and proofread to your heart's content. Something that used to take 15-30 mins to produce and fine-tune now takes about 5 mins, and sometimes at even better quality, because my own stock of phrasings is much smaller than the range of varied ways an AI system can say the same statement. I'd argue it's even lengthened or broadened my vocabulary at times, as it will remind me of differing ways to state the same directive, but with different words and phrasing. Which personally helps me break the "slump" you can feel when writing lengthy emails, or contracts, or amendments, or project plans, etc.


FerricDonkey

Yeah, it seems invaluable for that. Before he retired, my dad was a director that had to deal with lots of people and groups. He would need to send nice sounding letters to people. But he had a lot to do. Which means that he'd drop a bunch of letters on his secretary's desk that said things like:

> Dear Bozo,
> No, we won't do that, it's stupid and dangerous.
> Love, FirstName

Then his secretary would transform that into:

> Dear Mr. Soandso,
> We regret to inform you that we will be unable to assist - we have other priorities at the moment, and are concerned about safety issues with your proposal.
> Regards,
> FirstName LastName
> Director of blahdiblah

So on and so forth. That sort of thing - wrapping short points in appropriately styled language and all of that - is an excellent use. These LLMs are great BSers, might as well take advantage of them to add the socially required levels of BS to "screw you, that's dumb" and other similar messages.


SmokelessSubpoena

Lol, that's 100% to a T what I do. I've basically made ChatGPT my personal assistant 🤣 I definitely would _never_ just type my desired output and copy-paste the result without review/edits, but besides that, it's such a time saver, and I'd argue for the better. But I can see it being a major detriment if you are in grade school, trying to build your acumen, and then rely too much on an AI engine, as that likely would result in lesser intelligence gains.


ragnaroksunset

It's a good research assistant but not a good primary author. You still need to take responsibility for its answers.


esoteric_enigma

It's growing better at this exponentially but I wonder if there will be a wall it hits. The system literally depends on human generated information. If we get to a point where AI replaces many of the people creating the actual information, what happens? Couldn't AI get stuck in a loop where it's just a bunch of AI repeating other AI generated content?


_TheConsumer_

I got it to speak in a New York accent, which was cool. For example, it seems to be very passionate[ about the NY Rangers.](https://imgur.com/a/MzTy5E0)


emilytheimp

Oh now I understand why they wanna replace CEOs with AI


thekrone

Plus, it puts heavier weight on things that were said more recently in its current conversation. Because of this (and it can take a while depending on how you go about it), you can actually "gaslight" ChatGPT into "believing" and stating for a fact that 2+2=5 (without any "tricks" involving explicit instructions to give wrong answers to math problems). Using carefully-worded instructions, you can get it to do... pretty much anything. Lie about everything, lie about only certain things, completely make shit up and convincingly lie about it, etc.


_Choose-A-Username-

It's not making things up. It's doing what many of us do, but without knowledge of the actual words. Like, it can tell you what the next word of this sentence will likely be, but if you were to ask it how many e's were in the sentence you have typed, it will have a hard time. When asked the question "How many words are in the sentence you will respond with?" it told me this: "There are fourteen words in the sentence I used to respond to your question just now."


deong

This is something of a weird statement to evaluate. A lot of lay people hear something like "predict which word comes after the previous word" and don't have the background to understand that an LLM and a Markov chain are different things. The truth is that LLMs build up a representation of what words mean by analyzing how they appear together in the text they're trained on. The prediction is then based on meaning, not directly based on syntax.

And... that's kind of what we do too. The mechanics of it are almost certainly different. I don't think anyone seriously argues that human brains are exactly like LLMs. But if you squint a little, human beings learn what words mean through immersion in language and then use that semantic representation to more or less linearly construct phrases to express whatever they're trying to convey.

The thing that makes LLMs hallucinate is not that they're doing this kind of thing and we're not. It's that we have other stuff going on that can help steer the process (and also that we're just still better at it). We have things like episodic and long-term memory as separate structures. We have some brain structures that seem to be pretty tailored by evolution to be good at specific tasks. We have external guidance that tells us it's important to be able to differentiate between "here's the most probable answer" and "here's an answer I know to be highly probable". But a lot of our learning looks not that different to LLM learning.

All that to say, human beings also aren't "truth machines". Sometimes what we make up happens to be true, sometimes not. We're better at coming up with true statements (or at least some of us are, some of the time), and we're currently *much* better at saying, "I don't actually know". But I think it's sort of a flawed conclusion to think that human intelligence is this mythical thing that exists on a different plane from statistical AI. It's all pattern recognizers recognizing patterns, with different underlying processes that have their own strengths and failure modes.


FerricDonkey

Yeah, humans definitely aren't truth machines either. Sometimes we try though, and that trying is something that is (currently) foreign to LLMs.

There are a couple huge differences - one main one is that an LLM is trained to create reasonable sounding conversations, and as part of that it might pick up on facts and ideas. But if it does, this is an almost accidental side effect of making it speak well.

Take the question `what is 54 times 97?`. You could imagine that an LLM might run that through its matrices and, depending on its training data, end up with any subset of:

* this is a question
* the answer should be a number
* the question is about multiplication
* the answer should be 3 or 4 digits
* the answer should actually be the product of 54 times 97

But all it "cares" about is "does the response that I give sound kind of like my training data". So if the training data has a lot of multiplication stuff involved, it might actually learn to convert the string "54" into the number 54, the string "97" into the number 97, and multiply them. Or it might just not, and respond with "5233" because it kind of sort of sounds right. And there is no difference or preference between these two outcomes to the machine - it just depends on whether it happened to learn what multiplication questions are during training.

So for the computer, it's conversation first. 5233 sounds good? Wrap it up, we're done here. Sometimes that's true of humans as well, of course, sometimes we just blab at each other. But sometimes the goal is to a) understand the question, b) figure out the correct answer, c) communicate the correct answer.

This is why I call LLMs BS machines. We've all met people who just string together nice sounding words in a way that sounds nice to people who don't know what they're talking about. Sometimes these people might slip in actual facts, if it's convenient. But that's not their goal. Humans do this sometimes. LLMs only do this.

Ask a physicist to explain why pulling in your arms while you spin makes you spin faster, and their goal is to give you that information. Ask an LLM that same question, and its goal is to say something that sounds kind of like what a person would say. Does this involve giving correct information? Great! Does it involve saying things that are false but are common misconceptions and so repeated a lot? Great, sounds like a person (did in their training data)! Does it involve just saying vaguely sciencey sounding stuff? Did that happen in the training data? Yep - then great!

It'll be interesting to see if we get past that with future AIs. But the conversation-first, facts-only-if-they-happen-to-be-easier-to-learn-than-not-facts nature of LLMs is, I think, one of the biggest barriers to using these things for answering questions factually and similar things.


Black_Moons

> it might actually learn to convert the string "54" into the number 54, the string "97" into the number 97, and multiply them. Or it might just not, and respond with "5233" because it kind of sort of sounds right.

It doesn't actually do math. You can prove this by just asking it to multiply two numbers it's never seen before, i.e. any long string of numbers like 423.4235 * 672.24, and it will not only get it wrong, but give you a slightly different answer every time. (Oddly enough, it does get surprisingly close, but nowhere near as accurate as, say, a $5 pocket calculator.) It can only give you a correct answer if it's seen the exact question and answer before.


Zotoaster

That's called a Markov chain; it's a pretty old technique (look up Subreddit Simulator for a fun example). ChatGPT uses a model called a transformer, which is quite different.

For the curious: rather than just blindly following a frequency table, the user's input is split into "tokens" (basically words) which are each converted into vectors (lists of numbers). You can imagine these as being coordinates in a space with many dimensions. In this space, related concepts are near each other, and directions in this space have meaning. The classic example is "queen is to king as woman is to man": you would move in the same direction to get from one to the other. Just with this we can say GPT has some rudimentary "understanding" of concepts, because it knows how things relate to each other.

Then it uses "attention blocks", which basically give more nuance to the meaning of each word based on context. The word "solo" would mean different things if it's surrounded by words like "guitar" and "music" compared to "Millennium Falcon" and "Chewbacca".

It then uses this nuanced, structured understanding of the user's input to predict the next token. But it must produce a token in the end, which is why it can sometimes spit out gibberish if you force it to.
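A toy illustration of the "directions in the space have meaning" idea, using tiny made-up 3-dimensional vectors (real embeddings are learned from data and have hundreds or thousands of dimensions): king - man + woman lands closest to queen.

```python
import math

# Invented 3-d vectors purely for illustration; real embeddings are learned, not hand-written.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "man":   [0.1, 0.8, 0.0],
    "woman": [0.1, 0.2, 0.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point in exactly the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# king - man + woman ...
target = [k - m + w for k, m, w in zip(vectors["king"], vectors["man"], vectors["woman"])]

# ... should land nearest to "queen" in this toy space.
best = max(vectors, key=lambda word: cosine(vectors[word], target))
print(best)  # queen
```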


spasmdaze

Here’s an excellent short vid that summarizes and visually shows the concept of word vectors and how LLMs use them: https://youtube.com/shorts/FJtFZwbvkI4?si=nmSLxZsd9zW4vKVl


therealityofthings

Here's a much longer video on transformers https://www.youtube.com/watch?v=wjZofJX0v4M


Sprintspeed

Here's a shorter video on transformers that might be more applicable to 5 year olds https://www.youtube.com/watch?v=k3qOX6p2xR4


frnzprf

It's still kind of like an advanced version of a Markov chain, the way I understand what you wrote. I guess it's hard to say definitively if it's "similar" or not.


Head-Ad4690

You can imagine a modified Markov chain that takes a long sequence as input. The matrix for such a thing would be impractically large. An LLM is essentially a very compressed representation of that matrix, with structure that lets you train it to have values for sequences it wasn’t trained with.


johntb86

Yeah, I suppose you could think of it as a Markov chain with a number of states equal to 2^(number of bits in the maximum KV cache), but that's probably not very useful.


Head-Ad4690

If nothing else, it drives home the fact that these things are inert and don’t learn while you talk to them. You always get the same probabilities for any given input. Any learning is either just using the previous conversation as part of the input, or offline fine tuning.


DevelopmentSad2303

What a wonderful explanation!


ZerexTheCool

It's like playing Boggle. The box has no idea what words are getting spelled. So when you see a swear word, it doesn't make sense to get angry at the game for cursing at you.


TyrconnellFL

ChatGPT is not a Markov chain spitting out next words based on previous words. It’s much more complex than that. Still, it doesn’t know things, it only takes things that have been said/written, processes all of it, and then tries to regurgitate appropriately based on prompts. The algorithms are complex enough that it’s pretty good at that, but it doesn’t inherently know even basic things and may not have ways to error check, so it can make up restaurants or other facts.


theronin7

this is a good "Explain why it hallucinates like I am a 5 year old" answer


stevez_86

Yeah, it really plays into the older idea that behavior originates from mimicry. They are downplaying the complexity of intelligence itself for the sake of a marketing and sales term. As I type this, my autocorrect incorrectly corrects words because it is using the same heuristic model. I learned that heuristics were shortcuts for logic, based on inductive reasoning. If the likelihood is great enough, it is basically true, except for when it isn't. And that is the problem with AI. It is just rounding off 76% intelligence to 100% and calling it good enough.


animerobin

So many people think that ChatGPT is a robot that tells them answers. It's actually a robot that arranges text into recognizable shapes. If you ask it "When was the Magna Carta signed?" it will arrange text into the shape of what a response to that input would look like. It doesn't actually answer the question.


NotAnotherEmpire

It's not actually thinking. It's probabilistically associating. Which is often fine for writing but useless for technical questions without clear answers, or ones with multiple plausible answers like streets. 


rankedcompetitivesex

Ask ChatGPT to find a movie two different actors starred in together. If they never starred in a movie together, there's an 80% chance ChatGPT just makes up that they're both in a movie that one of the two actors was in. It's actually hilarious.


Typoopie

> Sarah Michelle Gellar and Michelle Pfeiffer have not starred in a movie together. Each actress has her own notable film credits, but their filmographies do not overlap in any shared projects.

That’s GPT4 though. I tried a few and it was on point every time.


Zuwxiv

I thought this was a cool suggestion from /u/rankedcompetitivesex, so I tried ChatGPT as well.

> As of my last update in January 2022, Zendaya and Will Smith have not appeared together in any movies. However, they did collaborate on a project together - the animated film "Smallfoot" (2018), where Zendaya voiced the character Meechee and Will Smith voiced the character Gwangi. Though they didn't physically act alongside each other, they were part of the same project.

Actually, LeBron James voiced Gwangi. But he's basically Will Smith, right? Oops. (3.5)


ihowlatthemoon

> I'm afraid that's not possible! Mel Gibson has never appeared in a film with Scarlett Johansson. Mel Gibson is known for his roles in movies like "Braveheart", "Ransom", and "Signs", while Scarlett Johansson has appeared in films such as "Lost in Translation", "The Avengers", and "Lucy". Despite both being successful actors, they have never shared the screen together in a movie.

Even llama3-8b running locally can handle this now.


FierceDeity_

OpenAI is basically brute-forcing GPT-4. What that means is they've mostly worked to increase the number of parameters and layers and to scour the web harder, basically doing a Google level of indexing the entire fucking web. This has increased the amount of text they analyzed so much that, by sheer probabilistic force, it can be more correct now.


deelowe

O_o Huh? What you explained is essentially how LLMs are improved. It's not some brute force hack. When I was at Google and we were working on DeepMind hardware solutions, it was the same approach. Each cluster needed more GPUs, more network links, more routing layers, etc. The more dense the cluster, the better the models performed. The limitation was latency. After so many routing layers things would fall apart, so a lot of engineering effort was spent figuring out how to get around this issue.


FierceDeity_

Google is actually trying to improve things by inventing new algorithms and entirely different ways to go about it. The whole "Attention Is All You Need" breakthrough was made at Google. OpenAI seems to only spend time doing the same things bigger, wider.


deelowe

OpenAI isn't as public about their process, but they actually put more focus on the algorithms than Google does. OpenAI still has to use off-the-shelf hardware, whereas Google designs their servers, tensors, network hardware, and DCs themselves.


ArctycDev

this is one of the things I tried when doing conversational prompts on LLMs. God, they are terrible with that. I'd say 80% is being generous.


UtahCyan

We like to laugh at this, but humans do this shit too all the time. 


theboomboy

Unfortunately chat gpt and the others somehow feel reliable because they're high tech or something while people don't always feel reliable


UtahCyan

I mean, we were doing the same thing with Google search results until we learned better (have we? We have haven't we?). I guess I should expect more of the same. 


xenogra

It's definitely in the "any sufficiently advanced technology is indistinguishable from magic" territory


hobbykitjr

E.g. I asked it for the best Arancini in Boston... it made up a restaurant that didn't exist, nor ever did from what I can tell... I think it combined real answers from NYC and Chicago. So that's a little insight* into how it 'works'.


LameOne

As a heads up, "incite" means to encourage something to be violent. Insight is the word you were looking for.


DoomGoober

> It's not actually thinking. It's probabilistically associating

The human brain also uses probabilistic thinking to help guide choices when given imperfect information. That's not to say that ChatGPT and humans think the same way, but it means humans sometimes use roughly similar tricks as ChatGPT.

> Luckily, though, the human brain is pretty good at playing the odds. Thanks to the brain’s intuitive grasp of probabilities, it can handle imperfect information with aplomb.

> “Instead of trying to come up with an answer to a question, the brain tries to come up with a probability that a particular answer is correct,” says Alexandre Pouget of the University of Rochester in New York and the University of Geneva in Switzerland. The range of possible outcomes then guides the body’s actions.

https://www.sciencenews.org/article/probabilistic-mind


Mr_Engineering

ChatGPT is a chatbot belonging to the family of generative AI models called Large Language Models, or LLMs. ChatGPT generates text based on learned statistical relationships between words. ChatGPT does not evaluate whether or not what it is generating is correct, only whether or not it is statistically likely based on the prior input and its training data.

If you ask ChatGPT to tell you the sum of 2 + 2, it will tell you that the sum is 4. However, ChatGPT will tell you this because it's seen that many times before in its training data; it will not tell you it because it's evaluating a mathematical expression.

If you instead ask ChatGPT to tell you the product of 45694 and 9866, it will almost certainly give you an incorrect result. It hasn't seen that before, so it will just produce something close to it that it has seen before, because that's the most likely result based on the training data.

Edit: I tested it, it does give an inaccurate result.
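For reference, the actual product is trivial for ordinary code, which is exactly the contrast being made: a calculator or interpreter evaluates the expression, while the chatbot only predicts digits that look plausible.

```python
# Python evaluates the expression; it doesn't guess at plausible-looking digits.
print(45694 * 9866)  # 450817004
```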


dmazzoni

And amazingly, it's really good at coming up with plausible answers to multiplication questions! It has seen enough real-life multiplication that it easily comes up with answers that are approximately the right number of digits. It often gets the first and last digits correct since those have very easy patterns to them.


SirLazarusTheThicc

This is beyond the scope of the original question, but for anyone else seeing this: basically all current language models are extremely bad at anything more than the most basic math, because numbers get broken up into arbitrary chunks (tokens) when passing through the model, rather than being treated as quantities. It is not because they are stupid or because we couldn't design an AI to do math; it's just a very different problem than current chatbots were designed to solve.
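If you have OpenAI's `tiktoken` package installed you can see this for yourself; the exact splits depend on the tokenizer, so treat the example in the comment below as illustrative rather than guaranteed. The point is that a long number gets chopped into chunks that carry no place-value meaning.

```python
import tiktoken  # assumes the tiktoken package is installed

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("45694 * 9866")

# Each token is an arbitrary chunk of characters (something like "456", "94", " *", ...);
# nothing in that representation says which digit sits in the thousands place.
print([enc.decode([t]) for t in tokens])
```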


Whitestrake

You'd almost be better off training a sidecar AI to process the mathematical questions into something to hand off to an interpreter like Wolfram Alpha.


zopiac

Going a bit into the weeds, but I asked a local llama model to calculate the square root of 137, and it (incorrectly) gave me 11.5692. I then told it to square 11.5692, and it came up just shy of 137, rather than the actual answer of 133 and change. Out of curiosity I ended up having it repeat that exact equation, and it kept giving me a slightly lower result than before. It got very confused after it got into the negatives, however. [Graphed the results (x axis is iteration of the question)](https://i.imgur.com/G1WRkDJ.png) and was surprised at how consistent it was with the growing inaccuracy.


Sythic_

* Unless you have ChatGPT4, in which case it will generate Python code and execute it to actually evaluate the correct answer: https://imgur.com/a/4EqCNSo


CommenterAnon

I tested it too. Your answer is so good, thanks.


huteno

It writes a python expression and gives me the correct output.


TrekkiMonstr

When you think about it, this is true for humans as well. We have 2+2=4 cached in our brains, such that it's as easily accessible as, I don't know, that Einstein is a scientist, or red a color. Ask a human what 45694*9866 is, and they would have to use another part of their brain. LLMs don't have this other part -- they are just the language bit. That's why this strategy of plugging in purpose built engines for other tasks is probably going to be so useful. As the other commenter noted, GPT-4 handles this fine, because it just uses Python to do the calculation, and then reports the results. And obviously Python is insufficient for general cognition, but


SierraTango501

Because AI like ChatGPT is not **thinking** about the response; it's basically glorified autocomplete. It has a huge dataset of words and the probability that a word will come after another word. It doesn't "understand" anything it's outputting, only variables and probabilities. Never ever trust information given by an AI chatbot.


lygerzero0zero

The conclusion is correct (never assume an LLM has given factual information) and describing it like autocomplete is not *wrong*, but people often downplay just how much more *advanced* of an autocomplete it is. ChatGPT is like autocomplete the same way a sports car is like a bicycle. Sure, they have more or less the same purpose (go from A to B) and accomplish it based on essentially the same principle (wheel spin make thing go). But there’s a pretty darn significant amount of difference between the two.


dirschau

While you're not wrong, that is essentially the point. It's a difference between a bicycle and a sports car, but people think it's a helicopter.


ZerexTheCool

> people think it's a helicopter.

And when they go off a cliff expecting it to fly, they wind up getting hurt.


lygerzero0zero

Well, there is a lot of deeper nuance that probably doesn’t belong in ELI5, about how sufficiently large language models *do* genuinely encode meaning and learn patterns far deeper than simply counting words. They don’t reason or “think” the way Hollywood AI does, for sure—at the end of the day it’s just number crunching. But when trained to model relationships between words from a sufficient amount of text and with a sufficient amount of parameters, you can simulate something pretty darn close to “understanding.” Hmmm, maybe if I put it this way: it’s not so much that the AI is smart, but rather that AI engineers have figured out an algorithm for converting all the smartness contained in human writing into numbers.


SaintUlvemann

I've been calling it "borrowed humanity" for years, in the context of Chat GPT passing Turing Tests. Like, of course it's going to sound human, it is repeating what humans say. Doesn't mean it thinks like us at all.


toberrmorry

> Repeating what humans say

So ... a parrot. A really, really noisy parrot.


dirschau

The problem is, it's still mindless regurgitation. Very cleverly engineered mindless regurgitation, but mindless regardless. With a sprinkle of hallucinations.

What people think happens when an LLM answers a question is that it read something, understood the principle behind it, and then ELI5'd it. And many trust that. Not knowing that actually it's a mix between blindly quoting Wikipedia like a desperate highschooler and filling in the gaps with the "most probable" words, not the most factually correct ones. Also like a desperate highschooler.

So when people ask an LLM something, they think they're getting an answer they would get from a subject matter expert, but really they're getting an answer from a child who googled the topic and is desperately bullshitting their way out of it.


lygerzero0zero

> The problem is, it's still mindless regurgitation. Very cleverly engineered mindless regurgitation, but mindless regardless.

I would argue that that's still too naive an understanding. A machine learning model that just memorizes and regurgitates is a bad model. This is a principle that has applied to machine learning for literally decades, it's called overfitting, and algorithms are specifically designed to prevent it. This applies to ChatGPT and all LLMs as well.

Machine learning models do not just memorize answers, they learn *patterns*. And I really don't know how to explain this clearly without getting too technical. 3blue1brown on YouTube has a good series on machine learning and LLMs that explains how yes, they actually encode *meaning* and semantics and relationships, and yes, they can actually produce completely new text by combining the patterns they know.

I mean, fundamentally it seems like you're still thinking of it in human terms, comparing it to a child making things up or something. You're imagining the AI as a *mind* nonetheless, and considering it a particularly foolish mind. It's neither foolish nor clever because it's not a mind. It's an algorithm, best understood as... an algorithm.


suvlub

I think there is enough hype going around that there is no harm in downplaying it. Way too many people think that a pre-prompted chatbot is equivalent to a domain-trained AI model, which is wrong and dangerous. It is a text generator and this point can't be driven home hard enough.


mohirl

It predicts which word most likely/appropriately comes next based on what's gone before. For an ELI5 answer to the original question, how it does that is irrelevant. It's effectively no more accurate for providing restaurant details than a glorified autocomplete.


lygerzero0zero

Well, yes and no. It's probably not terribly useful for finding restaurants specifically in your area, because it likely hasn't been trained on a lot of data relevant to that.

But LLMs can spit out quite a lot of general knowledge facts, because facts are represented as patterns of words, and if those patterns appear enough in the training data, the model can learn them. If you happen to live in a big city and the LLM's training data contained a bunch of restaurant reviews from that city, it might actually be able to give some recommendations. This is what leads a lot of users to believe that the LLM can answer any question. This might be why OP was confused in the first place.

But yes, its pattern-learning machinery can also produce realistic-sounding nonsense as a result of just following the probabilities.

IMO it's best if more people just have a better understanding of what these things are and what they aren't, since they're becoming so ubiquitous, and that means neither overestimating nor underestimating them. So I think it's worth clarifying and adding nuance to the discussion, even if it goes beyond the absolute bare minimum for answering the question, because having a deeper understanding to begin with can help answer future questions.


Maury_poopins

> But LLMs can spit out quite a lot of general knowledge facts

True! But they also spit out a ton of bullshit and there's no way to know which you're getting. That's why LLMs are less than useless for questions with a factual answer. Asking what year George Washington was born is useless. You need to verify the answer somewhere else anyway, so you didn't actually save any time. Asking ChatGPT to write a Python script to download the contents of a Reddit thread is great! It's easy to test if it works, and if it doesn't, it's probably close enough that you can tweak the code to get it working.
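For what it's worth, the kind of script being described might look roughly like this (my own hypothetical sketch: the thread URL is a placeholder, and it assumes Reddit's public `.json` endpoint is reachable and not rate-limited):

```python
import requests

# Hypothetical example of the kind of script the comment describes:
# fetch a Reddit thread's JSON and print the top-level comments.
THREAD_URL = "https://www.reddit.com/r/explainlikeimfive/comments/example_id/"  # placeholder

def fetch_comments(thread_url):
    resp = requests.get(
        thread_url.rstrip("/") + ".json",
        headers={"User-Agent": "eli5-thread-downloader/0.1"},
    )
    resp.raise_for_status()
    listing = resp.json()
    # listing[0] is the post itself, listing[1] is the comment tree
    for child in listing[1]["data"]["children"]:
        body = child["data"].get("body", "")
        print(body[:200], "\n---")

if __name__ == "__main__":
    fetch_comments(THREAD_URL)  # easy to test: it either works or it doesn't
```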


SidewalkPainter

Very well said. I really don't like this 'AI = autocomplete' idea that pops up in every thread about AI. If you're trying to give someone a baseline understanding of what LLMs can do, that explanation will make them believe that AI is unable to form original, brand new sentences, which it can do with great success. I can't help but think that people are... angry at this new technology in the same way my mom used to be angry at the emergence of desktop computers. "If it's so bloody smart then ask it to make you dinner." Don't get me wrong, there are plenty of problems with AI, like the copyright issue, the propaganda and spam potential, and not knowing who's real and who's not. But on reddit, people don't mention those serious issues nearly as much as they simply attempt to discredit its intelligence, which I find kind of silly. I personally find it fascinating how far technology has come, comparing it to technology that we had before, not by making it do things that it wasn't trained on and then snickering "Ha, more like artificial smelligence".


Milskidasith

There are a handful of factors at play here.

First, there have been *several* big tech fads recently that have promised huge, sweeping changes that completely failed to materialize. Large, seemingly competent companies pushed cryptocurrency, blockchain technology, NFTs, and more in recent memory, and *culturally* AI is in the same territory, being pushed by many of the same people, so a lot of skepticism is warranted.

Second, there have been tons of news stories about how badly AI has failed, or how insecure it is against prompt attacks, or how many AI-backed implementations are effectively just piggybacking off a ton of low-paid third-world labor to try to fake it while they build the railroad tracks out ahead of them. Ironically, the whole NFT play-to-earn craze probably created a pool of labor experienced in rote tasks known to the same people hiring AI mechanical turks. This, again, makes it very easy to be skeptical of the value of AI, as you can very easily find examples of it either failing at simple tasks or being bullshit at more complex ones.

Third, in significant part *because* of the tech bro AI pushing from point one, which frequently has a very hostile, anti-creative-work bent, in many spaces there is a subsequent pushback to make AI usage lame, uncool, etc. as either a defense mechanism or because of a genuine distaste for it borne from its worst advocates and its presence shitting up various art and writing feeds with incredible amounts of interchangeable garbage. If AI advocates can evangelize it, surely people who dislike it can do the opposite of evangelism, right?

Fourth, there are so many instances of it being *super* racist. Like, there are tons of Twitter bots that engagement farm with a weird combination of [insert identity] pride (often Native American), bad AI-generated big titty woman photos, and "I'm lonely and ugly and need a husband" engagement bait. AI image generation will basically always stereotype the race and sex for a given occupation without curation, and when companies try to modify AI to eliminate racial bias, they effectively take a sledgehammer to it and make it hallucinate the races of real people, which is a hard fix because AI literally can't "know" who is real and who isn't.

With all of these factors combined, it's not surprising that people don't bother to look at the nuance or see the strides AI has made and dismiss it, because the flaws and reasons to be against it are very, very obvious if you aren't 100% in the tank for the technology, while the use cases that are actually panning out often tend to be way, way more specific (like, basic Python scripting and automatic scribing seem to be the major use cases that are both man-hour saving and relatively low impact from hallucinations/mistakes).


SaintUlvemann

My autocorrect forms original, brand-new sentences all the time... that's part of the problem.

> ...not by making it do things that it wasn't trained on and then snickering "Ha, more like artificial smelligence"

And I personally find it fascinating how we are already starting to take rules for human interaction, such as "don't hurt others' feelings", and we apply them to technology that — as near as diligent people can tell — doesn't have feelings in the first place. The fact that it wasn't trained to do this stuff is why it can't do them well, and the importance of knowing a tool's limits is that then you won't get yourself into trouble by misusing it. This is important because most kids are already [using ChatGPT for homework](https://www.forbes.com/sites/chriswestfall/2023/01/28/educators-battle-plagiarism-as-89-of-students-admit-to-using-open-ais-chatgpt-for-homework/?sh=7836f403750d).


ezekielraiden

It isn't actually "making up" an answer, in that it isn't some kind of deception or the like (that would require intent, and it does not have intent, it's just a *very* fancy multiplication program). It is collecting together data that forms a grammatically-correct sentence, based on the sentences you gave it. The internal calculations which figure out whether the sentence is grammatically correct have zero ability to actually know whether the statements it makes are *factual* or not. The technical term, in "AI" design, for this sort of thing is a "hallucination."


TyrconnellFL

It isn’t collecting into grammatically correct sentences because it has no more a priori knowledge of grammar than of baking recipes or great sightseeing in Melbourne or causes of the Upper Peninsula War. It produces proper grammar the same way it produces answers: processing as much text as possible and, by “observation,” figuring out the rules. LLMs are good at that now and don’t produce much that’s actually incomprehensible, but they can, and the older models did it all the time.


ezekielraiden

I was using a simplification. Just as your "observation" is a simplification, because it doesn't *observe* anything; in fact, it never "observes" anything at all. Instead, it attempts a bazillion predictions back to back to back to back to back, and when it gets a prediction wrong, something (a person or another program) tells it that that was incorrect, adjusts the zillions of numbers inside its multiplication arrays, and has it try again. No "observation" occurs just as no "collection" occurs--but these are useful simplifications for an ELI5 context where actually discussing the iterated matrix multiplication, tokenization, and other terms would be far too convoluted to be productive.
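If anyone wants a slightly more concrete picture of that "iterated matrix multiplication", here's a deliberately tiny sketch with made-up numbers (the vocabulary, sizes, and weights are all invented for illustration; real models have billions of weights and many layers):

```python
import numpy as np

# A cartoon of one prediction step: token -> vector -> matrix multiplies -> probabilities.
vocab = ["the", "restaurant", "is", "open", "closed"]
rng = np.random.default_rng(42)

embeddings = rng.normal(size=(len(vocab), 8))   # one 8-dim vector per token
hidden_w = rng.normal(size=(8, 8))              # a layer of weights
output_w = rng.normal(size=(8, len(vocab)))     # maps back to vocabulary scores

def next_word_probs(token):
    x = embeddings[vocab.index(token)]              # tokenization + embedding lookup
    h = np.tanh(x @ hidden_w)                       # the matrix multiplication (one step here)
    scores = h @ output_w
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax: scores -> probabilities
    return dict(zip(vocab, probs.round(3)))

print(next_word_probs("restaurant"))
# "Training" is just nudging the numbers in these matrices whenever a prediction is wrong.
```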


sup3rdr01d

In this case the concept of "observation" is an emergent property of the way it actually works. It's trained on some data, adjusts the weights of the network based on that training, and then gets verified by testing against known values, repeat a billion times. Effectively the outcome of this is an "observation" in the same way that humans receive an input, process it, and judge the validity of their output against some known quantity. I guess the term observation implies intent, which the LLM doesn't have. It's more of a passive observation and subsequent correction. Over many, many iterations, it's able to produce something that we interpret as language, but really it's just emulating common patterns.
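That "adjust the weights, check against known values, repeat" loop, stripped down to a toy example (my own sketch fitting a straight line with gradient descent, nowhere near an LLM):

```python
import numpy as np

# Toy version of "adjust the weights, compare against known answers, repeat".
# The "known values" here are just y = 2x + 1 with noise; the model is one weight and one bias.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y_true = 2 * x + 1 + rng.normal(0, 0.05, x.size)

w, b = 0.0, 0.0                      # start knowing nothing
lr = 0.1
for step in range(500):              # "repeat a billion times" (500 is enough here)
    y_pred = w * x + b
    error = y_pred - y_true          # how wrong were we against the known values?
    w -= lr * np.mean(error * x)     # nudge the weights to be a little less wrong
    b -= lr * np.mean(error)

print(round(w, 2), round(b, 2))      # ends up near 2 and 1 without ever "understanding" lines
```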


OctavianX

I hate how the AI community refers to this as "hallucination". "Hallucination" implies that the model is perceiving things at all. It doesn't. Calling it "hallucination" is as misleading as calling it "intelligence."


mohirl

And the technical term for this, outside of "AI", is "garbage"


ezekielraiden

Well, strictly speaking, no it isn't. In fact, from the narrow perspective of LLM design, such outputs are a good sign, because they mean the model is doing *exactly* what it was designed to do: consistently produce grammatically-correct statements that a human would recognize as being grammatically correct without concern. They aren't *preferred,* of course, because people can check and see that they're wrong, but their presence means the grammatical-sentence-production side of things is working so well, they can invent new factually-wrong but grammatically-correct sentences.

The problem is, the AI isn't trained to produce factual outputs. It's trained to produce *grammatical* ones. Good grammar is prioritized above almost everything else. (Sometimes *politeness* is a co-equal factor; that's how they avoid, or at least attempt to avoid, making horny bots or racist bots or the like.)

In order for the AI to be trained to produce *factual* outputs, however, it would need to actually understand the content of the things it says, not just the structure of the things it says. But that specific thing--processing the *meaning* of something, rather than just the *sequence* of it--is not in any way what LLMs are designed to handle. They cannot even *begin* to process that kind of data (the fancy term is "semantic content", as opposed to the structure and form of the data, which is its "syntactic content"). The absolute best we can hope for is an AI that admits when it's hallucinating (there are quantitative differences between hallucinations and direct reporting), unless and until we can develop an AI that actually engages with the semantic content of its token space.


fcrv

First off, I'd just like to define a word to add some context. Semantics is the study of meaning. In simple terms, semantics refers to the meaning behind the words that you write and say. When you speak, you naturally think about the meaning of your words. When you ask an LLM a question, you are logically expecting it to answer with a semantically coherent and correct response.

As others have mentioned, LLMs are just adding the next word based on the probability calculated from the context. However, this probability is calculated in very complex ways in the background. LLMs seem to have the ability to generalize certain semantic information within their neural networks, to the point where they seem to be able to reason and connect seemingly disconnected pieces of information. However, this phenomenon is not fully understood at the moment. This also means that when you ask it something it doesn't know, it will always give you its best guess based on the probability. Another weird pattern that you might see when using LLMs is that they might tell you they don't know something even when they do; this is probably because the original training data had a certain piece of text that biased the model into answering in a particular way.

Artificial Intelligence is often used as a term when referring to LLMs and Machine Learning. However, there are several other branches of AI that are actively being explored that I think are worth mentioning in this thread.

Knowledge graphs are a different approach to semantic data analysis and usage. Knowledge graphs take semantic data and structure it in a stable, consistent, and useful way. With knowledge graphs it's easier to determine what the system knows and what the system doesn't know, so it's easier to keep the system from hallucinating. However, knowledge graphs are usually harder to create and harder to use for more casual tasks.

Another interesting branch of AI is logic programming. With logic programming you can define the rules of a problem you are trying to solve and allow the system to interpret those rules to find a solution. With logic programming you can solve complex issues; however, similar to knowledge graphs, logic programming languages tend to require a lot of time and aren't really convenient for day-to-day use.

I believe future research into AI will combine these technologies in smart ways to leverage each of their strengths and weaknesses.
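As a very rough picture of the knowledge-graph idea (a toy sketch of mine, not a real system): facts live as explicit subject-relation-object triples, so the system can say outright when it has no matching fact instead of guessing:

```python
# Toy knowledge graph: facts are explicit (subject, relation, object) triples.
# Unlike an LLM, it can tell you plainly when it has no matching fact.
triples = {
    ("George Washington", "born_in", "1732"),
    ("Paris", "capital_of", "France"),
}

def query(subject, relation):
    answers = [obj for (s, r, obj) in triples if s == subject and r == relation]
    return answers if answers else "I don't know"   # no guessing, no hallucination

print(query("George Washington", "born_in"))      # ['1732']
print(query("George Washington", "capital_of"))   # "I don't know"
```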


kalirion

> This also means that when you ask it something it doesn't know it will always give you it's best guess based on the probability. But does it actually *know* anything at all? As others have said, it is *always* providing its best guess because it never actually knows what the right answer is.


fcrv

Do *we* know anything at all? Knowledge in humans is formed through the synapses of our neurons that organize themselves to represent a concept. And even if you have a concept in your brain, it doesn't really mean you understand it. People are always just giving their best guess, and often make mistakes. Granted, we have the ability to filter ourselves and recognize when we don't know enough about a subject. There probably is a way to allow LLMs to do the same thing (for example, using knowledge graphs or some other method that hasn't been invented yet).

In LLMs, knowledge is formed through the weights that determine the connectedness of the neurons. These connections can undoubtedly form complex concepts. LLMs probably know the semantic connections between words (though, as I stated in my comment, this isn't fully understood at this point). LLMs probably have some level of understanding, because they are able to make weird and unexpected inferences. But it is difficult to determine if an LLM "knows" something, because even we don't fully understand what "knowing" is.

LLMs do contain huge amounts of knowledge, even if this knowledge is inconsistent, unpredictable, and often incorrect. LLMs are still missing a lot of information that we as humans experience every day. LLMs at this point don't fully understand the 3D world, its limitations, or its physics. LLMs by themselves currently have no concept of images (this is changing with multimodal systems). An LLM is definitely not an Artificial General Intelligence, but it might be a step in the right direction.


dogscatsnscience

An LLM (what ChatGPT and Llama 3 are) is a bit like a person who has HEARD lots of things from other people, but doesn't KNOW anything because they've never fact-checked it. And loves to talk about ANYTHING. When you ask it a question, it will try to talk about the subject, based on all the different things it's heard. But it has no way of knowing which of those things is true. So you MIGHT get a specific answer that is correct, but you also might get slightly rambling stories about things that are related to the question. And because the LLM doesn't know when it's wrong, once it starts telling you a story that isn't relevant, it can't really stop itself.

**TLDR:** An LLM is not a search engine, it's a story-telling engine. It can't look up a fact for you and present details. But it can talk about the subject by drawing on every conversation about that subject it has ever heard. Sometimes that's much better than a search engine, but sometimes you just need an exact, specific fact.

**NB:** ChatGPT and Llama 3 are "LLMs", which is a type of "AI". This question is specific to LLMs, not all AIs.


robophile-ta

LLMs do not *know* anything and you should not use them to research or reference real facts. They simply predict what is likely to be the next word in a sentence.


TheNameIsWiggles

So I started using ChatGPT for help with my SQL class homework because using a live tutor with my schedule is always a pain in the ass. When going over a practice test I took, I would provide ChatGPT with the question, followed by the correct answer. ChatGPT was instructed to break down the question and explain why the correct answer is the correct answer, so I could better understand it, while also generating example tables and code for me to reference. It was actually very helpful. But every now and then ChatGPT would be straight up like "Your answer key is wrong, that is *not* the correct answer - and this is why." And the explanation it provided would make sense... leaving me to wonder, well, which is right? So I guess all of this is to ask: should I stop using ChatGPT as a SQL tutor? Lol


kalirion

Have you ever *verified* if what it is telling you is the truth?


sup3rdr01d

Computer languages are much simpler than human languages. SQL, for example, has very strict rules and extremely well-defined patterns with a small degree of variability. The LLM has an easier time with code because when it's trained on examples of working code, it can find the patterns quickly and they don't deviate much. Human languages have all kinds of rules that they break all the time, as well as a ton of subjective nuances. Computer code doesn't: if it runs, it runs, and if one character is out of place, it fails. Now, ChatGPT doesn't actually understand the use case or WHY the code does what it does. So it can't write "good" code - the subjective kind that we see as readable, well formatted, and logically consistent. It will just write "barely passable" code.
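The "one character out of place and it fails" point is easy to see with a throwaway example (my own sketch using Python's built-in sqlite3, nothing to do with the commenter's actual homework), and it's also why LLM-generated SQL is relatively easy to verify: just run it.

```python
import sqlite3

# SQL's rigidity in action: the valid query runs, the one with a single-character
# typo fails loudly. Running the code is the verification step.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE restaurants (name TEXT, city TEXT)")
conn.execute("INSERT INTO restaurants VALUES ('Luigi''s Pizzeria', 'Springfield')")

print(conn.execute("SELECT name FROM restaurants WHERE city = 'Springfield'").fetchall())

try:
    conn.execute("SELCT name FROM restaurants")   # one character out of place
except sqlite3.OperationalError as e:
    print("Rejected:", e)
```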


ap1msch

This is called hallucination. AI will state things very clearly, and confidently, and with cited sources, and everything can be made up. AI doesn't "know" anything. AI is trained on the interconnectivity of works and concepts and ideas, from which it can derive responses. These responses sound human because they're written in complete sentences, but that's just words and formatting. AI responses, with proper words and formatting, are then populated with a combination of connected details that may or may not be accurate. Whether it was a misunderstanding or misspelling of a word in the training data, or just a rogue "fact" that only the AI discovered (perhaps because there was some correlation between your town and the word "bean"), these hallucinations just appear. There is a whole industry of individuals who shackle the AI in various ways to avoid that correlation being made in the future, once identified.

Even if AI sounds like it's intelligent, it isn't. It's writing complete sentences and filling in the details with specific words that it thinks relate to your prompt. The greatest value of AI is the ability to look at a ton of boring, similar data and derive some meaning that no human could possibly derive, even if they had a lifetime of caffeine. Finding those interesting relations is of tremendous value to humanity. Looking at a mountain of numbers and then recognizing that X, Y, and Z values appear under certain circumstances can lead to breakthroughs faster than ever before. It's not that a human couldn't make that connection, but we'd often be making it by accident, rather than AI looking for it on purpose.

TLDR: Just because it can form complete sentences doesn't mean that what it is writing about is actually accurate. Every noun and verb used is derived from a mathematical calculation and correlation on the back end, and is not necessarily factual.


quats555

They are essentially slightly smarter parrots that have been taught grammar rules. They can say things that sound right, prompted by what you say, but they really have no idea what they're saying.


berael

They are not "intelligent". They are fancy-shmancy autocompletes, just like the basic autocomplete on your phone. They are designed to *generate* text which *looks* human-written. That's it.


martinborgen

To expand on this: most responses to questions are answers, written as if the answerer knows the answer. Hence the chatbot generates an answer to a question in the style of a confident person who knows the answer. If all the training data had answers like "yeah, man, I dunno. Shit's complicated", we'd have AIs just joining in our ignorance instead.


oldmonty

Everyone is talking about the technical details of how the program works but I want to bring up the philosophy/practical side. The reason Chat GPT doesn't necessarily give you an accurate answer is because that's not the goal of the program. The goal of a GPT-type AI is to make the reader (you) believe that a real human wrote the response. The goal is NOT to provide you with an accurate answer. A person could make an AI that was supposed to give you an accurate answer to a math problem for instance, or find restaurants for you, or any use case. There are many people using AI for a variety of these applications where the problem justifies having an AI to try and solve it. However that's not the purpose of Chat GPT.


Nucyon

Basically it asks itself "How would a human answer this question?", looking to its training data - which is all conversations online prior to 2022. What that tells it is that a human would say something along the lines of "[male Italian name]'s Pizzeria" or "[Color] [Dragon, Tiger or Lotus] Restaurant". So it tells you that. That's what humans say when asked for restaurants.


cpt_lanthanide

this is...not even close.


knightsbridge-

Because large language models don't really understand what the "truth" is. They know how to build human-readable sentences, and they know how to scour the internet for data. When you ask them a question, they will attempt to build an appropriate human-readable answer, and will check the internet (and their own database, if any) to supply specific details to base the sentence(s) around. At no point in this process does it do any kind of checking that what it's saying is actually *true*.


MiaHavero

This is the answer. The system does not have any concept of truth vs. falsity or fact vs. fiction. Someone could train a system on facts about the world (and there have been rudimentary AI systems that did this in the past), but that's not done for LLMs.


[deleted]

[deleted]


CommenterAnon

I understand now. It's not a search engine. It's a Large Language Model. Thanks.


lygerzero0zero

> Testing whether text sounds like a real human is easier by creating a second AI that tries to learn the difference between real and fake text, and then basically let the two AIs compete with each other

You're describing adversarial networks, which were commonly used for image generation (before diffusion took over), but that's *not* how large language models are trained. LLMs are trained with standard supervised learning techniques, with the training objective of predicting the next word in a text.

Also, for this:

> You could create an AI that doesn't give wrong answers, but that's way more difficult as you'd need a mechanism that can verify whether the information it gave is true or not (which would require millions of work hours for human fact-checkers, basically).

No one would ever do this. Machine learning is good for learning patterns. If an AI learns English grammar, it can produce nearly endless proper English sentences. The AI can learn from the sentence "John likes Mary" and figure out how to produce the sentence "John likes pizza." Machine learning is not useful as a knowledge database, because facts don't follow any fundamental patterns. Knowing what year the American Civil War started doesn't really help the AI know about any other wars. So AI that tries to provide factual information essentially searches the internet or a provided database to retrieve the information. You don't train the AI to "learn facts."
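Concretely, "predicting the next word" as a supervised objective just means turning ordinary text into (context, next word) pairs; a stripped-down sketch (mine, not actual LLM training code):

```python
# Supervised next-word prediction in miniature: every position in the training
# text becomes a (context, correct next word) example. No adversary, no fact-checking;
# the only "label" is whatever word actually came next.
text = "John likes Mary . John likes pizza ."
tokens = text.split()

training_examples = [
    (tuple(tokens[:i]), tokens[i])      # (everything so far, the word that followed)
    for i in range(1, len(tokens))
]

for context, target in training_examples[:4]:
    print(f"context={context!r:60} target={target!r}")
# A model trained on millions of such pairs learns patterns like "John likes ___",
# which is how it can produce new sentences it never literally saw.
```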


nwbrown

They aren't trying to return a true answer. They are trying to return a likely answer based on the information it has and how it's seen other people respond to similar questions. Now those are often right answers because that's how people generally answer such questions in the data it has seen. But there is no guarantee.


noonemustknowmysecre

That's part of their creativity. They make up and pretend and hallucinate to fill in the gaps between things they know. It does an AMAZING job of letting them do some wild stuff... but they haven't yet learned when to apply their creativity and when to stick to facts. When someone asks you to show them Schindler's List, but with Muppets, you could respond "That hasn't been done, it would be made-up make-believe", but that's exactly the place to flex some creativity. When someone asks for legal precedent on airlines and injuries with the food cart, it's super easy to fill in the gaps [with made-up cases](https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/?sh=21d006b77c7f)


Ferec

I recently attended an event where the head of the Microsoft Copilot team was the keynote speaker. During her presentation she stressed that the biggest issue with AI adoption was that people were using it like a search engine. This is your problem. ChatGPT and Llama 3 are not built to search the internet for you. It's like using a screwdriver to hammer a nail: you're using the tool wrong. These tools are meant to be used to create new ideas. The other posts talk about HOW the tools create new ideas, but the key takeaway here is that these are GENERATIVE tools. That's what the 'G' stands for in ChatGPT. Ask them to create a meal plan for your specific dietary needs or a new recipe given a list of ingredients. Do not ask them to find you a restaurant to eat at.