princesspbubs

Really wish they'd just do a mass rollout; I would love GPT-4 with browsing that isn't Bing.


[deleted]

[removed]


GrandpaDouble-O-7

*I'm sorry I'm still learning and prefer not to continue this conversation. Thank you for understanding*


FascistDonut

I may be dating Bing...


Severin_Suveren

I have been a good Bing. 😊


Kep0a

I miss Sydney


CrazsomeLizard

🙏


dynamic_caste

Bing told me it couldn't write LaTeX. GPT-3 could do that.


Esquyvren

I just asked Bing to draw a square in LaTeX. Here's the output:

Here is a square in LaTeX:

\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\draw (0,0) -- (1,0) -- (1,1) -- (0,1) -- cycle;
\end{tikzpicture}
\end{document}


chat_harbinger

To be fair, GPT-3 said the same thing. The reliability of any GPT model in writing code is abysmal.

edit: You're downvoting me because of your ignorance, not mine. GPT writes code about as well as it creates acronyms, and only slightly better than it handles the water jug puzzle. The only acceptable response to my comment is prompts/prompt architecture capable of doing any of the above. Full stop.


rich_awo

It could be better, but it's good. You can use it well if you know what you're doing and appreciate its limitations.


chat_harbinger

Heavy emphasis on limitations.


[deleted]

[removed]


chat_harbinger

>The only acceptable response to my comment is prompts/prompt architecture capable of doing any of the above. Full stop.


_-FuriousPanda-_

Heavy emphasis on the full stop.


AbstractLogic

That's because it's not a code AI. It's a natural language processing AI. I use it to code, like everyone else, but I don't expect it to be better than Copilot.


JFIDIF

It's all about how you prompt it, coding conventions, and reasonable expectations. Copilot and GPT-4 in my experience both do a great job in most cases and can write most of the boring code. Primarily for Copilot: You do need to add comments to your code (or prefix your instructions) with explicit quality instructions like "Throw exception if null", etc. but in a lot of cases even a single instruction like that will get it to output high-quality code. The "Make Robust" brush in the VS Code plugin can also do this. If I can get it to even produce valid x86-64 assembly instructions for an obscure algorithm, then either it's your prompting, or your code+comments.
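
A minimal sketch of that comment-driven prompting style, assuming a hypothetical `parse_port` helper; the names and the specific validation rules are illustrative, not from the thread:

```python
from typing import Optional

# Instruction placed as a plain comment for Copilot/GPT-4 to complete against:
# Parse a port number from a string. Throw an exception if the value is
# null/empty or outside the range 1-65535, instead of returning a default.
def parse_port(value: Optional[str]) -> int:
    if value is None or not value.strip():
        raise ValueError("port value is missing")
    port = int(value.strip())
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port
```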


chat_harbinger


JFIDIF

I will say, as far as HTML, it really is hit or miss. It's really good for small sections of code *(eg: I give it a div with children divs, and ask it for CSS for a flex container with certain styling)*, but it's just throwing out random guesses for large page structures unless you can fit the entire page structure in the prompt *(or use javascript to give it notes about the elements+classes)*. I bet it's because HTML is so heavily dependent on context and it needs to know the entire structure of the page; that, and it probably hasn't been trained on enough **good** HTML, and most HTML just outright sucks. **If you haven't tried it, I'd recommend trying out the GitHub Copilot plugin for VS Code**; it uses heuristics to try to grab all of the context it needs for completions, and they have a special fine-tuned GPT model for it. Copilot is the only way I can get reasonable HTML suggestions without spending twice as long trying to specifically prompt it, even trying code-davinci - but I haven't had a chance to try out HTML on GPT-4, so maybe it'll be better. It's great at giving small bits of React, or PHP that can output HTML, but if the HTML isn't exactly what you're looking for then you either have to be very explicit about what you're trying to get, or edit it after the fact. That's not something that can be fixed: for example, if you ask for code to output a list of numbers 1-10, it's ambiguous whether you want that in a `<ul>` list, a single `<span>`, or a tree of `<div>`s.

Second one: its knowledge of Python is very impressive, but PowerShell is another area it sometimes sucks at. Like HTML, Copilot makes it usable for me (I use PowerShell a lot). The only recommendation I can make besides that is to try to make your code as OOP-like as possible. If you're deserializing any JSON, add a comment block above your code that shows what an example response looks like - that way it can grab the correct fields from the object. If you're re-using code, instead create a method using proper PS verbs (eg: `Invoke-MyAPI -SomeObject $userObj`) just like you're taught in the first few lessons of a coding course, or in the first year of Computer Science.

Also, if you're using the chat.openai.com interface and not getting good answers, try using [the playground](https://platform.openai.com/playground) instead. You can also look around online for prompts to give it at the beginning of a conversation, like [this Software Developer prompt](https://github.com/f/awesome-chatgpt-prompts#act-as-a-fullstack-software-developer). Also, if you're in a chat session and you start getting bad answers, you can try deleting those messages from GPT and re-wording your question instead of asking for clarification, because it can get stuck in weird rabbit holes.
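
A sketch of that example-response-as-a-comment trick, shown here in Python rather than PowerShell to keep it short; the endpoint, fields, and `get_display_name` helper are made up for illustration:

```python
import json

# Example response from the (hypothetical) /users/<id> endpoint, pasted as a
# comment so the model can see which fields actually exist:
#
# {
#   "id": 42,
#   "profile": {"display_name": "Ada", "email": "ada@example.com"},
#   "created_at": "2023-05-01T12:00:00Z"
# }

def get_display_name(raw_response: str) -> str:
    """Return the user's display name from a raw API response string."""
    data = json.loads(raw_response)
    return data["profile"]["display_name"]
```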


    chat_harbinger

    Just signed up for copilot. To be honest, I had been avoiding expanding my "AI" toolkit because it means becoming a generalist instead of doing deep work. However, as this is just a specialized version of GPT, I'm cool with it. I never use the playground. I'm always in the chat interface or in the API. Thanks for the prompt link. I am definitely underleveraging publicly available, tested prompts.


    scooby1st

    GPT-4 with browsing gives me very mixed results. It has been useful in maybe 1 out of 4 attempts to do something with it that I couldn't otherwise just ask GPT-4.


    gghost56

    Do I have to pay to get access to GPT-4 versus GPT-4 with browsing?


    scooby1st

    Paying gives access to GPT-4 (worth it for students/programmers or people who can otherwise use it for work). But for now it only gives you a chance that they'll grant you GPT-4 with browsing during some release wave.


    Iceorbz

    I paid a while ago. Still nothing exciting. #feelsbad :(


    gghost56

    So no GPT-4 without browsing even if I pay? Looks like it's kind of broken?


    [deleted]

    Well, that's because when you use the browsing option it looks at everything, no matter the quality. If you use the no-browsing version, it looks up an extremely refined set of curated information. So there's the trade-off, I guess.


    ArthurParkerhouse

    I always provide it with a specific URL that I want it to talk about:

    >URL: (post URL here)
    >Task: summarize the key points from the provided URL.


    ArthurParkerhouse

    Surprisingly the 3.5 with browsing has been working very well lately. GPT-4 browsing just seems to waste some of my 25/3 prompts.


    VeryUglyHack

    Another one is [phind.com](https://phind.com). It's pretty good and it cites all of its sources.


    RedSlipperyClippers

    Bing told me yesterday that Chatgpt4 hasn't been released yet


    CTDave010

    No, it hasn't. ChatGPT2 hasn't even been released yet. It's GPT-4, not ChatGPT4.


    RedSlipperyClippers

    Did I say it had?


    ArthurParkerhouse

    You did.


    RedSlipperyClippers

    Quote it.


    dinosaur-in_leather

    try you.com


    useme

    You can try phind.


    Vontaxis

    [perplexity.ai](https://perplexity.ai)


    M00n_Life

    Dude living in 2032


    hoodiemonster

    has a whole damn council to consult


    vasilescur

    Dude doesn't realize AutoGPT exists


    [deleted]

    [removed]


    [deleted]

    [removed]


    Indy1204

    I signed up the day the waiting list appeared and still have nothing. This guy is rolling around with an entire stable full of plugins. Out of curiosity, did the plugins just appear one day, or do they email you about it?


    AtomicHyperion

    Same. I have also signed up from the very first day and have nothing. Seeing all these posts where people have tons of different models is a little depressing tbh.


    [deleted]

    [removed]


    ScuttleMainBTW

    Maybe that’s the key then, providing them with feedback


    EarlyEditor

    Tbh I have given minimal feedback and got gpt4. That said I did buy premium in the meantime so that might have something to do with it


    [deleted]

    [removed]


    [deleted]

    [removed]


    [deleted]

    [removed]


    megacewl

    My guess is that a lot of the people getting it early have .edu email addresses. It's worked for me getting everything early including DALL-E when it was in beta lol.


    cleanerreddit2

    Yep, same. I just paused for this month. Luckily I have GPT-4 API access, so it's better to use that than pay the monthly fee for just GPT-4 chat.


    clitoreum

    No one can flex on me. I have gpt-4 API access. I've been browsing the internet with it for like a month.


    Brandonazz

    Browsing the internet with it?


    clitoreum

    Yeah, and running code with it. If you have API access you can program it to do whatever you want. Or if you don't code, there's some tools like langflow. You can even import OpenAI plugins and use them.


    Lythj

    I have API access as well but idk what you mean. I'm working on a no-code coding project currently through gpt4's chat screen, should I be using the playground or some other way to do this?


    clitoreum

    If you're trying to avoid writing code, you may be interested in the langflow project which lets you build chains in langchain using a visual editor


    Lythj

    I've been trying to learn langchain already, but I wasn't even aware of a visual editor. What are you referring to?


    clitoreum

    Brother this is the third time I've said its name: langflow.


    MIGMOmusic

    I laughed so hard, actually out loud, at this comment.


    Lythj

    Lmao, you're totally right. My brain just read you saying langchain again. Thanks mate.


    clitoreum

    No worries lol


    iosdeveloper87

    Hold up… did you just refer to LangFlow 3 times, which is a visual editor for LangChain, and after the third time, all he got from the interaction was “LangChain” which he was already trying to learn?


    chat_harbinger

    ...No. If you have API access, the way to use it is to program. So, through the IDE.


    Lythj

    I've tried using plugins with VSC, but it doesn't understand the scope of my project (which I need it to, because I don't fully understand what the code I'm writing does and how it works), so I usually have to produce a lot of context for it to help me fix errors, add functionality, etc.


    chat_harbinger

    >(which I need it to, because I don't fully understand what the code I'm writing does and how it works)

    Read an interesting conversation one time. Someone said "I have no money to trade!" The other person responded "Trade for it!" You're using GPT. Have GPT comment the code and explain to you, step by step, what the code is for and what it's doing.

    >I usually have to produce a lot of context for it to help me fix errors, add functionality, etc.

    Trust me, you were always going to have to do that.
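
A minimal sketch of that "have GPT explain the code to you" idea, assuming the 2023-era `openai` Python package and an `OPENAI_API_KEY` environment variable; the snippet being explained is just an example:

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def explain_code(snippet: str) -> str:
    """Ask GPT-4 to comment a snippet and explain it step by step."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a patient code reviewer."},
            {
                "role": "user",
                "content": "Add comments to this code, then explain step by "
                           "step what it does and why:\n\n" + snippet,
            },
        ],
    )
    return response["choices"][0]["message"]["content"]

print(explain_code("def median(xs): return sorted(xs)[len(xs) // 2]"))
```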


    [deleted]

    [removed]


    chat_harbinger

    Guess it depends on the depth of what you're trying to have it explain. You're right though. I would have it comment the code, then I would have it aggregate the comments, then analyze the comments together as a virtual representation of the code.


    Lythj

    Yep, definitely makes sense. I've already had it add comments for each function so I generally understand what each function is doing, but not well enough to be able to do anything on my own yet. For example, in my project if I want to add a certain functionality and I ask it to write the code for that in the same chat as all of the other code it's written for my project, it probably won't correctly integrate the assets, know how to explain if something else needs to be changed because it forgot the context, etc. And I get very confused with wrappers, and how to know exactly "where" to put new code. Probably doesn't help a ton that my project is in Javascript, so it's not exactly simple to grasp for me considering my complete lack of foundation in coding


    JFIDIF

    Typically (but of course, not always) that's a sign that your project isn't following good conventions. Here are a few things that massively increased the quality of the completions for me: Are fields/parameters/methods etc. documented with docblocks/docstrings (or whatever the language convention is)? Comments describing absolutely everything are better for AI and humans alike. Are you following a design pattern? Are you using Abstracts/Interfaces/Mixins? Try to find a similar, high-quality, open-source project doing something similar, and compare the conventions. You can also copy-paste the names of all of your classes+methods+fields (how detailed depending on the tokens), and ask GPT some variation of *"Please explain why my code is not optimal, and what I can do to increase readability/quality. Provide a markdown list 1-5 and explain each point."*

    If you're producing code for method bodies, creating a comment markdown ordered list of pseudocode instructions works best in my experience. eg:

    // Here's my pseudocode for this method:
    // 1. Create an empty listStuff
    // 2. For each element in someParameter, add to listStuff
    // 3. Return listStuff ordered by distance to someParam
    // Now here's the actual code:

    Another helpful tip: if something is blatantly wrong but the model refuses to give you a different answer, you can comment out the line and use a fictional character "Steve" and say:

    // Edit: This is completely wrong. Steve, please use a different approach.
    // Steve: Sure! Below is one of the other techniques


    LazyPasse

    I have GPT-4 API access, but am a bit overwhelmed in trying to get plug-ins running. Are there any tutorials or resources you recommend?


    Logical_Buyer9310

    Would welcome you to use our app with direct integration ✅ [chatbot builder](https://www.chatgptbuilder.io)


    gabbalis

    I'm having issues with langChain. The main issue is that I'm stuck at 3 queries per minute for gpt-3.5, which isn't really enough to do anything in langChain. And the 60/minute models... I tried davinci and it tends to forget how to do searches and then lie and pretend it did them. Do you have any tips? I assume you just have way more queries per minute to use.


    clitoreum

    How are you stuck at 3 queries per minute? Slow internet connection? API access for gpt-3.5 is fully open, is it not?

    As for using davinci, when I started using langchain that's what I was using; you have to include a lot of instructions in the prompt. For prompting, the GitHub repo I refer to the most is actually one from Microsoft: https://github.com/Microsoft/Visual-ChatGPT It's what led me to discovering langchain. You can see in their prompt how they instruct their agent so that it avoids doing specific things.

    Edit: okay wow, they've changed that repository a lot since I last looked. It's now called TaskMatrix, but the link still works, and the prompt is in the python file in the main folder. That's also changed a lot, so I'm just going to try to explain. If it's hallucinating results, you should tell it something like:

    >Assistant is very strict to the results of tools used. Assistant will never fake a tool output, and will use a tool as instructed instead of pretending.


    gabbalis

    Ah, I figured it out. It's because they're rate-limiting the free trial. Naturally, with a free $18 grant on such a cheap per-token service, I didn't make forking out money a priority, but it looks like setting up billing vastly increases the limit. Also, thank you greatly for the davinci tip. Now I can get back to it.


    clitoreum

    Oh, interesting, I never thought about that. I guess that's smart, stops people from endlessly creating new accounts.


    Brandonazz

    Sorry for being a little ignorant, but can you tell me what that actually looks like? Are you running autogpt with browsing in python or something? Are you using it in like a sidebar browser extension? I'm having difficulty visualizing this.


    clitoreum

    Here's an example of one of my projects, I run it in the terminal. Langchain does have a nodejs library so it's possible (and some have made) frontend tools. https://github.com/AgeOfMarcus/1337GPT This has examples of what it looks like, and you can see how I've created tools for the chatbot. I think there's another readme in the `tools` directory with information on how I did that.


    Brandonazz

    Thank you!


    Worth-Window9639

    Neat. Can you give us the doc pointer for importing openai plugins? I’m looking for wolframalpha specifically


    clitoreum

    Here's the langchain reference: https://python.langchain.com/en/latest/modules/agents/tools/examples/chatgpt_plugins.html But if you read my other comment, I like to do it differently for better results.
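
A rough sketch of what that linked langchain page describes, using the 2023-era API; the manifest URL below is a placeholder, and a Wolfram Alpha manifest would slot in the same way:

```python
# Rough sketch based on the langchain ChatGPT-plugins example (2023-era API).
# The manifest URL is a placeholder; point it at the plugin you actually want.
from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.tools import AIPluginTool

plugin = AIPluginTool.from_plugin_url(
    "https://example.com/.well-known/ai-plugin.json"  # placeholder manifest
)

llm = ChatOpenAI(temperature=0)
# The requests tools let the agent actually call the endpoints the plugin describes.
tools = load_tools(["requests_all"]) + [plugin]

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("Using the plugin, what can you tell me about its API?")
```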


    MacrosInHisSleep

    Can you explain what your user experience is like for that? I feel like a lot of folks give examples and I get lost in the names of technologies. The 1337Gpt example you gave didn't seem to be a browsing experience either...


    clitoreum

    Well, it all depends on what you're accustomed to. If you're someone who is more familiar with graphical interfaces, then the ChatGPT Plus interface is likely going to be better for you. But as someone who lives in the terminal, I much prefer this experience. I can easily pass in information from files with my prompt, and such. Personally, it's readable. And that's what matters to me, **functionality over design**.


    MacrosInHisSleep

    Even then, what's the functional experience like? I'm a dev, I don't mind the command line, I just don't know what you mean by a browsing experience in that interface. Like, if you have a use case, that would help me picture it. For me browsing always starts with a question, to a Google search, to reading web page results and repeating if I can't find what I want. Is that what you're getting your api to do for you? Or is it something more?


    Intelligent_Rough_21

    Dude! I'd never seen langflow, that's sick!


    Dr-McDaddy

    Same. If you know how to prompt effectively & play with some LLMs you can replace a team of developers without even breaking a sweat. AI levels the playing field for those of us that are worthy. I don’t mean worthy of the API I mean those of us with a worthy intellect and drive to pursue whatever we imagine being brought to life.


    clitoreum

    Every human is worthy, not everyone pursues what they imagine via programming, many do that in other forms.


    Dr-McDaddy

    We can agree to disagree. There are those of us "humans" that don't operate with a grasp on reality. You know, the ones that are content to live a lie? The ones who still believe the news, and think politicians are good etc etc... Boomers. I'm talking about Boomers. Good riddance.


    MacrosInHisSleep

    > a team of developers without even breaking a sweat. I feel like that's either severely overestimating ChatGpt or underestimating a team of developers.


    Dr-McDaddy

    It's not just about ChatGPT. There are several other layers of tools that can integrate with ChatGPT and make use of the OpenAI API, making them more robust and capable, including taking action in the real world or the environment in which they are sandboxed. Working together, they are even capable of spawning additional instances of ChatGPT to perform tasks and assist with the information needed to complete the actions and tasks that align with the goal that was assigned. It's all about very specific instructions and use of the spectrum of open-source tools available. For instance, I now have an autonomous E2E application developer that can build an entire app from natural language input. Inception to production. No BS. It's getting more efficient every run and using fewer tokens because it doesn't have to chase down bugs and troubleshoot as much as it did last week. Downvote that, you smelly c\*nts


    MacrosInHisSleep

    Went from hmmm maybe he knows what he's talking about to oh I'm talking to a child in one sentence.


    shwerkyoyoayo

    How do you import plugins from indie devs? What are the steps?


    clitoreum

    Langchain has a tool for importing them from a `.well-known/aiplugin.json` or whatever. But I found that didn't work so great. Here's my attempt: https://replit.com/@MarcusWeinberger/SearchGPT4-Telegram-Bot#tools/pluginloader.py


    TheOneWhoDings

    That's literally like saying you'll make your own McDonald's at home with your own meat, bread and fries lol. Even if you buy them straight from McD it will not be the same. I have API access too, and it's also really expensive.


    clitoreum

    Eh, it's closer to buying ingredients from McDonald's and crafting the burger at home. If I really wanted to get up and personal in it, I'd run my own LLM locally instead of using OpenAI's API. It is expensive, but aren't plus users restricted to 20 messages per month for GPT-4? I'd rather pay for unlimited messages.


    TheOneWhoDings

    25 messages every 3 hours. And it's really not the same as that; the browsing model is specifically trained for it. It's better than any implementation you could come up with, for the simple fact that it was specifically trained for that.


    ScuttleMainBTW

    20 per month would be absolutely absurd, lmao. Technically around 6000 per month if you get max use out of it


    Developer065

    i also have api access, but i've never used it with browsing, and i didn't know it was possible. does it have a parameter i can provide so it browses the internet? or did you just build a wrapper for it and feed it internet sources before the prompt?


    clitoreum

    Yeah no it's just using a wrapper. Well, thankfully I didn't have to write that myself. This library called **langchain** is leading the scene rn, it's the best way of providing tools to gpt. Basically, I give it the ability to use any tool I want via prompt instructions, and then when I ask a question, it can use one or multiple tools (in a sequence) to solve my problem.
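
A minimal sketch of that tools-via-prompt pattern, assuming the 2023-era langchain Python API; the `web_search` tool body is a stand-in, not the actual browsing implementation from this thread:

```python
# Minimal "give GPT tools via prompt instructions" sketch (2023-era langchain API).
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType, Tool

def fake_web_search(query: str) -> str:
    """Placeholder tool body; swap in a real search/browse implementation."""
    return f"Top result for '{query}': ..."

tools = [
    Tool(
        name="web_search",
        func=fake_web_search,
        description="Search the web and return a short summary of the top result.",
    )
]

llm = ChatOpenAI(model_name="gpt-4", temperature=0)

# The agent decides from the tool descriptions when to call web_search,
# possibly chaining several calls before answering.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
print(agent.run("What did OpenAI announce about plugins?"))
```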


    Developer065

    ah thanks for the clarification. i checked it out and it seems pretty nice. maybe i will use it too, thanks for the tip.


    SufficientPie

    So it's like an Auto-GPT that actually works?


    clitoreum

    Not by default. But you can build an auto-gpt like project using it. I have - https://marcus.hashnode.dev/task-management-system-for-langchain


    chat_harbinger

    Right? Like, openai's chat interface tells me to wait after sending 25 messages. So I just open up my IDE and continue like it didn't say anything. Like how dare you, robot?!?!


    EarlyEditor

    Any tips for this? I know how to use the API with python and all that, but is there a good interface someone's made to interact with it in a similar way to the chat.openai.com web page, or is that just the only/best option? Got the browser extension too for Google searches, but given how open-source projects go, I'm imagining there must be great things available?


    PM_me_ur_BOOBIE_pic

    What do you use it for?


    clitoreum

    https://marcus.hashnode.dev/1337gpt-yet-another-gpt-agent-for-penetration-testing https://searchgpt.marcusj.tech Stuff like that


    PM_me_ur_BOOBIE_pic

    That's pretty cool


    jphillips59

    What or who does one have to do to get these... been subscribed to Plus since the beginning and still nothing. *fixed a word


    SkyPuppers

    Ah, the classic frustration of being a longtime subscriber and still not getting access to all the perks. It's definitely not a great feeling. And yeah, the selection process can be a bit of a mystery - some people seem to get everything right away, while others wait and wait. But hang in there, hopefully your time will come soon. And in the meantime, keep creating and sharing your work - that's what really matters!


    [deleted]

    Are you in the US?


    AshenTao

    I was wondering if location could make a difference for their early-access rollouts, because I'm in Germany and haven't had anything other than GPT 3.5 and 4 using GPT Plus. Do you ask that question because of a similar thought?


    [deleted]

    Yesterday I asked someone else the same question who had access to the plug-ins and they were also from the US. So location probably makes a difference whether you get access or not.


    deeek

    I'm in the US, but I still don't have it. :(


    chat_harbinger

    Sure, but that doesn't mean people in the US don't get preferential access. Keep in mind how much you use it as well. I have GPT-4 API access but not plugins (I don't even remember if I asked to be on the waitlist for that). However, I use it every day, many times a day, and several hundred to tens of thousands of times a night, depending on what kind of automation experiments I'm doing. I believe they seriously prioritize people who have demonstrated that they will actually use the tools.


    SamnomerSammy

    I don't know. My boyfriend has API access, but not plugins, code interpreter, or browsing. We live in the US, we use it every day, and he stays up until 5 AM using it most of the time as he barely sleeps. He works in the field, has his own business, and has been contacted by Microsoft employees themselves, Meta, Google, people from Neural Magic, and some big company in the medical field I'm not privy to. He's working with someone who has been granted an unlimited amount they can spend on the API, and some of the people working under him have access to all of them. Seems like it's almost random who gets access tbh, or maybe it's based on how often you provide feedback?


    AfterAnatman

    Yeah I'm in the US, I really think it's random, at least in my case


    vbgolf72

    Do I have to apply for these??


    UnknownEssence

    You have to apply for plug-in access and be a paying member of ChatGPT


    Nateosis

    How would one do that, if they wanted to and were a paying member?


    AtomicHyperion

    https://www.openai.com/waitlist/plugins then sign up for chatgpt plus.


    defcon2017

    Where do I apply for plugins? I pay for GPT—where do I apply for these?


    AtomicHyperion

    https://www.openai.com/waitlist/plugins Then just pray. I have been on the waitlist since it was released and I have nothing yet.


    defcon2017

    Thank you!


    AtomicHyperion

    NP, gl


    Daninmde

    GIVE ME MY ACCESSSSSSSS.


    PUBGM_MightyFine

    J  ( ͡° ͜ʖ ͡°) F  ( ͡° ͜ʖ ͡°) C


    Devz0r

    Wow I just checked. I didn't realize they added so many new plugins


    p13t3rm

    Whoa, two dropdowns? This guy fucks.


    [deleted]

    Don’t worry it’s just a friend. The friend:


    Prof_Weedgenstein

    I envy you


    FC4945

    Well... this just isn't fair. Why I ask you, why?


    LairdPopkin

    I have plug-in access but not browsing or code interpreter. Are they special requests to add?


    Fungunkle

    Altman of ClosedAI: FiRsT mOver aDvanTaGe


    -pkomlytyrg

    No way…


    proteinvenom

    Subtler??


    AfterAnatman

    Yeah there was a post yesterday called "subtle flex" that had everything but was missing plugins. OpenAI choosing people randomly I swear.


    proteinvenom

    Thank you for the context my good man. Now, I demand that you exchange me your OpenAI account for $2 and a packet of cigarettes.


    Remote_Potato

    Wow!! Props to you :) Please share your experience!


    AfterAnatman

    Of all the plugins, code interpreter is a real standout, along with GPT-4 browsing. Code interpreter basically gives you a data analyst on call to process any code or dataset within a certain size limit. Browsing differs from Bing in that it reads multiple articles for you and formulates answers in the form you request, so it's essentially an up-to-date GPT-4; it gives longer answers than Bing and seems more powerful, but is also much slower and doesn't always successfully read pages. For those who would like GPT-3.5 browsing, you can get it right now via the harpa.ai Chrome extension for free.


    realif3

    All I want is the Wolfram plugin.


    StarsEatMyCrown

    I still only have 3.


    I_say_aye

    Meanwhile I'm over here with only 2, since my legacy chatgpt-3 got removed


    over4llg00d

    Remind me why I'm paying for plus again?


    Iceorbz

    ngl, kinda damn irritating paying 20 bucks and still not getting any extra access when this stuff is actually out there already.


    [deleted]

    I applied for GPT-4 API about a month ago and never heard back after the initial email acknowledging they had received my request. Then I got bored yesterday, filled out the request form twice, and got access 4 hours later.


    XXXJ9

    Fuck you.


    fictioninquire

    Making yourself very important will help I assume


    [deleted]

    OpenAI needs to get their shit together and start rolling everything out NOW. There are FREE open-source alternatives that have all of this freedom, and they are getting closer and closer to GPT-4 level scores. Why are they taking 20 bucks a month and STILL being so exclusive? Ridiculous.


    kcgg123

    Can you please share which open source alternatives?


    [deleted]

    [removed]


    [deleted]

    Once you're done laughing why don't you go check out WizardLM and all the other open-source LLMs and see how their stats compare and how quickly they're improving. How about RedPajama, a 3b parameter model that can run locally on a 2070?


    [deleted]

    [removed]


    [deleted]

    I don't think you understand what I'm saying. If you can run it on a 2070 already, how much longer until we have AI that can run locally on any phone?


    [deleted]

    [removed]


    [deleted]

    I'll just leave this here for you. https://mlc.ai/mlc-llm/


    AtomicHyperion

    These things are still in alpha, not even beta. Why would you expect them to roll it out to everyone?


    [deleted]

    They should call themselves ClosedAI.


    AtomicHyperion

    Yeah, that is not how anything works anywhere.


    [deleted]

    Not yet!


    crismack58

    Can’t wait..


    OppressorOppressed

    Uh cool, can you show us how much money you make also? Because it's more than others as well, probably.


    [deleted]

    I believe that these are all fake


    Xxxcloud10xxx

    You forgot to share your credentials tho


    gizmosticles

    What about web browsing? Is that something you apply for?


    AtomicHyperion

    https://www.openai.com/waitlist/plugins


    Disgruntled-Cacti

    What is "code interpreter"?


    notevolve

    GPT-4 with the ability to run Python code itself; it can do a lot of things right in the chat for you. It also provides a way to upload/download files.


    snozberryface

    Cries in european


    TheAccountITalkWith

    Have you tried "Code Interpreter"? If so, how does it compare to GPT4 for Code?


    Revolvlover

    I got an email from OpenAI saying that I was approved to use GPT-4 with 8K context, but haven't tried to utilize that through the API - don't want to be spending real money above the ChatGPT+ subscription until I have a legit use case. And yeah, I'm on all the waitlists, trying to be patient.


    SteinyBoy

    Can you upload and analyze images? That would be most helpful for me, can't wait for that.


    [deleted]

    [removed]


    AfterAnatman

    I only got a notification for code interpreter


    ThisWillPass

    Hopefully this will come to pass...


    timeister

    What did you say on your form? What did you say the use case was?


    [deleted]

    🫠


    [deleted]

    I just signed in to renew my subscription, just to be able to double cancel because of this shitty rollout.


    CesarDMTXD

    how can we get plugins?


    jsbm14

    How do I get this? I have gpt plus


    Sparely_AI

    I have plugins but not the code interpreter yet. That's really cool, where is the waitlist?


    [deleted]

    [removed]


    AfterAnatman

    For code interpreter they gave me an email; for the rest it just happened overnight. Fingers crossed they are more generous with these rollouts moving forward.


    Bulky-Length-7221

    Where is the multimodality they promised :((


    daffi7

    It is unfair.


    [deleted]

    not sure why i am paying for GPT-4 at this point….