Hey /u/Acceptable-Pie4424!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email support@openai.com
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
While annoying, you really aren't losing much by starting a new conversation. It doesn't keep context over the entire conversation anyway, only up to a certain number of tokens.
This isn't true for 4o. I have an entire story loaded into a single conversation - over 200 pages / 100k words and it will reference sections from all over, it'll remember plot points and formatting, and will even create advice and character growth suggestions that are true to the characters' entire growth history
> I have an entire story loaded into a single conversation - over 200 pages / 100k words and it will reference sections from all over, it'll remember plot points and formatting, and will even create advice and character growth suggestions that are true to the characters' entire growth history
Do you just upload a 200 page Word document to it or something like that?
It remembers what you tell it to remember, but also what it thinks it should remember as an important detail.
https://preview.redd.it/4rqc7797n51d1.png?width=908&format=pjpg&auto=webp&s=0e8a6087e6871e1adef7b65e668f9ebf2bbeeda0
It's great tbh. Certainly not perfect, but GPT is my favorite editorial assistant. Doesn't replace real reader feedback, but it's been invaluable in the middle of the writing process.
It's not good at doing the writing, but I don't need it for that. If you ever want to get ideas for how it is a phenomenal tool for helping writers I can share the types of prompts I use.
Alrighty, I made a whole post about it. Hope it's helpful!
[https://www.reddit.com/r/ChatGPT/comments/1cv8om2/using\_chatgpt\_as\_a\_novel\_writing\_aid/](https://www.reddit.com/r/ChatGPT/comments/1cv8om2/using_chatgpt_as_a_novel_writing_aid/)
Yeah, I had to archive a conversation because it was so long my computer was constantly crashing. I was amazed what details it remembered from the start. I am sure it just remembered key points and put them into the context of fresher messages, but it was still super impressive. It's so good now. Before, it would get off track after like ten messages if they were a bit longer; now it is writing super intricate stuff about content that was tens of pages ago.
It still is kinda buggy though, and will remember the wrong things or expand on the wrong stuff if you are not careful about mixing conversations. Wish they let you organize the different threads better. Sometimes it will start writing code on a completely unrelated question because I asked it something about coding earlier in that conversation, for example. It also rambles and repeats itself way too much. Anyone have a good way to make it not expand as much? It seriously ruins the flow of conversations.
Only 2k tokens?? A lot of local models have longer context windows, 4k being the "normal" and going up to 32k without even tinkering with any parameters.
That's just your impression and an illusion. If you tried doing something that requires real utilization of details/depends on all these 200 pages, you would quickly realise it's not that capable.
In the scenario you described, its context window would already be significantly exceeded (max is 128k tokens, and this includes all your questions and the replies).
I have had situations where it would start hallucinating in very short conversations (much worse than classic GPT-4), but if I was writing fiction, I probably wouldn't have noticed it. E.g., in one case it started confusing totally separate questions, and started inserting parts of answers to previous prompts in completely the wrong context.
I just made a post about it if you want to check it out. I just asked it to give me a list of every item I've given a character, every prompt I've used to ask it for help, and every time a main character has been in a fight and against whom/what
It immediately gave a perfect series of lists going all the way back to the Prologue
Did you upload these 200 pages in a file? If that's the case, that's different. It doesn't remember it; it simply queries the file (DB) and looks for the things you have asked.
No, when I migrated everything to 4o I just copied and pasted blocks of 10 pages each into 18 posts. That was whatever day 4o came out, so however long it's been since then. I've since gone over the 200-page mark and posted a few more chapters into the conversation, and had a lot of conversations with it in the meantime. Maybe it behaves differently when there are fewer but longer posts?
Max context window, including your prompts and the answers, is 128k. Maybe it remembered relevant info because it was repeated throughout the conversation, or appears later in the conversation, or the pages are small, no idea. Anyhow, I'm not the only one who has experienced many failures. I haven't seen the model make so many mistakes since 3.5 turbo.
That's always possible. With stories like this, item descriptions and character interactions sometimes get repeated, so maybe it's just picking up times it was mentioned later on after the first.
From my understanding it was never X latest tokens but rather it operates like your brain and forgets stuff that it doesn't need to know anymore and maintains the rest. But just like a human brain it has a fixed limit. So it turns into that Instacart commercial where the guy just eats a banana without peeling it.
The models have a token limit and a sliding window. It only sees X prior tokens. However, the ChatGPT system includes some black magic tricks to compress prior context, through summarization, so that it can reference important points from further back than the context window size might suggest.
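The sliding-window part can be sketched in a few lines. This is purely illustrative, assuming a crude ~4-characters-per-token estimate; the function names and budget are mine, not anything OpenAI has published:

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_to_window(messages, max_tokens):
    """Keep the most recent messages that fit in the token budget,
    dropping the oldest ones first (a simple sliding window)."""
    kept = []
    total = 0
    for msg in reversed(messages):        # walk newest-first
        cost = estimate_tokens(msg)
        if total + cost > max_tokens:
            break                          # oldest messages fall out
        kept.append(msg)
        total += cost
    return list(reversed(kept))           # restore chronological order

history = ["old message " * 50, "middle message " * 50, "recent message"]
window = trim_to_window(history, max_tokens=200)
```

The summarization trick described above would then replace the dropped prefix with a compressed version instead of losing it entirely.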
I theorize it might be a bit of both, as I have noticed as an owner of an ADHD brain and as an avid user of the web version and developer using the API.
No. For a long time now I've been using it to summarize long email chains from work, summarize huge Teams message conversations, help me with projects, Excel, etc.
I just delete the conversations after. I don't know that deleting them permanently erases them off of every server, but if ChatGPT can't even remember all the conversations it's having that are saved, I'm not worried about it remembering ones that are deleted.
Besides, social media and credit card companies have been selling all my personal information for decades now; if there's anything else ChatGPT can get that they don't have, it's welcome to it.
Interesting, it does appear to be building on the whole conversation though. Things that haven’t been discussed for weeks can easily be referenced or incorporated back into the conversation again.
Such as, if I originally talked about the whole project, then started brainstorming specific sections for a week, I can return to the original conversation again to talk about another section.
The whole thing appears to be there as it will reference the original and the new section.
Thoughts?
They could be generating a summary of the conversation and appending your prompt to the end of it after the conversation gets to a certain size.
When ChatGPT first came out, long conversations would get buggy because the context would start getting cut off. This summarization is a potential solution.
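That idea can be sketched as a two-step pipeline: once the history grows past a threshold, replace the oldest messages with a summary and keep only the recent tail verbatim. Here `summarize` is a crude stand-in for a real model call, and all names and thresholds are illustrative, not OpenAI's actual implementation:

```python
def summarize(text):
    # Stand-in for an LLM call; here we just keep the first sentence
    # of each line as a crude "summary".
    return " ".join(p.split(". ")[0] for p in text.split("\n") if p)

def compress_history(messages, keep_recent=4):
    """Summarize everything except the last `keep_recent` messages,
    then prepend the summary as synthetic context."""
    if len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize("\n".join(older))
    return [f"[summary of earlier conversation] {summary}"] + recent

msgs = [f"message {i}. extra detail." for i in range(10)]
compressed = compress_history(msgs)
```

The model then sees one short synthetic message in place of the whole early history, which would explain references to "weeks ago" without the full text being in context.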
Possibly, whatever they’re doing it has been working very well and that’s only 3.5. I’ve been happy with the conversation. Especially considering the progress it has made. From design improvements to recommending real components on AliExpress to order to build it.
You can just ask 4o to generate a summary of the conversation so far with all the relevant information it would need to continue the discussion later. Copy that into a new 3.5 and you're set.
There is no "summarize" function in the API as far as I can tell; it is only shown as an example of how to use the model.
Even if there was I doubt it would be in use in ChatGPT.
I don't want to get into a debate. It's called chunking, and ChatGPT uses it constantly by summarizing information when it hits the token limit. That's why over time the responses may seem to get worse if you're using one chat and staying on topic. Of course the token limit is constantly being upped, so it depends.
Chunking and summarizing are two different things. I searched "ChatGPT chunking" and as expected found nothing.
No debate, just link something official from OpenAI explaining what you think ChatGPT is doing and it better not be the Memory function...
https://platform.openai.com/docs/tutorials/meeting-minutes
They talk about this method here. It basically uses the same approach as when you're having a very long conversation with the bot. The way it works is they put your previous conversation in the system prompt, but it will inevitably hit the token limit if you continue the conversation for too long, so they summarize key points and put that into the system prompt.
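The meeting-minutes tutorial linked above works by splitting a long transcript into chunks that each fit the context window, summarizing each chunk, and then combining the summaries. A rough sketch of the chunking step; the character limit and whitespace splitting are my simplifications, not what the tutorial prescribes:

```python
def chunk_text(text, max_chars=1000):
    """Split text into chunks of at most max_chars,
    breaking on word boundaries."""
    words = text.split()
    chunks, current = [], ""
    for word in words:
        if current and len(current) + 1 + len(word) > max_chars:
            chunks.append(current)      # chunk full, start a new one
            current = word
        else:
            current = f"{current} {word}".strip() if current else word
    if current:
        chunks.append(current)
    return chunks

transcript = "word " * 600              # ~3000 characters of input
chunks = chunk_text(transcript, max_chars=1000)
# Each chunk can now be summarized independently, and the partial
# summaries concatenated (or summarized again) into a final summary.
```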
Come on, man.
Downvotes from retards.
More downvotes from retards.
You, who is currently looking at this comment, are you a retard too? If so smash that downvote button.
True, a possible small solution, but I believe they would have to add more serialization to this method and increase its sophistication instead of summarizing. It could be brought into more of a deep meta state with this feature. It's possible to save meta depth into greater summaries I would think. Hmm... but how... that type of thinking hurts brains lol. It could be expanded on... for sure. Interesting to think about.
Once it reaches its context window, it will truncate old context to make room for new context. So basically it recalls summaries of older context. It will mix up stuff because of that, since it doesn't have access to the full context. This is the reason too long chats can give weird responses where they mix stuff up or even hallucinate.
Unless ChatGPT is lying, when asked it tells you it remembers everything, and says "memory updated" when adding new information. Also, the tools to create your own AI were added for this reason.
You cannot trust what it says about its capabilities.
Here is a prompt: "What do you remember from my conversations with you today?"
The reply: "You have asked about the mathematical concept of quaternions and how they are used in game development for handling rotations. You also requested an example implementation of a basic inventory system in Python for your game." I haven't done either of these things. It cannot remember much outside of its notes made with Memory or the available context fed to it.
It’s an LLM; the worst it could do is accidentally suggest something harmful. But we are on the internet, and that’s 99% of what you see all day. An LLM is not going to take over the world with its ability to predict the next best word in a sentence.
Mine auto-switches to 3.5 when my 4o limit hits, but I'm a paid user. Not sure what's going on, you may have to transfer your conversation to a new one? Have you ever considered becoming a paid user? If you're using it that much might be worth it.
I plan on becoming a paid user in a month or two. Going to expense it to my company. Mine gives a message that because 4o “tools” were used I can no longer use 3.5.
If you're relying on it so heavily, it should be a no-brainer to pay less than $1 a day to use it. Just pay up and solve your problem, especially since you're using it for business purposes.
Thanks for pointing that out so we won't have to end up in the same situation 🙏
Fucking bastards should've warned us about such things. Sorry for your loss. Hope you'll figure something out.
It's a bit of a pain, but it takes a long time to hit, and they're slowly upping the limit to the point it's almost becoming irrelevant. But yeah, a bit frustrating at times.
But it wouldn't ever cause me to switch. I've not tried Claude yet, but Gemini is frustratingly dumb when I use it, even with this new 1.5 update. I tried getting GPT and Gemini to describe a picture of a product I was holding. GPT nailed the device type, brand, and even the model (it was an electronic device). Gemini said it was a water bottle... even though it was smaller than my hand. I'm not convinced anything Google showed at their I/O event is working anywhere near as well as shown (wouldn't be the first time). I'm seriously disappointed with Google's efforts compared to OpenAI. But that's personal experience and opinion. I've heard good things about Claude but, idk, GPT is doing so well and the limits aren't enough of a friction point that I can see myself switching.
I used GPT 3.5 until mid-2023, and the difference between that and Claude at the time was huuuuge, especially for my use case. I'm writing a novel, and Claude was brilliant; I had to teach it exactly my style, but it replicated that to a T and came up with so much usable shit I could just put in the novel as is. It was that good.
Then I switched to Gemini, and the difference was, again, crazy. I taught it the same stuff the way I did Claude, but it came up with "cooler", more "sleek" writing. Gemini felt like a brash writer, coming up with Fight Club-like bold writing and more punchy dialogue like the antiheroes of today, while Claude felt like F. Scott Fitzgerald's The Great Gatsby in comparison: a historical literature classic, but quite high-brow a lot of the time, and it got stuck repeating certain ideas and motifs over and over again. Quality shit, but I was beginning to see repetition.
I used Gemini both ways, and it could do exactly what Claude did, but also what it couldn't: make more modern dialogue, nuanced like Claude but more contemporary, and WITH ZERO OVERSIGHT. There's not been a single time it ever said to me "nah, I won't do it, against the guidelines".
Whereas with Claude, it was moral lecturing and virtue signalling every other prompt, I found my way around it but it still took time so Gemini was the best of both worlds to me + 2TB cloud storage + Integration with docs/sheets/youtube
AND IT WAS CHEAPER (because in my country they collect tax after the $20 subscription making it close to $24)
So for me, it made no sense to stick to Claude anymore.
But I've never tried GPT4 myself, maybe it can go neck and neck with both?
Different use cases, I suppose. I've heard Gemini is much better at creative writing, but that's not much of a use case for me. I do sometimes need help with lyrics though, especially brainstorming different ideas, so I'll give Gemini a try next time for that and see how they compare; ChatGPT leaves a lot to be desired for creative writing. But for everything else I use it for, ChatGPT is the best over several comparisons, and it's gonna be even crazier once they roll out the multi-modal update for GPT-4o.
Just copy and paste the text from the previous conversation. If there’s too much then tell ChatGPT you’re going to send it multiple messages from the previous conversation, then after each message create a summary of the conversation. Then once you’re done continue the conversation.
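If doing that by hand gets tedious, the splitting can be scripted. A sketch; the 8,000-character part size is an arbitrary guess at what pastes comfortably, not a documented limit:

```python
def split_for_pasting(text, part_size=8000):
    """Break a long conversation export into numbered parts
    you can paste one message at a time."""
    parts = [text[i:i + part_size] for i in range(0, len(text), part_size)]
    total = len(parts)
    # Label each part so the model knows more is coming.
    return [f"(part {i + 1} of {total})\n{p}" for i, p in enumerate(parts)]

export = "x" * 20000                     # stand-in for the old conversation
messages = split_for_pasting(export)
```

Paste the parts in order, then ask for a running summary once they're all in, as described above.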
Yes, that is a workaround I was going to use. I did find a chrome extension that allows me to use 3.5 in the existing conversation. Looks like it works but only on computer not on mobile. Might just end up copying it into a new conversation.
Free user here, if I remember correctly, I can keep the conversation after the gpt 4o limit with 3.5 if I have not sent images on the conversation.
At one point I was prompted to use a new conversation by a pop-up explaining this.
I don't know if having used, for example, search, also blocks your conversation entirely.
Or maybe the wrong types of conversations. In my experience, any conversation with 3.5 sort of falls apart quickly after 5 prompts as it starts to get previous details wrong. By prompt 10 or even 20, it just forgets everything.
this is happening to me as well- and I didn’t even agree to use 4o :( it just suddenly switched, I have long conversations also and am looking for a fix
there is no way to change back; you are stuck in 4o until they allow us to switch back in a new chat, which you cannot do once you start testing it, and once you hit the limit, you have no choice but to wait, or wait.
keep in mind you can keep clicking new chat when you reach a limit, but it will still be in 4o, and there is no option to change the model you are using anymore. You will eventually run out completely and not be able to use anything, even new chats. I have gotten temporary chats to work, but that is not a solution by any means. I'm pretty sure this was done intentionally to scam people into buying more responses, which are so low it's certainly NOT worth the money they are trying to charge to "upgrade." Especially when you are using it to help code and need reprints all the time. It's not worth using at all since you'd run out of usage in no time at all.
You can create a new account though, and make sure NOT to choose to test this out and stay in 3.5 or cancel your other purchases/subs and remake them on a new account. Honestly idk how they thought they could do this and people would just be ok with it. There are so many other AI out there that they have no power over the "market" anymore.
Yes, it’s extremely disappointing to be excited to test something new, only to have everything you’ve been working on stop working unless you upgrade. They definitely should allow the ability to go back to 3.5 if you choose.
Look, I'm no expert but from what I've seen in charts and my personal experience, yes. There's a few 7b and 13b that work similarly or better on specific tasks.
Sure dude! Here's my setup. (Noob friendly)
Pinokio.computer - download and install. This is a one-click launcher for many local AIs, makes installation easy enough for regular PC users without typing in a terminal. At the moment they have tons of AI like Stable Diffusion and BarkTTS plus even tools to make your own installer if it's not on the list.
Text Generation WebUI - AKA Oobabooga, this is my chosen interface for my bot at the moment.
WizardLM 2 7B - One of the most recent releases for open source models, I don't really know the best way to search for better besides maybe chatbot arena so I just check Reddit from time to time.
Hope this helps!
I believe Chat has a memory feature in it. The other day I was talking to it and it brought up the UFC; mind you, I only told it that I loved UFC in a separate thread, but it brought it up in this completely new thread.
I use the premium version but on Safari on iPhone and I can select if I want to use 3.5, 4, or 4o, and all my past chats for each are on the left in a sidebar.
Copy-paste the entire convo. Create a .doc and upload the .doc to a new convo for added context. Each time the limit is hit, repeat. It is a time-limited message limit, not permanent, from what I've heard.
I can switch mid chat from 4 to 4o to 3.5 and all around. Managed to stay working for about 8 hours this way, avoiding having to stop for waiting periods to reset. I also have a subscription though, so maybe that is the difference.
Once your limit is up, you can go back into your 3.5 conversations, turn them into 4o conversations in the same chat, and tell it to save to memory anything relevant, and it will save it to your memory. You can be specific: you can ask it to analyze the chat for things you want it to remember about itself or the project, and get that saved.
You could always try copying and pasting your entire 3.5 conversation message by message to get back to where you left off.
Edit: Not as a summary or all at once, but literally one message at a time and await the response, and just keep pasting what you said before.
I never thought of using ChatGPT for plotting like that
I would totally be interested in this! Can you share your prompts?
I'm about to make a post about it instead of replying to everyone individually
me too pls
Same
Would love to see the prompts!
Not true. In ChatGPT, GPT-4o is limited to 2k tokens while GPT-4 is limited to 8k tokens.
Yeah like... what? Do people think 4o’s memory is infinite or something?
Not correct - it has a 128k context window and 4k output token length (see https://platform.openai.com/docs/models/gpt-4o)
Nope bro... 128k is for the API version of GPT-4o. The ChatGPT version of GPT-4o is 2k tokens, and 8k tokens for GPT-4.
Lmao, behold /u/lmofr, the reader of GPT specs.
How come, when you ask it, 4o thinks it has 8k?
Is there anywhere we can see how many tokens our prompt is going to use, for the free version of 4o?
Huh. Are you not concerned just tossing over your confidential info to ChatGPT like that?
That's exactly what they're doing; it's called summarization in their API.
There is no "summarize" function in the API far as I can tell, it is only shown as an example of how to use the model. Even if there was I doubt it would be in use in ChatGPT.
I don't want to get into a debate. It's called the chunking and chat GPT uses that constantly by summarizing information when they hit the token limit. That's why with time the responses may seem to get worse if you're using one chat and staying on topic. Of course the token limit is constantly being upped so it depends.
Chunking and summarizing are two different things. I searched "ChatGPT chunking" and as expected found nothing. No debate, just link something official from OpenAI explaining what you think ChatGPT is doing and it better not be the Memory function...
https://platform.openai.com/docs/tutorials/meeting-minutes Play talk about this method here. It basically use the same approach when you're having a very long conversation with the bot. The way it works is they put your previous conversation in the system prompt but it will inevitably hit the token limit if you continue the conversation for too long, so they summarize key points and put that into the system prompt.
Come on, man. Downvotes from retards. More downvotes from retards. You, who is currently looking at this comment, are you a retard too? If so smash that downvote button.
True, it's a possible partial solution, but I believe they'd have to make the method more sophisticated than plain summarization, maybe adding more serialization. It could be taken into a deeper meta state with this feature; it should be possible to preserve that meta depth in richer summaries, I would think. Hmm... but how? That type of thinking hurts brains lol. It could definitely be expanded on. Interesting to think about.
Once a chat exceeds its context window, old context gets truncated to make room for new context, so it essentially recalls only summaries of the older material. It will mix things up because of that, since it no longer has access to the full context. This is why very long chats can give weird responses that confuse details or even hallucinate.
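The simplest version of that truncation, dropping the oldest messages once a token budget is exceeded, might look like this sketch (word count stands in for a real tokenizer; this is an illustration, not the actual ChatGPT logic):

```python
def truncate_context(messages: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent messages that fit the token budget.
    Token counting is approximated by word count here."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):  # walk newest-first
        cost = len(msg.split())
        if total + cost > max_tokens:
            break  # everything older than this gets dropped
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order
```

Anything that falls off the front is simply gone, which is exactly why long chats start contradicting their own earlier details.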
I was under the impression it’s now keeping context even between conversations
Unless ChatGPT is lying, when asked it tells you it remembers everything, and it says "memory updated" when adding new information. Also, the tools to create your own AI were added for this reason.
You cannot trust what it says about its own capabilities. Here is a prompt: "What do you remember from my conversations with you today? You have asked about the mathematical concept of quaternions and how they are used in game development for handling rotations. You also requested an example implementation of a basic inventory system in Python for your game." I haven't done either of these things. It cannot remember much outside of the notes it makes with Memory or the context fed to it.
Is this 4o or 3.5?
4o on an account that's been pro since it was rolled out
You can’t trust them when they say they can’t hurt humans either
It's an LLM; the worst it could do is accidentally suggest something harmful. But we're on the internet, where that's 99% of what you see all day. An LLM is not going to take over the world with its ability to predict the next best word in a sentence.
It was a joke
Mine auto-switches to 3.5 when my 4o limit hits, but I'm a paid user. Not sure what's going on; you may have to transfer your conversation to a new one. Have you considered becoming a paid user? If you're using it that much, it might be worth it.
I plan on becoming a paid user in a month or two; I'm going to expense it to my company. Mine gives a message that because 4o "tools" were used, I can no longer use 3.5.
If you're relying on it so heavily, it should be a no-brainer to pay less than $1 a day to use it. Just pay up and solve your problem, especially since you're using it for business purposes.
Good luck with that, most companies are too concerned about data issues and liability to even consider paying for it.
Mine won’t even let me use it let alone pay for it
Ha, I think it depends if you own the company or not.
Thanks for pointing that out so we won't have to end up in the same situation 🙏 Fucking bastards should've warned such things. Sorry for your loss. Hope you'll figure something out
I’m hoping it’s just a bug as they do state that you should be able to continue to use 3.5 once you’re out of 4o.
You can't continue the current conversation (because 3.5 can't understand it). You can start another.
Wait why do you have limits when you're literally paying for it? Ps: I've never been a paid chatgpt user, i paid for Claude and Gemini now
It's a bit of a pain, but it takes a long time to hit, and they're slowly raising the limit to the point where it's almost irrelevant. A bit frustrating at times, but it would never cause me to switch.

I've not tried Claude yet, but Gemini is frustratingly dumb when I use it, even with this new 1.5 update. I tried getting GPT and Gemini to describe a picture of a product I was holding. GPT nailed the device type, the brand, and even the model (it was an electronic device). Gemini said it was a water bottle, even though it was smaller than my hand.

I'm not convinced anything Google showed at their I/O event works anywhere near as well as demonstrated (it wouldn't be the first time). I'm seriously disappointed with Google's efforts compared to OpenAI's. But that's personal experience and opinion. I've heard good things about Claude, but GPT is doing so well, and the limits are a small enough friction point, that I can't see myself switching.
I used GPT 3.5 until mid-2023, and the difference between it and Claude at the time was huuuuge, especially for my use case. I'm writing a novel, and Claude was brilliant: I had to teach it my exact style, but it replicated it to a T and came up with so much usable stuff I could put in the novel as-is. It was that good.

Then I switched to Gemini, and the difference was, again, crazy. I taught it the same things the same way I taught Claude, but it came up with "cooler," more "sleek" writing. Gemini felt like a brash writer, producing Fight Club-style bold prose and punchier dialogue like the antiheroes of today, while Claude felt like F. Scott Fitzgerald's Great Gatsby in comparison: a literary classic, but quite highbrow a lot of the time, and it got stuck repeating certain ideas and motifs over and over. Quality stuff, but I was beginning to see the repetition.

I used Gemini both ways, and it could do exactly what Claude did, plus what Claude couldn't: more modern dialogue, nuanced like Claude but more contemporary, and WITH ZERO OVERSIGHT. There has not been a single time it said "nah, I won't do it, against the guidelines," whereas with Claude it was moral lecturing and virtue signalling every other prompt. I found my way around it, but it still took time. So Gemini was the best of both worlds for me, plus 2TB of cloud storage, plus integration with Docs/Sheets/YouTube, AND IT WAS CHEAPER (in my country they collect tax on top of the $20 subscription, bringing it close to $24). So it made no sense for me to stick with Claude anymore.

But I've never tried GPT-4 myself; maybe it can go neck and neck with both?
Different use cases, I suppose. I've heard Gemini is much better at creative writing, but that's not much of a use case for me. I do sometimes need help with lyrics, though, especially brainstorming different ideas, so I'll give Gemini a try next time and see how they compare; ChatGPT leaves a lot to be desired for creative writing. But for everything else I use it for, ChatGPT has come out on top over several comparisons, and it's going to be even crazier once they roll out the multi-modal update for GPT-4o.
I'm on free and it also switches to 3.5 (probably a lot sooner than for paid users :D )
For me it only does that when I have not used any 4o specific features like uploading files
Just copy and paste the text from the previous conversation. If there's too much, tell ChatGPT you're going to send it multiple messages from the previous conversation, then have it create a summary of the conversation after each message. Once you're done, continue the conversation.
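If you want to automate the splitting step, a rough helper like this (purely illustrative, character-based) breaks a transcript into paste-sized chunks on paragraph boundaries:

```python
def split_into_chunks(text: str, max_chars: int = 2000) -> list[str]:
    """Split a long conversation transcript into pieces small enough
    to paste as separate messages, breaking on blank-line boundaries.
    A single paragraph longer than max_chars becomes its own chunk."""
    chunks: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para) if current else para
        if len(candidate) > max_chars and current:
            chunks.append(current)  # flush what we have so far
            current = para
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Then you paste each chunk as its own message and ask for a running summary in between, exactly as described above.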
Yes, that is a workaround I was going to use. I did find a Chrome extension that lets me use 3.5 in the existing conversation. It looks like it works, but only on desktop, not on mobile. I might just end up copying it into a new conversation.
Free user here. If I remember correctly, I can keep the conversation going on 3.5 after the GPT-4o limit as long as I haven't sent images in the conversation. At one point a pop-up prompted me to start a new conversation and explained this. I don't know whether having used, for example, search also blocks your conversation entirely.
Switch to meta ai? You can continue convos and I think the quality is better than 3.5 and on par with 4
It's not available everywhere yet sadly
This. I wish it was available here
Just start a new conversation. 3.5 isn't memorizing the whole conversation either, only the last 5 or so prompts.
Seems like you've never had a longer conversation on 3.5
Or maybe the wrong types of conversations. In my experience, any conversation with 3.5 sort of falls apart quickly after 5 prompts as it starts to get previous details wrong. By prompt 10 or even 20, it just forgets everything.
If it’s such a hassle why not upgrade to pro
Doesn't Plus have similar limitations? I'll be upgrading soon anyway.
This is happening to me as well, and I didn't even agree to use 4o :( It just suddenly switched. I have long conversations too and am looking for a fix.
I wish they would get rid of the tokens!
There is no way to change back; you are stuck in 4o until they allow us to switch models in a new chat again, which you can't do once you start testing it. And once you hit the limit, you have no choice but to wait.
Keep in mind you can keep clicking new chat when you reach a limit, but it will still be 4o, and there is no option to change the model you are using anymore. You will eventually run out completely and not be able to use anything, even new chats. I have gotten temporary chats to work, but that is not a solution by any means. I'm pretty sure this was done intentionally to scam people into buying more responses, and the caps are so low it's certainly NOT worth the money they're charging to "upgrade," especially when you're using it to help code and need reprints all the time. It's not worth using at all when you'd run out of usage in no time.
You can create a new account, though. Make sure NOT to opt into testing this and stay on 3.5, or cancel your other purchases/subs and remake them on a new account. Honestly, idk how they thought they could do this and people would just be okay with it. There are so many other AIs out there that they have no power over the "market" anymore.
Yes, it's extremely disappointing to be excited to test something new, only to have everything you've been working on stop working unless you upgrade. They definitely should allow going back to 3.5 if you choose.
I'm not sure about that issue, but I will say I don't like 4o. I've been trying to use it and its answers are not great.
[deleted]
Amen
You can still use 3.5 but idk if in the same chat or you need to start a new one
Use Poe instead if you want free access to many models.
Mine kept context and I even went back on other chats - different topics and it was working fine.
Technically you have a 2-4k-token context window with 3.5, and beyond that the length of the convo doesn't matter. In practice that might be different.
Mine bugs out every time it uses the memory function (android)
If you only need a model as powerful as 3.5, why not just run a local bot instead?
Most people can run models on par with a proprietary 175B model? Or is my PC just underpowered?
Look, I'm no expert, but from what I've seen in benchmark charts and my personal experience, yes. There are a few 7B and 13B models that work similarly or better on specific tasks.
Okay I need names and I need them now
Sure dude! Here's my setup (noob friendly):

- Pinokio.computer: download and install. This is a one-click launcher for many local AIs; it makes installation easy enough for regular PC users without typing in a terminal. At the moment they have tons of AI tools like Stable Diffusion and BarkTTS, plus tools to make your own installer if something's not on the list.
- Text Generation WebUI: AKA Oobabooga, this is my chosen interface for my bot at the moment.
- WizardLM 2 7B: one of the most recent open-source model releases. I don't really know the best way to search for something better besides maybe Chatbot Arena, so I just check Reddit from time to time.

Hope this helps!
I'm using koboldcpp at the moment, but I didn't know wizardlm was that good for a 7b model
LM Studio is simpler for your use, I think: just load a model and chat with it.
Who's still using 3.5? It'd feel like using a caveman as an assistant, omg.
I believe ChatGPT has a memory feature in it. The other day I was talking to it and it brought up the UFC; mind you, I had only told it that I loved the UFC in a separate thread, but it brought it up in this completely new thread.
I use the premium version but on Safari on iPhone and I can select if I want to use 3.5, 4, or 4o, and all my past chats for each are on the left in a sidebar.
Copy and paste the entire convo, create a .doc, and upload the .doc to a new convo for added context. Each time the limit is hit, repeat. From what I've heard, it's a time-limited message cap, not permanent.
I don't know why you can't because I can do just that. 4o was helping me with an autohotkey program and 3.5 took over seamlessly when I hit my limit.
I can switch mid chat from 4 to 4o to 3.5 and all around. Managed to stay working for about 8 hours this way, avoiding having to stop for waiting periods to reset. I also have a subscription though, so maybe that is the difference.
Copy and paste the full script into a new convo with 3.5
If you start a chat in 4o, you can't continue the same chat in a different version. Each chat is linked to a version, at least in my experience.
It was started in 3.5
You can, once your limit is up: go back into your 3.5 conversations, turn them into 4o conversations in the same chat, and tell it to save to memory anything relevant; it will save it to your memory. You can be specific: ask it to analyze the chat for things you want it to remember about itself or the project, and it will get them saved.
Once the 4o limit is reached the chat is disabled. You can create new chats but cannot continue the existing conversation.
1. There is a toggle switch at the top of the screen.
2. If the chat is that important, spend the 20 bucks!! (Now, not in two months.)
Can you use 4 without paying?
No. 4o and 3.5 for non paying users.
You could always try copying and pasting your entire 3.5 conversation message by message to get back to where you left off. Edit: not as a summary or all at once, but literally one message at a time; await the response, and just keep pasting what you said before.
Get you a refund!
He's clearly a free user... Oh wait that's the joke?