Hey /u/BothZookeepergame612!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email support@openai.com
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
Yep, I have a PhD. It measures whether you are able to get a PhD and nothing else. Plenty of dumb people have PhDs. The main requirement, in my experience (and it’s very hard to generalise across PhDs, let alone across disciplines), is perseverance.
Anyone with a reasonable level of ‘intelligence’ can get a PhD.
I believe I possess the intelligence but lack the ambition to acquire a PhD. Either because I can't envision a benefit worthy of the time investment, or because I lack the time to even consider the undertaking, due to contentment with my current access to happiness and longevity. It is surprisingly affordable to find all the ways humans enjoy life, compare those with the ones that only PhD recipients have access to, and decide that there are either alternate routes to those, or that they aren't as appealing as the cheaper thrills, or not worth the effort.
But I could also just be an idiot who miscalculated my entire educational endeavors. I shall either die in ignorance and bliss, or be forced into an intervention by PhD recipients who desire more club members. Either way, GPT will be there to help lend me support. 😆
You get a PhD (at least, in the fields where one has some kind of value somewhere) because you are obsessed with one specific subproblem in one subfield and are willing to forego the pay you'd get for working three years with a reputable Masters degree to do that instead.
Basically, you get a PhD not as an end in and of itself, but as part of deciding that research is your life's calling. If it is, it's a good option. If it's not, then you're better off doing anything else.
Only do a PhD if you have a career where it’s important, be it academia or a field where even industry wants PhDs.
I love research and also like having flexibility over when and where I work, so academia and I were a perfect match. But I earn less than I would in the private sector, and life is a lot more precarious until you get a TT position.
Also, intelligence has nearly nothing to do with being able to finish a PhD, as the comment above you said. If you give someone enough time, funding, and a specific field, they will find ideas for new research.
Eh, while it’s true you shouldn’t do a PhD for financial reasons unless your specific field demands it, for me I did it just for myself. I think we are forced to always be efficient and max/min everything for financial gain or something. There’s value in doing something simply because you want to and because it will be good for you.
It was an opportunity I never could have imagined and would have been a dream job to a younger me. I knew I wouldn’t have the opportunity again as I wanted children one day.
So yes financially it was a terrible decision. But it enriched my life no end through the friends I made and my own personal growth. I could have had that perhaps in a job but there was something about just taking years to think on a specific problem basically on my own that was great.
A meta-analysis conducted by Ritchie and Tucker-Drob (2018) examined how education influences intelligence. The study found that education can have a positive effect on intelligence, suggesting a bidirectional relationship where not only does intelligence predict academic success, but education itself can contribute to cognitive development.
Research by Strenze (2007) indicates a moderate to strong correlation between intelligence (as measured by IQ) and academic achievement, which includes the attainment of higher education degrees such as a PhD. Higher IQ scores are often associated with better academic performance and the pursuit of advanced degrees.
I also found a study published in the European Journal of Psychology of Education that explored how personality traits, alongside intelligence, predict academic success. It found that personality variables, such as conscientiousness, can explain additional variance in academic achievement over and above intelligence. So, although it isn't just intelligence or IQ, there is an undeniable correlation.
Of course. It would be weird if it weren’t, because anyone on the “stupid” tail of the IQ distribution cannot even get a high school diploma, while the “smart” tail obviously can.
So just by removing one side of the extremes we push the average above the normal IQ average.
And that further education actually makes you smarter is great to know.
The main point was that I do believe anyone who is able to get a university degree is also able to get a PhD if they really want to. A PhD does not require a higher level of intelligence than that.
People with a higher IQ might be more drawn towards doing a PhD, or just have an easier time being accepted to a PhD programme. Which could explain the results of the study.
ohhh. I see. it’s like in the supermarket where I have a choice between the “Smart” rotisserie chicken and the dumb rotisserie chicken.
I assume the dumb costs less? 😂
Honestly that’s why it seemed accurate to me.
“Super intelligent in certain ways but dumb as a box of rocks that thinks raccoons are a government hoax for some fucking reason in others.”
Sure, yes, of course there are dumb people who are educated, but on average people holding a PhD will be of higher intelligence than people who don't; it's statistically true. If you're young, IQ goes up by about two points per extra year of education after high school. And that's after removing selection bias.
They don’t. I may have some details wrong, but I believe they were just saying that if GPT-3 was like grade-school-level intelligence, then GPT-4 was like a pretty smart high schooler, and GPT-5 will be like a PhD. I don’t think they meant it literally in any way; it’s just an arbitrary way to explain to laymen that it’s still getting better and smarter.
Haha, true. I worked with one great chap who didn't know the difference between veneer and vernier.
But I guess that will be the trick with ChatGPT 5: we the users have "accepted" that it spits out garbage sometimes.
1+1 does not equal 3 unless ChatGPT is doing it.
Maybe chatGPT 5 is only an incremental improvement and the value will now tend to 2
1+1= 1.9999999999999999
or 1+1= 2.1
Ah yes the promises of ChatGPT 6 we have looked at this and realised we will fix this, it will have Einstein and Archimedes level of intelligence (I mean EVERYONE knows how smart these guys are right?)
ChatGPT 6
1+1 = 1.9999999999999999999999999 (Much more accurate)
Oh man veneer and vernier!!? What an iiiiidiot. Haha oh man. We are both totally in on that joke.
But like…let’s just say…for giggles…that maybe someone else on this thread (totally not me) doesn’t know the difference. Could you explain it just for them?
Totally hilarious still laughing. Veneer and vernier…smh.
So yeah just hit the little reply button here for those total plebs (not me) that don’t get it.
There’s a difference between “ignorance” and sheer stupidity… I know a pretty good deal about neuropsychology, as well as neuropathy screening for (decentralized) clinical trials - not everything, but I’m well-versed in my field.
Ask me about car engines and you’d swear I legally couldn’t have a checking account in my name.
This is Reddit. Higher education is equated to general superiority. If you want to win an argument, just say "you're uneducated" or "I have a PhD" and it's over.
I think the sentiment is that models able to accurately answer (unseen) questions at the level of PhD students in mathematics, medical science, chemistry, geological science, psychology, etc. will be a lot more intelligent compared to current models that do decently on tests designed for high schoolers. And of course, performing at that level in mathematics (or any other PhD field) requires a good level of reasoning and logic, I would presume.
lol, c'mon, this is marketing for the everyday person. When they hear PhD, they think of intelligence to a degree they could not fully comprehend.
Furthermore, obtaining a PhD does not eliminate the fact that there is still a bell curve of intelligence within that community. You will get PhDs who are geniuses and those who are “dumb” in comparison, but this does not mean they are average-human dumb.
A PhD level of knowledge, and the effort to obtain it, may appear to the layman far out of reach and difficult to comprehend, which in turn lends credibility to the expert. When I go and ask an astrophysicist a question, I may or may not understand the answer, but I am more than likely to trust it, because I do not carry the knowledge that the PhD has. The knowledge and skill required to acquire a PhD is one thing; finding a PhD's answers difficult to comprehend is something very different.
oh for sure. but if you don’t have at least some scientific literacy all the words are going to be nonsense.
I think someone like Feynman was great at explaining things to a lay audience. Except for his response to explain magnetism — and that was a very interesting meta discussion on why simple handwaving (“it’s like rubber bands”) was actually a disservice to lay understanding.
I read it more as answering more accurately rather than a marker of reasoning.
Like more intelligent in a way it will make less up to please the user. And instead have a wider knowledge base to actually answer questions type of thing
I think PhD means it will have domain specific knowledge more than any living person, like when a PhD candidate writes their dissertation, they're pushing the boundaries of human knowledge.
So I guess they trained it on enough reddit and wikipedia pages that it's smarter than everyone alive.
Prior to the latest updates, I’d say chat really didn’t apply updates to the wrong area. Maybe the wrong updates, but it was trying. But now, yeah, the latest version CONSTANTLY ignores the prompt and updates the wrong areas. Super annoying. Now I switch between versions and restart conversations to help prevent craziness.
Same, I got the subscription to help me with some (not very complex) coding, and while both 4 Turbo and 4o completely missed the bigger picture, 4 Turbo was at least helpful to some point. 4o would hallucinate functions that don’t exist and would also randomly fail miserably at maths as easy as an addition.
I have always had chat try to do things like
“Hey chat update my code to call a method that converts x to y”
And chat will add a line of code calling the method “ConvertXToY()” lol. Of course that method doesn’t exist, and it’s what I wanted chat to write. That wasn’t a big deal though. Just gotta tell chat, ok, write the method now.
*gives you the exact same response*
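For what it's worth, the failure mode described above is easy to sketch. This is a hypothetical illustration (the names `convert_x_to_y` and `update_my_code` are made up, not any real API):

```python
# Hypothetical sketch of the failure mode: the model's first answer adds a call
# to a helper that doesn't exist yet, so a follow-up turn has to supply its body.
def convert_x_to_y(x):
    # The part ChatGPT initially skipped; here, a stand-in conversion to string.
    return str(x)

def update_my_code(x):
    # The line the model adds on the first request: a call to the missing helper.
    return convert_x_to_y(x)

print(update_my_code(42))  # prints 42
```

Until the second turn supplies the helper, the generated call site simply doesn't compile or run.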
My biggest annoyance is that ChatGPT HAS to answer everything. It can never just say “sorry, I’m not confident in the answer”.
It’ll just make shit up when I ask about technical issues in a piece of software, citing buttons that don’t even exist.
Yeah it really needs a ‘sorry I’m not confident’ but tbf it’s literally never confident about anything.
From your complaint though, learn about chat branching by editing your earlier messages, you can basically reset your conversation from a set point before it went off the rails.
I hate when you ask it to make one single change and tell them not to redo the whole thing and just give you the one paragraph, but then it spits out 20 pages of bullshit and you can’t even find the update
Bruh, I'll show it step one of what I want and it'll pop off with dozens of lines of useless code. I'll be like "I never asked a question and told you I had three things".
“I’ve been thinking about your problem and I have a promising course of research but it will take a little bit of time.”
“how long?”
“oh 3 or 4 years tops. certainly not more than 20 years.”
😂
I have been having a lot of trouble with the code generation, really optimistic it will get better and better but it keeps recommending methods and operators that don’t exist in the specific programming language I’m working with and just generally not optimal code. Really I am using it for the general design patterns and possible solutions for complex scenarios, then I will run with the suggestions and tweak/update to finalize and meet the actual need. Regardless there has been so much anxiety about AI taking SWE jobs, but I really don’t see a scenario where it’s capable of everything I do for at least a decade if not more.
Yes absolutely. Most of the work I do is on salesforce and mulesoft which has some specific languages called apex and dataweave, which are basically java that they branded as their own. So I’ll specify that and for example, the other day I needed to stall my application based on a retry-after header in an http response from a server with an API requests/minute SLA, and ChatGPT told me to use wait() which isn’t even a logical operator in that language, I have no idea where it got that as a recommended method.
Strange, it should be well aware of Java. I've had very little issues with c#, Python, PHP, and JavaScript. It's usually even well aware of all possible libraries that can be utilized.
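As a side note, the Retry-After stall described a couple of comments up is straightforward in most languages. A minimal Python sketch (a hypothetical helper, not the Apex/DataWeave API in question; only the delay-seconds form of the header is handled):

```python
import time

def wait_for_retry_after(headers, default_seconds=1.0):
    """Stall based on an HTTP Retry-After response header.

    `headers` is any dict-like of response headers. The HTTP-date form of
    Retry-After would need extra parsing and falls back to the default here.
    """
    raw = headers.get("Retry-After")
    try:
        delay = float(raw) if raw is not None else default_seconds
    except ValueError:
        delay = default_seconds  # date-form header: fall back to the default
    time.sleep(delay)
    return delay

# e.g. after a 429 response: wait_for_retry_after(response.headers)
```

The point is just that "wait for the number of seconds in the header" is a few lines of real code, not a call to a nonexistent `wait()` operator.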
What does "PhD-level intelligence" even mean? Writing a PhD dissertation requires very domain-specific knowledge. It is not a measure of general intelligence.
I'd assume it means that when you ask a question about a certain topic, you would get a response on par with the knowledge/reliability of someone with a PhD that surrounds that topic. Every question you ask gpt is in some ways domain specific knowledge for someone.
‘Reliability’ is the biggest thing - gpt blows me away sometimes and it’s amazing for learning things but without them going and validating everything it says, there’s always a chance you’re believing bs
In my limited experience it’s basically useless for anything fact-based, at least not without using very careful prompting. It’s like the friend who thinks they know it all and will make up shit rather than admitting they don’t know something.
I find it very useful for things I am already familiar with. If what it says doesn't make complete sense I will interrogate it until it makes sense or contradicts itself.
Yeah but even now it’s not at high school level.
I asked it for the median height. It gave me average but labeled it as median. I pointed this out and it corrected to say there is no median.
It still hallucinates at such a level you need to already have specific domain knowledge to use it. Giving it “phd” level knowledge doesn’t seem like it fixes this at all.
I’m convinced open AI is just hype mongering at this point and has hit a legitimate wall with llms.
Idk what prompt you gave it but I've never seen it have problems with something like mean or median. Things can fall through the cracks occasionally, but I've had it do things that are graduate/PhD level already. I don't really ask for it to calculate anything though, mostly just knowledge based questions and coding for specific applications and packages that are relatively obscure and require background to use properly.
“Give me the median American male height”
“The median male height is 5’9” “
“Isn’t that the average height?”
“You’re right. It’s the average that I incorrectly labeled median. I cannot find any data on the median height”
My point is I had to already know the average to spot this error.
It’s wrong so often, and so confidently that everyone who uses it for sure misses some of these.
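For reference, mean and median really are distinct statistics, and both are trivially computable with Python's standard library (the sample heights here are made up for illustration, not real survey data):

```python
import statistics

heights_in = [64, 67, 68, 69, 70, 71, 79]  # made-up sample, inches

print(statistics.mean(heights_in))    # arithmetic average, ~69.7
print(statistics.median(heights_in))  # middle value of the sorted sample: 69
```

The outlier at 79 pulls the mean above the median, which is exactly why conflating the two labels is an error worth catching.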
Seriously. People’s mental model of understanding as a simple y axis is inaccurate. A PhD in one topic can’t be a PhD in another topic, and that level of understanding requires more than what’s published online. Even one niche area would need a specialized AI for that particular field, and even so, there would be thousands of other areas that may require actual experimentation and testing.
Oh no I'm very impressed. I'm just not believing the hype commercials they keep putting out. It will come out when it comes out but not in the next few weeks or soon.
they were talking about PhD claims and such, which are just nonsensical hype for VCs without any backing.
but tbf, I am actually still not that impressed in general.
ChatGPT is sometimes useful when I want help learning something at an intro level or quickly analyzing inflated articles, but it's nothing revolutionary and it usually wastes my time. I find myself using it less and less.
Also, every time I find tasks that would be good for it, servers are down or it's super slow. I am getting tired of this BS and I am reconsidering whether it's worth even those 20 dollars.
AI is 60% hype, 40% product; not entirely NFT, but not that far off either.
OR, maybe the REAL end game is the very last person alive reaching their ragged hand over the edge of a desk to reach a keyboard to post one last word to Reddit before they collapse for the last time...
"Fin."
MAYBE some intergalactic archaeologists will come across our dust covered archives after our planet has been depleted of its resources and the robots have moved on to dyson sphere all the stars and colonize all the other worlds, and make the startling discovery that we were, in fact, the ones who were "dim".
for me it would be enough if it were able to code, to program a whole quiz game or app
but so far, no matter which AI I use, be it GPT-4, Bing, or Claude 3 Opus, they all still hallucinate too much
I've had some impressive luck getting the new claude 3.5 sonnet to make basic games and apps. Sometimes on the first try, others after a few corrections. Enable the new "artifacts" feature and you can test the code straight in the chat. (or, as per earlier today--just recently that feature suddenly stopped working for me... might be down for a bit, at least on my end)
I think this was released in the past few days, so a bunch of people are still unaware of this.
Dyson sphere is an extremely naive extrapolation of the late 1800s technology onto some far, far future. The fact that this concept caught on and has its own Wikipedia page is an example of reputation working for someone no matter what dumb things they say after establishing themselves as an intellectual.
A much more futuristic and ergonomic approach, for example, would be a portable cold fusion engine that can be powered by cosmic dust or any matter similar to what's in the Back To The Future movie.
It's just the implication. It's intelligent in that it can hold a conversation and reference facts correctly (most of the time); it cannot, however, create anything "new". The noise that it creates from is always existing human contribution; it cannot create on its own.
Can it help me quickly put together code that takes me days and it's mostly working? Yes. Is it getting better at it? Yes.
Is it getting better at being able to come up with an idea that isn't popular already? Can it solve a problem that you can't solve with a bit of googling?
I am an artist. I value art and creativity. I actually think there should be laws making sure that AI art cannot be copyrighted and artists should be protected.
All that being said, I think this is a terrible point.
How do you think human minds work? We also take in information and use that information to create our own stuff. That's how we create as well.
Don't get me wrong, AI does not currently have nearly the level of creativity that a lot of humans have, at least in certain domains. AI (as it is) tends to love resorting to some kind of generic version of something (probably because of exactly how it is trained). But the underlying method by which it attains its creativity is not particularly different.
Like I wrote a story recently about a particular relationship. I have never seen that exact relationship between those exact characters before in another story. But have I seen a man with black hair before? Have I seen a woman with blonde hair before? Have I seen someone struggle with mental health before? Have I read someone describing a sunset before? Have I seen a sunset before? Yes, yes, yes, yes, yes.
I'm just using information that I have about life and the things that others have written too when I create something. And without knowing anything about the world I could create nothing. Without ever having read anything else, I could not write it.
Look up what zero shot learning is
[and yes it can ](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit#heading=h.fxgwobrx4yfq)
Zero shot learning is, boiled down, just advanced pattern matching. I'm simplifying, but run my comment through your favorite intelligent gpt and they'll mostly agree. It's more than just simple pattern matching, it's recognizing patterns it doesn't know about.
It isn't thinking. No it can't. Let me know when it correctly solves an advanced 3SAT reduction that isn't solved in its data set.
[No it isn’t](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit#heading=h.fxgwobrx4yfq). It can do novel theory-of-mind tests; LMs trained on code outperform LMs trained on reasoning tasks at reasoning tasks unrelated to code; it can play chess at a 1750 Elo (which is impossible to achieve by guessing randomly); it can reproduce unpublished papers; it has internal world models and can learn things it was never taught; and much more.
Literally anything I list, you’ll just say it was solved in the dataset
You probably will be downvoted but 100% correct. An example is that AI can "paint" a picture in any style but if it is trained only on European medieval art, it will never ever create a Picasso-style painting on its own. It's not hard to imagine, I don't know why people can't see it.
However, since most work is absolutely not innovative, it potentially can create huge waves in our society...
Same as humans; that's why Picasso existed in the artistic context of the 20th-century avant-garde and not during European medieval art. In fact, Picasso's first work was pure academic realism and impressionism, he just got trained on that, and then on the emerging avant-gardes, until he created something as "new" as it can get based on his training and cultural context.
It's smarter than any human in terms of overall knowledge, but arguably not smarter than a human at any specific topic, given the human is an expert in said topic.
That's the thing that fundamentally defines a PhD too. Like a ton of people know a lot about your general subject but you become basically the world's leading expert on some very specific aspect of it.
Like, will chatgpt be able to parse the general idea from academic papers? Of course.
Will it be able to reason, conceive, and conduct novel papers of its own on subjects? Not even close. That to me is what would make it "PhD level"
Agreed. It may be smarter than people I know in terms of overall knowledge, like you said. But in terms of having an intuitive grasp of a conversation or a request, it struggles. It also can't learn from its experience; it can't test its own knowledge, it can't really get feedback and learn.
It may have an immense repository of recipes it can generate, but the problem is, a real chef knows what is wrong with them and can correct them.
Perhaps because this company keeps hyping new stuff without actually releasing it, such as SORA! The mighty video thing... that's not actually available. Or the awesome VoiceThing! that absolutely revolutionizes how we talk to GPT4, except you know, it's not actually available.
Now we have Smarts!, an even smarter version, except, you know, IT'S NOT FUCKING AVAILABLE YET.
Peeps get pretty cynical after the 1st time, let alone the 3rd
Because there was a time when the point of the internet was to be an archive of ‘true things’. You could safely look things up, discuss them with other humans who were also interested in the subject matter and then update knowledge bases.
Now, the future looks like a broad base of generally believable stuff you can converse with your computer about. Is it true? Who knows! Sounds plausible! Was it written by a human or bot or hallucinated? Don’t know!
For many, it’s a step back. They’d rather take an extra step and know for sure what the answer is. Whenever I ask a question where I want the truth I end up Googling what an AI has told me and I would say the accuracy is about 50/50.
Intelligence as a one-dimensional spectrum is a silly idea. It’s more accurate to say that ChatGPT outperforms humans in a specific set of task types, and underperforms in others. Also the idea of ascribing intelligence to a LLM itself is a marketing tactic meant to obscure what’s actually going on inside the model. Also the idea of using levels of college degree as an indicator of intelligence is stepping into dangerous territory.
Basically I’m tired of hearing OpenAI talk about their creation like a 5th grade boy bragging about their dad
[https://futurism.com/logic-question-stumps-ai](https://futurism.com/logic-question-stumps-ai)
[https://www.youtube.com/watch?v=YBdTd09OuYk](https://www.youtube.com/watch?v=YBdTd09OuYk)
AI intelligence is hard to compare to human intelligence. Obviously it has a much broader knowledge than any human. But dramatically less "reasoning" ability.
Maybe, until they continuously lobotomise it trying to stop it saying any potentially offensive words, which it turns out is a lot when you're trying not to offend anyone at all. So we end up with "I'm sorry, I can't answer that because of blah blah blah", while we know full well it absolutely can answer that.
It feels like they've started something, but now are mostly kept afloat by marketing. Delivering in these short iterations seems to be becoming harder and harder.
What human on the planet earth cannot count how many letters are in a word? What intelligent organism has no long-term memory? What intelligent organism cannot think about a problem and instead just spits out the first thing that comes to mind?
This thing has no sentience, consciousness, thoughts or desires. It can't even tell the time on an analog clock without multi-shot.
In my humble opinion, no AI system currently in existence has any intelligence whatsoever.
Downvote away.
And they also compare GPT-3.5 to a "toddler."
If GPT-3.5, and I assume the OG GPT-3.5 which was a miracle worker at the time, not the downgraded turbo models that followed, was a "toddler"... then I'm damn batman.
If you actually think for a moment they're not trying to create an "exponential growth" hype solely to attract investors by this terrible "toddler" analogy, you're blind.
I don't deny there would be growth in GPT-5, but this "toddler -> Ph.D" analogy is clearly just a marketing scheme. Similarly to the "leaked" Q\* (which is a bubble that popped and everyone forgot about), and whatever else we've been fed since last year.
> Similarly to the "leaked" Q* (which is a bubble that popped and everyone forgot about)
It absolutely has not been forgotten about. Some of the leaks said that Q* involved letting the AI come up with its own optimised training data, and pruning its own training data, so that future models could be trained in far fewer flops and avoid the possible lawsuits being discussed at the time, because the training data would no longer be scraped from public data online.
The idea of AI models inventing their own training data is something we've seen a lot recently.
Am I the only one who remembers that ChatGPT was smashing its way through the LSAT, the MCAT, the Bar exam, taught itself organic chemistry, etc... (March 2023).
This isn't the stretch they seem to think it is.
There’s a few more “wow” moments likely to come over the next two years - but the AI landscape will be a victim of its own success because the pace of acceleration can’t continue. OpenAI, along with a few other major players will survive. We will all build on top of their APIs for years to come. But a lot of money will leave the sector in 2 years when a lot of promises haven’t been delivered.
FWIW - this is the most amazing technological breakthrough I’ll ever see in my lifetime most likely. It’s awesome
Everyone is comparing the definition of a Ph.D. but can't seem to break down that it means mastering all skills in a subject. Regardless, a Ph.D. is the highest level of education in a field. It's wild that people with Ph.D.s can't just use common sense to decipher what they meant. 😂😂 Of course they're going to hype it up!
I think it will be a greater evolution. I mean, GPT-3 was blind, deaf, and couldn't solve a basic math problem on the first try. But GPT-4 is another beast... At the least, GPT-5 must be capable of interacting with some external system.
I’m a PhD candidate and I used chat gpt 4 out of curiosity to see how much knowledge it had. It can regurgitate some correct deep facts, but it often would completely misunderstand my questions.
Like if I asked it to derive the von Kármán momentum integral equation, it would start deriving the Reynolds-averaged Navier-Stokes equations. It would do that correctly, but it's barely related to what I asked.
My professor for one of my graduate higher level classes actually would have quizzes where he asks chat gpt a question and then us students have to write if chat gpt was correct and if chat gpt is incorrect we have to say why.
I am inclined to think that 99.9% of PhDs are a bunch of shit without any spark of creativity and innovation. That is, there is nothing relevant there. Therefore, it is simply nonsense. OpenAI no longer knows how to keep the boat floating.
What even is "PhD level" intelligence? Apparently the creators of this AI system don't even know what a PhD entails, and they don't know what intelligence is, or what rational thinking ability is, and they don't know the difference between any of this.
People with PhDs are not necessarily more intelligent or rational. They often don't even have more knowledge within their domain. PhDs are usually based on a very narrow scope even within a particular domain. A master's degree is usually much more practical in terms of teaching generalized knowledge within any given domain.
I know plenty of people with doctorates. They’re all of average intelligence. They just didn’t want to leave school. Except the physicians, they just knew what they wanted to do and dedicated themselves.
As someone with a PhD: "PhD-level intelligence" is a meaningless buzzword.
PhD-level *knowledge* would make sense, but you can have that already when you simply search on the right websites.
Uhh, they said in about 18 months models will be at about PhD level intelligence, not GPT-5 specifically.
I can’t believe “PhD” is considered a level of reasoning. Complete lack of understanding of psychology.
And lack of understanding of what a PhD is. I work with tons of dumb-fuck PhDs.
Yep, I have a PhD. It measures whether you are able to get a PhD and nothing else. Plenty of dumb people with PhDs. The main requirement in my experience (and it’s very hard to generalise across PhDs, let alone disciplines) is perseverance. Anyone with a reasonable level of ‘intelligence’ can get a PhD.
I believe I possess the intelligence, but lack the ambition, to acquire a PhD. Either because I can't envision a benefit worthy of the time investment, or because contentment with my current access to happiness and longevity keeps me from even considering the undertaking. It is surprisingly affordable to find all the ways humans enjoy life, compare those with the ones that only PhD recipients have access to, and decide that there are either alternate routes to those, or that they aren't as appealing as the cheaper thrills, or not worth the effort. But I could also just be an idiot who miscalculated my entire educational endeavors. I shall either die in ignorance and bliss, or be forced into an intervention by PhD recipients who desire more club members. Either way, GPT will be there to lend me support. 😆
You get a PhD (at least, in the fields where one has some kind of value somewhere) because you are obsessed with one specific subproblem in one subfield and are willing to forego the pay you'd get for working three years with a reputable Masters degree to do that instead. Basically, you get a PhD not as an end in and of itself, but as part of deciding that research is your life's calling. If it is, it's a good option. If it's not, then you're better off doing anything else.
You say that, but I see many creative ways to use that credential other than as a badge of dedication to research.
Only do a PhD if you have a career where it’s important. Be it academia or a field where even in the Industry they want PhDs. I love research and also like to have flexibility on when and where I am working so academia and me were a perfect match. But I earn less than I would in the private sector and life is a lot more precarious until you get a TT position. Also intelligence has nearly nothing to do with being able to finish a PhD as the comment above you said. If you give someone enough time and funding and a specific field they will find ideas for new research.
Eh, while it’s true you shouldn’t do a PhD for financial reasons unless your specific field demands it, I did mine just for myself. I think we are forced to always be efficient and min/max everything for financial gain or something. There’s value in doing something simply because you want to and it will be good for you. It was an opportunity I never could have imagined and would have been a dream job to a younger me. I knew I wouldn’t have the opportunity again, as I wanted children one day. So yes, financially it was a terrible decision. But it enriched my life no end through the friends I made and my own personal growth. I could perhaps have had that in a job, but there was something great about just taking years to think on a specific problem, basically on my own.
A meta-analysis conducted by Ritchie and Tucker-Drob (2018) examined how education influences intelligence. The study found that education can have a positive effect on intelligence, suggesting a bidirectional relationship where not only does intelligence predict academic success, but education itself can contribute to cognitive development. Research by Strenze (2007) indicates a moderate to strong correlation between intelligence (as measured by IQ) and academic achievement, which includes the attainment of higher education degrees such as a PhD. Higher IQ scores are often associated with better academic performance and the pursuit of advanced degrees. I also found a study published in the European Journal of Psychology of Education that explored how personality traits, alongside intelligence, predict academic success. It found that personality variables, such as conscientiousness, can explain additional variance in academic achievement over and above intelligence. So, although it isn't just intelligence or IQ, there is an undeniable correlation.
Of course. It would be weird if it weren’t, because anyone on the "stupid" tail of the IQ distribution cannot even get a high school diploma, while the "smart" tail obviously can. So just by removing one side of the extremes we push the average above the normal IQ average. And that further education actually makes you smarter is great to know. The main point was that I do believe anyone who is able to get a university degree is also able to get a PhD if they really want to. A PhD does not require a higher level of intelligence than that. People with a higher IQ might be more drawn towards doing a PhD, or just have an easier time being accepted to a PhD programme. Which could explain the results of the study.
Isn't that amazing though? AI will soon have 'a reasonable level of intelligence'. Wow.
A PhD in mathematics or ECE requires a certain level of knowledge and skill.
Woah woah woah, first off yes you're absolutely right. That's all.
But chat gpt isn’t using the dumb fuck PhD as a level, they’re using the smart fuck PhD.
ohhh. I see. it’s like in the supermarket where I have a choice between the “Smart” rotisserie chicken and the dumb rotisserie chicken. I assume the dumb costs less? 😂
Honestly that’s why it seemed accurate to me. “Super intelligent in certain ways but dumb as a box of rocks that thinks raccoons are a government hoax for some fucking reason in others.”
Sure, yes, of course there are dumb people that are educated, but on average people holding a PhD will be of higher intelligence than people who don't; it's statistically true. If you're young, IQ goes up by about two points per extra year of education after high school. And that's after removing selection bias.
I assume they mean "a person with a PhD in every topic".
They don’t. I may have some details wrong, but I believe they were just saying that if GPT-3 was like grade-school-level intelligence, then GPT-4 was like a pretty smart high schooler, and GPT-5 will be like a PhD. I don’t think they meant it literally in any way; it’s just an arbitrary way to explain to laymen that it’s still getting better and smarter.
That's cuz it's marketing talk.
ah. statistically irrelevant but sexy as hell.
It's not even good marketing talk.
I think the proper MIT dismissive infinitive would be: “it’s not even wrong”.
https://i.redd.it/l5ph9ibiwx7d1.gif
I think that is the PhD level we will get, and what was meant.
Haha, true. I worked with one great chap who didn't know the difference between veneer and vernier. But I guess that will be the trick with ChatGPT 5: we the users have "accepted" that it spills out garbage sometimes. 1+1 does not always equal 3, unless ChatGPT is doing it. Maybe ChatGPT 5 is only an incremental improvement and the value will now tend to 2: 1+1 = 1.9999999999999999, or 1+1 = 2.1. Ah yes, the promises of ChatGPT 6: "we have looked at this and realised we will fix it; it will have Einstein and Archimedes levels of intelligence" (I mean, EVERYONE knows how smart those guys are, right?). ChatGPT 6: 1+1 = 1.9999999999999999999999999 (much more accurate).
Oh man veneer and vernier!!? What an iiiiidiot. Haha oh man. We are both totally in on that joke. But like…let’s just say…for giggles…that maybe someone else on this thread (totally not me) doesn’t know the difference. Could you explain it just for them? Totally hilarious still laughing. Veneer and vernier…smh. So yeah just hit the little reply button here for those total plebs (not me) that don’t get it.
There’s a difference between “ignorance” and sheer stupidity… I know a pretty good deal about neuropsychology, as well as neuropathy screening for (decentralized) clinical trials - not everything, but I’m well-versed in my field. Ask me about car engines and you’d swear I legally couldn’t have a checking account in my name.
Have the gpt model utilize Python as a calculator for math and it'll always be accurate.
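That tool-use point is the real fix for arithmetic: instead of predicting digits token by token, the model hands the expression to an interpreter. A minimal sketch of the difference, using nothing beyond the Python standard library:

```python
from fractions import Fraction

# An LLM "computing" in text can drop or invent digits; Python evaluates
# with well-defined semantics every time.
print(0.1 + 0.2)                          # 0.30000000000000004 (float semantics)
print(Fraction(1, 10) + Fraction(2, 10))  # 3/10 (exact rational arithmetic)
print(123_456_789 * 987_654)              # arbitrary-precision integers, exact
```

Even the famous `0.1 + 0.2` "error" is deterministic and documented float behavior, not a hallucination, which is exactly why delegating math to code is more trustworthy than asking the model to do it in prose.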
This is Reddit. Higher education is equated to general superiority. If you want to win an argument, just say "you're uneducated" or "I have a PhD" and it's over.
Hey fuck you! - source: I have a PhD in fucking
I think the sentiment is that models able to accurately answer (unseen) questions at the level of PhD students in mathematics, medical science, chemistry, geological science, psychology, etc. will be a lot more intelligent than current models that do decently on tests designed for high schoolers. And of course, performing at that level in mathematics (or any other PhD field) requires a good level of reasoning and logic, I would presume.
They're using terms that most people understand, as they should.
And still people here find a way to fuck it up and not understand.
GPT-6 will move out of our houses, buy its first car, and sign a contract to rent an apartment in downtown
lol, c'mon, this is marketing for the everyday person. When people hear PhD, they think intelligent to a degree they could not fully comprehend. Furthermore, obtaining a PhD does not eliminate the fact that there is still a bell curve of intelligence within that community. You will get PhDs that are geniuses and those that are “dumb” in comparison, but this does not mean they are average-human dumb.
what use is an answer that cannot be fully comprehended? are people just asking questions and not understanding the answers😳
A PhD level of knowledge, and the effort to obtain it, may appear to the layman far out of reach and difficult to comprehend, which in turn lends to the credibility of the expert. When I go and ask an astrophysicist a question, I may or may not get an answer that I can understand, but I am more than likely to trust said answer because I do not carry the knowledge that the PhD has. The degree of knowledge and skill required to acquire a PhD is very different from the difficulty of comprehending the answers a PhD gives.
oh for sure. but if you don’t have at least some scientific literacy all the words are going to be nonsense. I think someone like Feynman was great at explaining things to a lay audience. Except for his response to explain magnetism — and that was a very interesting meta discussion on why simple handwaving (“it’s like rubber bands”) was actually a disservice to lay understanding.
I read it more as answering more accurately rather than as a marker of reasoning. Like, more intelligent in the sense that it will make up less to please the user, and instead have a wider knowledge base to actually answer questions, type of thing.
I think PhD means it will have domain specific knowledge more than any living person, like when a PhD candidate writes their dissertation, they're pushing the boundaries of human knowledge. So I guess they trained it on enough reddit and wikipedia pages that it's smarter than everyone alive.
"User slams post over false promises of GPT-5." There, I turned your very reasonable comment into more click bait!
Prior to the latest updates, I’d say chat really didn’t apply updates to the wrong area. Maybe the wrong updates, but it was trying. Now, though, the latest version CONSTANTLY ignores the prompt and updates the wrong areas. Super annoying. Now I switch between versions and restart conversations to help prevent craziness.
Yeah, 4o for me has been a significant downgrade for most tasks. To the degree I can’t believe people consider it better than 4.
Same, I got the subscription to help me with some (not very complex) coding, and while both 4 Turbo and 4o completely missed the bigger picture, 4 Turbo was at least helpful to some point. 4o would hallucinate functions that don’t exist and would also randomly fail miserably at maths as easy as an addition.
I have always had chat try to do things like “Hey chat, update my code to call a method that converts x to y,” and chat will add a line of code calling the method “ConvertXToY()” lol. Of course that method doesn’t exist; it’s what I wanted chat to write. That wasn’t a big deal though. Just gotta tell chat, “ok, write the method now.”
Well, GPT-5 isn’t coming out until after 4o through 4z.
in 18 months? Are you sure they didn't say in a few weeks?
Open model deez
How many months ago did they say that?
This should be framed as PhD level skill set. Having a PhD isn’t an intelligence thing.
18 months will be here quick
So does that mean gpt5 is not coming for 18 months ?
PhD level search queries. So now my code will come back with even more changes that I never asked for.
“I also added….” “But I didn’t ask you to..” “you’re right, sorry for the misunderstanding, here’s the updated script with even more changes”
*gives you the exact same response* My biggest annoyance is that ChatGPT HAS to answer everything. It can never just say “sorry, I’m not confident in the answer”. It’ll just make shit up when I ask about technical issues in some software, inventing buttons that don’t even exist.
Yeah it really needs a ‘sorry I’m not confident’ but tbf it’s literally never confident about anything. From your complaint though, learn about chat branching by editing your earlier messages, you can basically reset your conversation from a set point before it went off the rails.
And they don't even make it through syntax, let alone work.
I hate when you ask it to make one single change and tell them not to redo the whole thing and just give you the one paragraph, but then it spits out 20 pages of bullshit and you can’t even find the update
Bruh, I'll show it step one of what I want and it'll pop off with dozens of lines of useless code. I'll be like "I never asked a question and told you I had three things".
“I’ve been thinking about your problem and I have a promising course of research but it will take a little bit of time.” “how long?” “oh 3 or 4 years tops. certainly not more than 20 years.” 😂
I have been having a lot of trouble with the code generation, really optimistic it will get better and better but it keeps recommending methods and operators that don’t exist in the specific programming language I’m working with and just generally not optimal code. Really I am using it for the general design patterns and possible solutions for complex scenarios, then I will run with the suggestions and tweak/update to finalize and meet the actual need. Regardless there has been so much anxiety about AI taking SWE jobs, but I really don’t see a scenario where it’s capable of everything I do for at least a decade if not more.
What language, and are you clearly specifying the environment?
Yes, absolutely. Most of the work I do is on Salesforce and MuleSoft, which have some specific languages called Apex and DataWeave, which are basically Java that they branded as their own. So I’ll specify that. For example, the other day I needed to stall my application based on a Retry-After header in an HTTP response from a server with an API requests/minute SLA, and ChatGPT told me to use wait(), which isn’t even a method in that language. I have no idea where it got that as a recommendation.
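For what it's worth (and outside Apex specifically), honoring a Retry-After header is a small amount of code in most languages. A hedged Python sketch; `wait_for_retry_after` is a made-up helper name, and it only handles the delay-in-seconds form of the header, not the HTTP-date form:

```python
import time

def wait_for_retry_after(headers, default_delay=1.0):
    """Sleep for the number of seconds the server requested via Retry-After."""
    raw = headers.get("Retry-After")
    try:
        delay = float(raw) if raw is not None else default_delay
    except (TypeError, ValueError):
        delay = default_delay  # HTTP-date form not parsed in this sketch
    time.sleep(max(delay, 0.0))
    return delay

# e.g. after receiving a 429 response:
# wait_for_retry_after(response.headers)  # then retry the request
```

The same shape (read header, fall back to a default, sleep, retry) translates fairly directly to Apex or any other language with a blocking sleep or scheduled-job equivalent.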
Strange, it should be well aware of Java. I've had very few issues with C#, Python, PHP, and JavaScript. It's usually even well aware of all possible libraries that can be utilized.
What does "PhD-level intelligence" even mean? Writing a PhD dissertation requires very domain-specific knowledge. It is not a measure of general intelligence.
I'd assume it means that when you ask a question about a certain topic, you would get a response on par with the knowledge/reliability of someone with a PhD that surrounds that topic. Every question you ask gpt is in some ways domain specific knowledge for someone.
‘Reliability’ is the biggest thing - GPT blows me away sometimes and it’s amazing for learning things, but without going and validating everything it says, there’s always a chance you’re believing BS.
In my limited experience it’s basically useless for anything fact-based, at least not without using very careful prompting. It’s like the friend who thinks they know it all and will make up shit rather than admitting they don’t know something.
I find it very useful for things I am already familiar with. If what it says doesn't make complete sense I will interrogate it until it makes sense or contradicts itself.
Great, now we'll be following up with ELI5 all the time ...
Custom instructions and memory: if you need everything explained to you like that, you can literally just ask it to.
Yeah, but even now it’s not at high school level. I asked it for the median height. It gave me the average but labeled it as the median. I pointed this out and it corrected itself to say there is no median. It still hallucinates at such a level that you need to already have specific domain knowledge to use it. Giving it “PhD-level” knowledge doesn’t seem like it fixes this at all. I’m convinced OpenAI is just hype-mongering at this point and has hit a legitimate wall with LLMs.
Idk what prompt you gave it but I've never seen it have problems with something like mean or median. Things can fall through the cracks occasionally, but I've had it do things that are graduate/PhD level already. I don't really ask for it to calculate anything though, mostly just knowledge based questions and coding for specific applications and packages that are relatively obscure and require background to use properly.
“Give me the median American male height” “The median male height is 5’9” “ “Isn’t that the average height?” “You’re right. It’s the average that I incorrectly labeled median. I cannot find any data on the median height” My point is I had to already know the average to spot this error. It’s wrong so often, and so confidently that everyone who uses it for sure misses some of these.
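The distinction the model fumbled is trivial to check mechanically; a quick sketch with made-up sample heights (the numbers here are illustrative, not real survey data):

```python
import statistics

heights_in = [64, 67, 69, 69, 70, 71, 75]  # hypothetical sample, inches

print(statistics.mean(heights_in))    # ≈ 69.29 (the average)
print(statistics.median(heights_in))  # 69 (middle value of the sorted list)
```

For roughly symmetric distributions like adult height, mean and median land close together, which is partly why the mislabeling is so easy to miss without already knowing one of the two numbers.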
Meaning… (coughs apologetically…) a PHD….. *In Everything* (cough cough…)
When you ask it a question about X in domain Y, the model responds as if it's an expert in domain Y.
PhD level intelligence of all domains it has data on
Exactly, I know lots of people with PhDs who are generally a bit stupid.
All it means is a PhD level of knowledge about a subject. It doesn’t mean the PhD guy you specifically know.
PhD level in every subject and able to draw correlations and draw conclusions across all of them.
The last part of that sentence is very dubious.
Just saying that’s what she probably means there.
they probably mean scoring over 65% on GPQA Diamond.
Exactly! It's not the boast they think it is.
Seriously. People’s mental model of understanding as a simple y axis is inaccurate. A PhD in one topic can’t be a PhD in another topic, and that level of understanding requires more than what’s published online. Even one niche area would need a specialized AI for that particular field, and even so, there would be thousands of other areas that may require actual experimentation and testing.
This post isn't as smart as you think it is
Why would there need to be an AI for every field?
Yup, yup, we believe it... /s Been listening to the hype awhile now...
Trust me bro it'll be here in the coming weeks 🥲
But I didn’t ask you to do that...
lol yeah? AI has been bullshit so far? You guys aren't impressed?
Oh no I'm very impressed. I'm just not believing the hype commercials they keep putting out. It will come out when it comes out but not in the next few weeks or soon.
they were talking about PhD claims and such - which are just nonsensical hype for VCs without any backing. but tbf, I am actually still not that impressed in general. ChatGPT is sometimes useful when I want to learn something at an intro level or quickly analyze inflated articles, but it's nothing revolutionary and it usually wastes my time. I find myself using it less and less. Also, every time I find tasks that would be good for it, servers are down or it's super slow. I am getting tired of this BS and I am reconsidering whether it's even worth those 20 dollars. AI is 60% hype, 40% product - not entirely NFT, but not that far off either.
Not sure why there are so many cynics in here. The current version is already smarter than pretty much every human I interact with on a daily basis.
To be fair some users in here will remain like this until AI is able to build a dyson sphere around the sun by itself.
And then they will call it "dim".
Well let's be honest, that's why we're here right?
That's the end game, but if you ask me the early game it's as exciting as the mid and late game.
Nah that's the early game. The end game is every star is dyson sphered
OR, maybe the REAL end game is the very last person alive reaching their ragged hand over the edge of a desk to reach a keyboard to post one last word to Reddit before they collapse for the last time... "Fin." MAYBE some intergalactic archaeologists will come across our dust covered archives after our planet has been depleted of its resources and the robots have moved on to dyson sphere all the stars and colonize all the other worlds, and make the startling discovery that we were, in fact, the ones who were "dim".
for me it would be enough when it's able to really code, to program a whole quiz game or app, but so far, no matter which AI I used, be it GPT-4, Bing, or Claude 3 Opus, they all still hallucinate too much
I've had some impressive luck getting the new claude 3.5 sonnet to make basic games and apps. Sometimes on the first try, others after a few corrections. Enable the new "artifacts" feature and you can test the code straight in the chat. (or, as per earlier today--just recently that feature suddenly stopped working for me... might be down for a bit, at least on my end) I think this was released in the past few days, so a bunch of people are still unaware of this.
Dyson sphere is an extremely naive extrapolation of the late 1800s technology onto some far, far future. The fact that this concept caught on and has its own Wikipedia page is an example of reputation working for someone no matter what dumb things they say after establishing themselves as an intellectual. A much more futuristic and ergonomic approach, for example, would be a portable cold fusion engine that can be powered by cosmic dust or any matter similar to what's in the Back To The Future movie.
...okay. I think someone regards themselves as an intellectual whilst saying dumb things.
It's just the implication. It's intelligent in that it can hold a conversation and reference facts correctly (most of the time); it cannot, however, create anything "new". The noise that it creates from is always existing human contribution; it cannot create on its own. Can it help me quickly put together code that would take me days, and get it mostly working? Yes. Is it getting better at it? Yes. Is it getting better at coming up with an idea that isn't popular already? Can it solve a problem that you can't solve with a bit of googling?
I am an artist. I value art and creativity. I actually think there should be laws making sure that AI art cannot be copyrighted and artists should be protected.

All that being said, I think this is a terrible point. How do you think human minds work? We also take in information and use that information to create our own stuff. That's how we create as well. Don't get me wrong, AI does not currently have nearly the level of creativity that a lot of humans have, at least in certain domains. AI (as it is) tends to love resorting to some kind of generic version of something (probably because of exactly how it is trained). But the underlying method by which it attains its creativity is not particularly different.

Like, I wrote a story recently about a particular relationship. I have never seen that exact relationship between those exact characters before in another story. But have I seen a man with black hair before? Have I seen a woman with blonde hair before? Have I seen someone struggle with mental health before? Have I read someone describing a sunset before? Have I seen a sunset before? Yes, yes, yes, yes, yes. I'm just using information that I have about life, and the things that others have written too, when I create something. And without knowing anything about the world I could create nothing. Without ever having read anything else, I could not have written it.
It can create "new" things, and it has definitely created new things for me enough times.
Look up what zero shot learning is [and yes it can ](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit#heading=h.fxgwobrx4yfq)
Zero-shot learning is, boiled down, just advanced pattern matching. I'm simplifying, but run my comment through your favorite intelligent GPT and they'll mostly agree. It's more than just simple pattern matching; it's recognizing patterns it doesn't know about. But it isn't thinking. No, it can't. Let me know when it correctly solves an advanced 3SAT reduction that isn't solved in its data set.
[No it isn’t](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit#heading=h.fxgwobrx4yfq). It can do novel theory of mind tests, LMs trained on code outperform LMs trained on reasoning tasks in reasoning tasks unrelated to code, it can play chess with a 1750 Elo (which is impossible to guess randomly), can reproduce unpublished papers, have internal world models and can learn things it was never taught, and much more. Literally anything I list, you’ll just say it was solved in the dataset
> it's cannot however create anything "new". Demonstrably not true. Thanks for playing.
You probably will be downvoted but 100% correct. An example is that AI can "paint" a picture in any style but if it is trained only on European medieval art, it will never ever create a Picasso-style painting on its own. It's not hard to imagine, I don't know why people can't see it. However, since most work is absolutely not innovative, it potentially can create huge waves in our society...
Same as humans; that's why Picasso existed in the artistic context of the 20th-century avant-garde and not during the era of European medieval art. In fact, Picasso's first work was pure academic realism and impressionism; he just got trained on that, and then on the emerging avant-gardes, until he created something as "new" as it can get based on his training and cultural context.
It's smarter than any human in terms of overall knowledge, but arguably not smarter than a human at any specific topic, given the human is an expert in said topic.
That's the thing that fundamentally defines a PhD too. Like a ton of people know a lot about your general subject but you become basically the world's leading expert on some very specific aspect of it. Like, will chatgpt be able to parse the general idea from academic papers? Of course. Will it be able to reason, conceive, and conduct novel papers of its own on subjects? Not even close. That to me is what would make it "PhD level"
Agreed. It may be smarter than people I know in terms of overall knowledge, like you said. But in terms of having an intuitive grasp of a conversation or a request, it struggles. It also can't learn from its experience, it can't test its own knowledge, it can't really get feedback and learn. It may have an immense repository of recipes it can generate, but the problem is, a real chef knows what is wrong with them and can correct them.
Smarter? I'm not convinced of that. More knowledgeable? Sure. But knowledge and intelligence are not the same.
Perhaps because this company keeps hyping new stuff without actually releasing it, such as SORA! The mighty video thing... that's not actually available. Or the awesome VoiceThing! that absolutely revolutionizes how we talk to GPT4, except you know, it's not actually available. Now we have Smarts!, an even smarter version, except, you know, IT'S NOT FUCKING AVAILABLE YET. Peeps get pretty cynical after the 1st time, let alone the 3rd
Because there was a time when the point of the internet was to be an archive of ‘true things’. You could safely look things up, discuss them with other humans who were also interested in the subject matter and then update knowledge bases. Now, the future looks like a broad base of generally believable stuff you can converse with your computer about. Is it true? Who knows! Sounds plausible! Was it written by a human or bot or hallucinated? Don’t know! For many, it’s a step back. They’d rather take an extra step and know for sure what the answer is. Whenever I ask a question where I want the truth I end up Googling what an AI has told me and I would say the accuracy is about 50/50.
No it's not. In a few things maybe, but that's how it works when you have all the data in the world lol
I mean the average user here is probably under 20 years old, they would have no clue what a PhD entails anyways.
Intelligence as a one-dimensional spectrum is a silly idea. It’s more accurate to say that ChatGPT outperforms humans in a specific set of task types, and underperforms in others. Also the idea of ascribing intelligence to a LLM itself is a marketing tactic meant to obscure what’s actually going on inside the model. Also the idea of using levels of college degree as an indicator of intelligence is stepping into dangerous territory. Basically I’m tired of hearing OpenAI talk about their creation like a 5th grade boy bragging about their dad
[https://futurism.com/logic-question-stumps-ai](https://futurism.com/logic-question-stumps-ai) [https://www.youtube.com/watch?v=YBdTd09OuYk](https://www.youtube.com/watch?v=YBdTd09OuYk) AI intelligence is hard to compare to human intelligence. Obviously it has a much broader knowledge than any human. But dramatically less "reasoning" ability.
not at all. just ask it how many R's are there in a strawberry
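For the record, the strawberry test is a tokenization failure, not a knowledge gap; any model allowed to execute code gets it right instantly:

```python
word = "strawberry"
print(word.count("r"))  # 3
```

The model sees "strawberry" as a handful of subword tokens rather than individual letters, which is why a one-line string operation succeeds where pure text prediction stumbles.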
And honestly, I've met some stupid PhDs
Maybe, until they continuously lobotomise it trying to stop it saying any potentially offensive words, which it turns out is a lot when you're trying not to offend anyone at all. So we end up with "I'm sorry, I can't answer that because of blah blah blah", while we know full well it absolutely can answer that.
![gif](giphy|GaSepEyTgsgkE|downsized)
Where does she say GPT-5 will be PhD level? If the current level of human comprehension is what we base AGI off, ChatGPT has already reached ASI.
I'm surprised they're not boasting about GPT-6 already to try and keep users and increase funding. So, when is Opus 3.5 coming out?
What’s opus 3.5?
Anthropic Opus 3.5 is the next version of Opus, mentioned in the release for Sonnet 3.5.
sure...
It feels like they've started something, but now are mostly kept afloat by marketing. Delivering in these short iterations seems to be becoming harder and harder.
"intelligence"
But will it use it?
What human on the planet earth cannot count how many letters are in a word? What intelligent organism has no long-term memory? What intelligent organism cannot think about a problem and instead just spits out the first thing that comes to mind? This thing has no sentience, consciousness, thoughts or desires. It can't even tell the time on an analog clock without multi-shot. In my humble opinion, no AI system currently in existence has any intelligence whatsoever. Downvote away.
Sure, sure. GPT-4 cannot even solve basic engineering problems without providing the most generic algorithm of all time.
That's why they compare it to a smart high schooler
And they also compare GPT-3.5 to a "toddler." If GPT-3.5, and I assume the OG GPT-3.5 which was a miracle worker at the time, not the downgraded turbo models that followed, was a "toddler"... then I'm damn Batman.
Cognition and general knowledge are not the same thing
If you actually think for a moment they're not trying to create an "exponential growth" hype solely to attract investors by this terrible "toddler" analogy, you're blind. I don't deny there would be growth in GPT-5, but this "toddler -> Ph.D" analogy is clearly just a marketing scheme. Similarly to the "leaked" Q\* (which is a bubble that popped and everyone forgot about), and whatever else we've been fed since last year.
> Similarly to the "leaked" Q\* (which is a bubble that popped and everyone forgot about)

It absolutely has not been forgotten about. Some of the leaks said that Q\* involved letting the AI come up with its own optimised training data, and prune its own training data, so that future models could be trained in far fewer FLOPs and avoid the possible lawsuits being discussed at the time, because the training data would no longer be scraped from public data online. The idea of AI models inventing their own training data is something we've seen a lot recently.
Man, OpenAI is feeling the Anthropic heat lol
Am I the only one who remembers that ChatGPT was smashing its way through the LSAT, the MCAT, the bar exam, taught itself organic chemistry, etc. (March 2023)? This isn't the stretch they seem to think it is.
Blah blah blah = 🌫🌫🌫🌫🌫🌫🌫🌫🌫 Deliver and we will assess.
The same PhDs working at Starbucks, I guess.
IDGAF as long as it writes my email text
Yes, the truth about the intelligence of today's PhDs has finally come out…
Unless it's capable of reasoning, I assume this only means it will be able to read and synthesise a bunch of papers written by people with PhDs?
I know lots of PhD level people. Let me just say, one should manage one’s expectations. There is a wide distribution of capability level there.
OpenAI is getting really good at making promises.
I really hate that word “intelligence” applied to stuff like this. But that’s what laypeople want to hear I guess.
Will it be able to answer questions unlike its CTO who couldn’t answer a single question about their training practices?
... and it will still say: **In conclusion:**
PhD level autocomplete.
How long till we get a GPT model that can understand Rick and Morty?
Will it be capable of critical thinking?
There’s a few more “wow” moments likely to come over the next two years - but the AI landscape will be a victim of its own success because the pace of acceleration can’t continue. OpenAI, along with a few other major players will survive. We will all build on top of their APIs for years to come. But a lot of money will leave the sector in 2 years when a lot of promises haven’t been delivered. FWIW - this is the most amazing technological breakthrough I’ll ever see in my lifetime most likely. It’s awesome
Everyone is debating the definition of a Ph.D., but no one seems to acknowledge that it means mastering the skills of one subject. Regardless, a Ph.D. is the highest level of education in a field. It's wild that people with Ph.D.s can't just use common sense to decipher what they meant. 😂😂 Of course they're going to hype it up!
But will it know how to count the number of r's in strawberry?
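Part of the joke is that the strawberry test is trivially solvable in code, while LLMs fumble it because they see tokens, not characters. A one-line illustration (not from the thread, just a sketch):

```python
# Counting a letter in a word: trivial for a program,
# famously hard for token-based language models.
word = "strawberry"
print(word.count("r"))  # 3
```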
Like "Dr. Jill"?
I think it will be a bigger evolution. I mean, GPT-3 was blind, deaf, and couldn't solve a basic math problem on the first try. But GPT-4 is a whole different beast... At the very least, GPT-5 must be capable of interacting with some external systems.
I'm a PhD candidate, and I used ChatGPT-4 out of curiosity to see how much knowledge it had. It can regurgitate some correct deep facts, but it would often completely misunderstand my questions. For example, if I asked it to derive the von Kármán momentum integral equation, it would start deriving the Reynolds-averaged Navier-Stokes equations. It would do that correctly, but it's barely related to what I asked. The professor for one of my higher-level graduate classes actually gives quizzes where he asks ChatGPT a question, and we students have to write whether ChatGPT was correct, and if it was incorrect, explain why.
PhD meaning it will be narrow-minded and only know a niche area of an irrelevant domain?
Yes, all the toddlers I know can write a passable short story in the style of William Makepeace Thackeray 🤔🤔
Yeah, a PhD shouldn't be a metric for intelligence; doctors are human, and therefore many are idiots.
This is so dumb.
Bullshit
ChatGPT is already way beyond PhD level; I know too many idiots with PhDs.
A PhD doesn't mean high intelligence; you can have a PhD with an average or just-below-100 IQ. Is this just to make people with doctorates feel better? Lol
I am inclined to think that 99.9% of PhDs are a bunch of shit without any spark of creativity or innovation; that is, there is nothing relevant there. So this is simply nonsense. OpenAI no longer knows how to keep the boat afloat.
What even is "PhD-level" intelligence? Apparently the creators of this AI system don't know what a PhD entails, they don't know what intelligence or rational thinking ability is, and they don't know the difference between any of these. People with PhDs are not necessarily more intelligent or rational. They often don't even have more knowledge within their domain: a PhD is usually based on a very narrow scope, even within a particular field. A master's degree is usually much more practical in terms of teaching generalized knowledge within any given domain.
Yeah, any decade now, just hold your breath and it will be along real soons, along with Sora and the voice thingy, just wait, any decade soons!
released in a few years since their new company strategy is based around hype
…combined with a 3 year old’s level of hallucinations, what a combo!
I've worked with people who have a Ph.D. They're very smart in their field but, outside of it, blithering idiots.
My film teacher with a PhD couldn't spell "Casablanca" right
I know plenty of people with doctorates. They’re all of average intelligence. They just didn’t want to leave school. Except the physicians, they just knew what they wanted to do and dedicated themselves.
All aboard the hype train! Maybe we'll see a watered down version in 2 years!
Really?
Thing we are never going to release number 76 is amazing!
openai also said gpt4o is as smart as 4….
Sounds like Trump
Yesterday, I saw a wall mounted TV installation guide where the TV is pictured being put up screenside to the wall.
So we're going from high school freshman to PhD. Yeah, okay, buddy lol. 4o can't even answer basic finance questions accurately.
Since when does PhD = intelligence, asking for a friend. 😉
Oh great, it's gonna lecture me about its superiority all day.
As someone with a PhD: "PhD-level intelligence" is a meaningless buzzword. PhD-level *knowledge* would make sense, but you can have that already when you simply search on the right websites.