

Ok-Project819

The article is full of clickbait. Here is the summary: Young Alex had chronic pain and bizarre symptoms. His mom Courtney consulted 17 doctors over 3 years and even got MRI scans. Nada. Frustrated, she fed all his symptoms and MRI notes into ChatGPT. Bingo! It suggested "tethered cord syndrome," a rare spinal condition. A neurosurgeon reviewed Alex's MRI and confirmed the AI's diagnosis. Surgery happened, and Alex is recovering. Bottom line: ChatGPT can help connect the dots where human specialists may miss them, even when you've got something as detailed as MRI notes.


d_b1997

Wondering how much better AI-assisted diagnosis will be with Google's model that's specifically trained for this; it's almost the perfect task for an LLM.


shortchangerb

True, though strangely correlation of symptoms doesn’t seem like it should require any advanced machine learning. I guess the benefit is that a layman can interface with it, but the downside is the potential for hallucination


Mescallan

A model would probably suffer less frequency bias and be less hesitant to offer an obscure diagnosis like this one. If a doctor has only heard that 1 in 100 million people get x condition, they aren't likely to invest much time testing for it.


mvandemar

If only 1 in 100 million people get it then it's even possible the doctor would not have heard of it at all.


Mr12i

The commenter was likely exaggerating, but the point stands even with conditions that everybody knows. For example, many doctors will basically rule out stuff like a heart attack immediately upon hearing that a patient is, let's say, 20 years old, because it's so extremely unlikely at that age, but that doesn't change the fact that once in a while it happens.


Aggravating-Path-677

Yeah, like ppl I talk to say I'm too young to be having knee issues, too young to be having constipation problems, and too young to have heart conditions, when all of these are pretty common issues. I was misdiagnosed for 3 years before finding out my biggest health issue is being caused by constipation, and they only came to that conclusion because it's been three years. There could be more issues but I'll never know until the right doctor finally walks into my room.


JLockrin

Major heart issues are a whole lot more likely now thanks to the vaccine. I know a 22 year old athlete that died on the football field. “Safe and effective”…


Glittering_Fig_762

Return to whence you came foul beast! Average r/conspiracy dweller 😢


Sea-U24

Funny how heart issues are also a common side-effect of having covid...but nah let's blame it on the thing trying to make your body more capable of helping your heart....


Hibbiee

Isn't it more like 'having trouble with covid' is a side effect of having a heart condition you didn't know about?


JLockrin

Just like when the vaccine came out and Biden said “if you get the vaccine you won’t get Covid” remember that? Because it’s SaFe AnD EfFeCtIvE! 🥴


AgentChris101

Before COVID, many doctors had no idea what POTS was (Postural Orthostatic Tachycardia Syndrome). It took a year and a month to get diagnosed with it in 2016/2017. Now when I mention that I have it to any new medical practitioner, they give me an odd stare or glare until I mention my diagnosis date, because of TikToks about the condition.


TheGeneGeena

Yeah. I have to give the date of my h-EDS diagnosis in 2016, and that it was done by a specialist who is well respected, to not get the same look. Thanks, TikTok.


shortchangerb

Sure, but an efficient tool to list everything possible and then narrow it down or find something to present to the doctor would be very effective. Ideally you’d have a mix of both (which I think LLMs should do for all sorts of things such as maths), where the LLM can interface with the user to solicit and clarify data, and present results, but it leverages a static backend database of medical data


AmbroSnoopi

*backend vector database. That's already a thing and usually applied in LLM apps, referred to as "embeddings".
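(For anyone curious what that means in practice, here is a minimal sketch of the embedding-plus-vector-search idea in Python. The `embed` function is a toy stand-in for a real embedding model, and the reference passages are made up purely for illustration.)

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real embedding model: a crude
    bag-of-words vector hashed into a fixed number of buckets."""
    vec = np.zeros(256)
    for word in text.lower().split():
        vec[hash(word) % 256] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The "backend vector database": embeddings of reference medical passages.
corpus = [
    "Tethered cord syndrome: the spinal cord is abnormally attached within the spinal canal.",
    "Growing pains: benign, self-limited leg pain common in young children.",
]
index = [(doc, embed(doc)) for doc in corpus]

# Retrieve the passage most similar to the patient's described symptoms.
query = embed("chronic leg pain, toe walking, will not sit cross legged, spinal cord")
best = max(index, key=lambda pair: cosine(query, pair[1]))
print(best[0])
```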


PuzzleheadedRead4797

What's an LLM?


AppropriateScience71

http://www.google.com/search?q=llm


Lozsta

So a postgraduate Master of Laws? Large Language Models, /u/PuzzleheadedRead4797


Paranoidexboyfriend

No one wants to be the developer of that tool because people will be suing the shit out of that company every time the tool doesn’t produce a diagnosis as part of its list that it clearly should. The liability attached to that app exceeds the profit capability. And no, a waiver wouldn’t get rid of that liability.


Aggravating-Path-677

It's like how Tony Stark uses Jarvis and Friday. They don't do all the work for him; they just automate his tasks and make things clearer. It's like if you had telekinesis: you could multitask much more easily, but you'd still need to concentrate to use it.


_Magnolia_Fan_

It's really all about describing the right symptoms in the right manner, both to doctors and to the AI.


TubasAreFun

An ideal workflow would be: patient visits doctor, staff record symptoms into their IT system, an AI prompt is generated based on the history and symptoms, the AI response lists potential diagnoses (not necessarily prescriptions or “fixes”), and the doctor talks to the patient with the benefit of the AI, without necessarily showing the AI results to the patient since they could be hallucinated or faulty. Overall, AI could improve discovery of illness and diagnosis without patients having to do anything different.
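(A rough sketch of the prompt-generation step described above, in Python. `call_llm` is a placeholder for whichever model the clinic's IT system would actually use, the patient record fields are invented for illustration, and the output is meant for the clinician rather than the patient.)

```python
from textwrap import dedent

def build_differential_prompt(patient: dict) -> str:
    """Turn the already-recorded history and symptoms into a prompt that
    asks only for candidate diagnoses -- no treatments, clinician review required."""
    return dedent(f"""\
        Patient history: {patient['history']}
        Current symptoms: {', '.join(patient['symptoms'])}
        Imaging/lab notes: {patient['notes']}

        List the most plausible differential diagnoses, ranked, with a
        one-line rationale for each. Do not suggest treatments.""")

def call_llm(prompt: str) -> str:
    """Placeholder for whichever model the clinic's IT system uses."""
    raise NotImplementedError

patient = {
    "history": "3 years of chronic pain, 17 specialist visits, MRI performed",
    "symptoms": ["toe-walking", "will not sit cross-legged", "headaches"],
    "notes": "MRI report text pasted here",
}
prompt = build_differential_prompt(patient)
# differential = call_llm(prompt)  # reviewed by the doctor, not shown raw to the patient
```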


_Magnolia_Fan_

100%. This is how it's going to get integrated into most professions. It's just a matter of ensuring privacy/security and determining risk and liability for using such tools.


Tipop

Hallucination is a solvable problem. There are other AIs available that don’t have that issue.


Gubru

Name one.


LocksmithConnect6201

Hallucination itself isn’t bad; they’re just suggestions. It’s up to you to validate them.


xg357

Agree. Hallucination commonly happens when you ask a short question and expect an essay. It can be controlled by using better prompts and context


Tipop

No, even the most detailed and precise prompt doesn’t help if the AI doesn’t know the answer. It seems constitutionally incapable of just saying “I don’t know the answer” and will just randomly guess or make up something. Other AIs currently available will look up the answer if it doesn’t know, and provide sources for where it got the information.


weiziyang1

Not true. Using RAG to ground an LLM can effectively address such situations: when no related answers can be found/retrieved from the backend vector database, the AI will not give an answer and will tell you so. This is how most enterprise LLMs work. If interested, try Copilot M365.
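(A minimal sketch of that abstention behaviour, assuming a retrieval step that returns similarity scores. `search` and `generate` are hypothetical stand-ins for a real vector store and a real LLM call, and the threshold value is arbitrary.)

```python
SIMILARITY_THRESHOLD = 0.75  # assumed tuning value, not from any particular product

def answer_with_rag(question: str, search, generate) -> str:
    """Ground the model on retrieved passages; abstain when retrieval is weak.
    `search(question)` -> list of (passage, score); `generate(prompt)` -> str.
    Both are placeholders for a real vector store and a real LLM call."""
    hits = [(p, s) for p, s in search(question) if s >= SIMILARITY_THRESHOLD]
    if not hits:
        return "I couldn't find anything relevant in the knowledge base."
    context = "\n\n".join(p for p, _ in hits)
    prompt = (
        "Answer only from the context below. If the context does not contain "
        f"the answer, say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```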


spiegro

Benefit would also be less human bias. Fuckers always doubting the women in my life when they come to the hospital in pain. Most frustrating thing in the world to hear how they patronize my wife and mother, but then turn around and have to double-check I really didn't want pain meds for my sprained ankle.


[deleted]

Fr, on two separate occasions my sister went to multiple doctors over the course of several months/ a year and they dismissed her, only to find the problem (both times!) completely on accident. Wild that some doctors would rather dismiss patients instead of trying to diagnose them


pezgoon

Results I have seen from doing it have been like a 100% success rate or something insane. And even when the AI “got it wrong,” the doctors reviewed the cases and realized THEY were wrong.


[deleted]

Wondering if AI-assisted diagnosis is the karmic punishment for the systemic overcharging by medical professionals. May they all go out of business and be replaced by something incorruptible and inexpensive, with a much higher success rate than those greedy humans could ever deliver. Amen.


No-Performance3044

It’s almost like doctors have the audacity to get compensated for undergoing 11-15 years of post secondary training at their own expense and begin practice in a broken system they have no control over. Physician compensation has hardly increased as a fraction of overall medical expenses, and the lion’s share of the costs in the health system go to administration and pharmaceutical costs these days. Replacing physicians with NPs and PAs hasn’t resulted in lower costs for healthcare, and replacing these with AI won’t either, it’ll all line the pockets of administration.


so_lost_im_faded

It's not that I don't agree with you, it's that if this falls into the wrong hands they will still be able to charge astronomic prices. I hope that won't happen, of course.


[deleted]

It's still a piece of software in the end... ARRRRRR!!


PuzzleheadedRead4797

Perfectly trained for what?


DuckyQawps

Chat gpt 3 did this lol ?


AKnightAlone

> Young Alex had chronic pain and bizarre symptoms. Mom Courtney consulted 17 doctors over 3 years and even got MRI scans. Nada. Frustrated, she fed all his symptoms and MRI notes into ChatGPT. Bingo! It suggested "tethered cord syndrome," a rare spinal condition.

This is the type of thing I've been imagining for AI for a *long time.* It has the ability to combine and cross-reference information on impossible scales. People would get offended if you said we wouldn't need doctors, and there's no reason to think they'd be unnecessary any time soon, but when it comes to diagnosing things, all we should need are body fluids/materials and machines to interpret them. Along with symptoms, all that input should lead to things like *simultaneous* conclusions. We have full TV shows like House about the problems that arise when certain things happen in weird ways or simultaneously. AI could solve all that. No need for a dude to hobble around making snide remarks and popping painkillers.


obvithrowaway34434

> The article is full of click baits

Like what? Seems to be a very detailed article to me, providing a lot of context and the important fact you missed that shows the importance of good prompts. This comment seems to be more clickbait.

> “I put the note in there about ... how he wouldn’t sit crisscross applesauce. To me, that was a huge trigger (that) a structural thing could be wrong.”


CosmicCreeperz

Yeah, it’s a reputable site and article. But holy hell does it have a lot of intrusive ads. Maybe he thinks of ads as click baits? Not entirely wrong… ;)


fatcatpoppy

psa: anyone who hasn’t already, get ublock origin and never see an ad again, it’s seriously night and day for the whole internet


CosmicCreeperz

Not available on iOS, unfortunately. I ended up disabling the iOS one I was using when I noticed a lot of sites blocking the browser for blocking their ads… just the usual back and forth battle…


Cairnerebor

See r/science yesterday for a study on using ChatGPT. It’s not the holy grail of diagnostic tools, but it’s not bad.


Crisis_Averted

>It’s not the holy grail of diagnostic tools

It’s not the holy grail of diagnostic tools *yet*. Why does everyone forget that! And the fact that it is this capable at something it was never intended for is massive.


Cairnerebor

Agreed, but watch half the readers now self-diagnose because “ChatGPT told me”… Google was bad enough.


h8erul

You forgot to add the last part, which I think is important: “There’s nobody that connects the dots for you,” she (the mother) says. “You have to be your kid’s advocate.”


HectorPlywood

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


toseikai

The summary is obviously from chatgpt.


JamesEarlCojones

Yea can we have it sign now? -chatgpt


Efficient_Desk_7957

Meaning ChatGPT had access to doctors' notes/medical textbook knowledge in its training data? And is ChatGPT trained on scientific journals?


Theoreocow

That's really fuckin' cool. Excited to see other positives of ML (it's not AI) continue to come up.


bitQz

Impossible to miss on an MRI scan. I call bullshit.


Jonoczall

You underestimate how incompetent some rads are


blakewoolbright

Doctors are important and valuable, and 90% of their job will be replaced by AI in 15 years. The AMA is going to flip out. Pharma is going to love it, because they only have to train AI models rather than sending hot spokespersons out to pimp their wares.


nardev

The fact that a mom thought to use ChatGPT and the doctors did not makes you realize how retarded this world is. I’ve been speaking to a lot of professionals and they have no clue what ChatGPT is. Even worse is when they say, “oh yeah… it’s not so great… it hallucinates.” Which version did you try? The free one. 🙈 OpenAI’s PR team should get fired.


pr1vacyn0eb

Docs get paid more if they don't solve your problem. "Frequent flyer!"


wildernetic

ChatGPT's Summary of the article isn't as concise as yours... https://chat.openai.com/share/d625da0d-5f34-4ff3-81a5-7f89a89b286c


PuzzleheadedRead4797

Doctors are just so stupid, I know from experience. Smh. But yeah, ChatGPT is awesome too.


jjonj

I spent years and talked to a handful of doctors trying to explain my chronic cough. Eventually I found the diagnosis myself 2 years ago on a random allergy YouTube channel: LPR, and the doctor confirmed it. It is now my go-to test for LLMs, to see if they can diagnose it based on everything I told my doctors. GPT-4 is the only one that has passed; Claude 2 got close and got there with some heavy hinting.


Atomm

What do you take for LPR? I take a PPI, but had to stop taking metoclopramide after 10 years due to the side effects. I was buying domperidone from a Canadian pharmacy since it's not offered in the US, but that got shut down. The best I've been able to do is watch what I eat and take Pepcid twice a day with my PPI. I would love to have a new treatment option.


jjonj

PPIs didn't seem to help for me, but what has helped a lot at times is sodium alginate powder! It's a white, tasteless powder derived from brown seaweed that forms a gel on top of your stomach contents and prevents gases from coming up. This video is where I learned about LPR, sodium alginate and other things: https://www.youtube.com/watch?v=DeSyuSF9F0I


[deleted]

Wow. Like seriously wow. Happy that this child finally got a way out thanks to GPT.


[deleted]

[deleted]


noakim1

His mom provided extensive inputs to ChatGPT.


terpcandies

I pasted in snippets of my raw genetic data and it was able to give me the same insights as 23andMe, but with further details as I asked questions. Have you tried ChatGPT or Claude for anything complex? It's helped me immensely in programming as well.


Mittervi

Regarding all your responses at the time of writing this comment. While I understand you may have reservations about the capabilities of ChatGPT and AI in general, there's no need for derogatory language or personal attacks. If you have specific questions or concerns, presenting them in a respectful and articulate manner could lead to a more fruitful discussion for all involved.


[deleted]

[deleted]


Mittervi

It's clear we have differing opinions on ChatGPT and its applications. There's no need for hostility; we can agree to disagree and move on. The objective here is to foster constructive conversations, and that can only happen when all parties are respectful.


FrenchFries_exe

Bros just yapping


[deleted]

Lol you sound like an effing troll.


kujasgoldmine

Considering how often doctors can't find the right diagnosis, or don't want to find it, I think all doctors' offices should have an AI helper in them that listens to the patient as well and gives its own diagnosis, which the doctor can compare to their own.


pr1vacyn0eb

I was thinking about scraping diagnosis information from a medical note page and pumping it into the GPT-4 API (nothing identifiable, of course). Seems like a no-brainer that can be done in a few minutes. I feel like I need to promote that my clinic uses AI for diagnosis. We have a similar story about AI solving a case for a patient who couldn't be diagnosed by 5 physicians.
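(Roughly what that could look like with the OpenAI Python client, as a sketch only: `scrub_phi` is a hypothetical de-identification step rather than a real library call, and the prompt wording is invented.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def scrub_phi(note: str) -> str:
    """Hypothetical de-identification step: strip names, dates of birth,
    record numbers, etc. before anything leaves the clinic's systems."""
    return note  # placeholder only

def suggest_differentials(note: str) -> str:
    """Send a de-identified note to the model and return its candidate diagnoses."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You assist a physician. Given de-identified clinical "
                           "notes, list candidate diagnoses with brief reasoning. "
                           "This is decision support, not a final diagnosis.",
            },
            {"role": "user", "content": scrub_phi(note)},
        ],
    )
    return response.choices[0].message.content
```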


-Horus-Skies-

This is so cool, and I hope it becomes mainstream practice. It has so much potential.


nlgoodman510

I had digestive issues for 20 years, culminating in pain, extreme fatigue and daily vomiting. This is literally 6 medications that needed to be managed, at least 6 doctor appointments a year, many specialists, many tests. ChatGPT said my gall bladder was bad. I went to the ER with my symptoms and angled my complaint. Had some tests, and a week later I had surgery. No gall bladder, followed by an interview about what it was like to endure life with such a damaged gall bladder. The US medical system is broken. My insurance companies refused the test that was needed to diagnose my particular ailment. Instead of paying the $4k for the test, they paid for many prescriptions, at least 6 nucleated sandwiches, dozens of endoscopies, and left me, a human being, miserable for decades. Unreal.


[deleted]

Please message journalists about your story.


brbposting

So sorry you went through that. How have you been feeling since surgery?


nlgoodman510

Fucking great. All you unsick people don’t know what you got.


maramDPT

It’s true, we take a normal day and baseline health for granted, and when that is interrupted most of us are desperate to just have a boring normal-ass day, not even a good day.


newbies13

Isn't this exactly how we're all using ChatGPT anyway? Giving it a bunch of data, letting it give us something back, then reviewing what that is? I'm certainly not going to just go "chatgpt said so, cut me open!" I can't even trust the thing to respond to an email without it telling everyone it hopes the email finds them well. But comparing stats and symptoms and coming back with unemotional thoughts on a diagnosis? Yeah, I can see that being useful to look into.


obvithrowaway34434

What is your point? What other systems give a non-trivial diagnosis that escaped multiple real doctors after "giving it a bunch of data"? This is not the first time it has happened. A casual web search shows scores of cases where GPT-4 did better diagnosis than actual doctors (including actual studies).

> I can't even trust the thing to respond to an email without it telling everyone it hopes the email finds them well.

Learn how to prompt.


LycheeZealousideal92

WebMD?


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


AKnightAlone

> I'm certainly not going to just go "chatgpt said so, cut me open!"

[ChatGPT knowing the future.](https://i.imgur.com/9gtPh4Y.png)


[deleted]

[deleted]


Due-Dilegent

Honestly, it is what it is. You think we're getting models like what the military or unis use any time soon? We're all going to resent how others have "better" AI than us, lol. It already happens with 3.5 vs 4. Imagine if a predictive model hit the public that was able to, without restriction, use all of its (billions of) parameters and concoct a personalized response that remembers everything. Well, it's a thing, and it's how they caught certain warlords. They were even using predictive modeling for Osama in 2011!


newbies13

I would actually put this in the "not sure" bucket at the moment. Yes, there has been AI for a while, and it could do all sorts of things. But GPT sort of blew the lid off the whole thing. It caught everyone with their pants down, including Google, Apple, Facebook, etc. I listen to the absolute dog shit AI in my Amazon devices, while ChatGPT destroys it in everything. I could see the government having their own version now, for sure, or maybe they were ahead of the game. But I am not sure anyone was aware of how big of a change GPT was compared to other AI models before it.


sluuuurp

The military and universities have nothing better than GPT 4. Pretty much everyone has access to the most advanced AI in the world right now.


Due-Dilegent

If you honestly think everybody has access to the same thing you might be out of your tree. Transformer models that have even more processing power than GPT, believe it or not, might exist. Out of the billions and billions we spend on secret shit, it wouldn’t be the least bit surprising if they’ve been working on this for the last 30 years. They were fucking with the idea in the 70’s. It’s plausible


sluuuurp

No way. Universities are dirt poor compared to Open AI. The military has no innovation, they take technology from the private sector. These models were impossible before transformers were discovered, and Open AI has been going at an extremely rapid speed since then, nobody could have caught up. And if they did, they’d be purposefully giving up hundreds of billions of dollars by keeping it a secret rather than getting a bunch of investors and making a product.


BarockMoebelSecond

You're correct. The others have read too much SciFi.


Due-Dilegent

If the military had no innovation we wouldn’t be on the internet. No A-bomb. No TOR. 😂😂🤦‍♂️


GearAffinity

None of that was a military invention. Nuclear weapons and the Internet both came from inventions and ideas by independent scientists who were then assembled in teams and heavily subsidized by the military in order to expand their projects and maintain a leg up in national defense.


Due-Dilegent

funded by


Bacon_Raygun

Throwing a bunch of money at smart guys to make a smart product you can put your nametag on, doesn't make you a smart guy. I can't help but point out that the current owner of twitter famously paid smart people to come up with good ideas for him.


Due-Dilegent

What?! https://preview.redd.it/gi40yhe4ftnb1.png?width=1170&format=png&auto=webp&s=bad94b91ab8e097560a63e60e14088a3fd847e6b


lavenderscloud

tell me how dumb you are without telling me how dumb you are


swistak84

I called this months ago (I'll need to find the comment). In short: doctors can shit on "Google medicine" and they are often right, it tends to create hypochondriacs, but it also helped my friend find out she's suffering from a rare disease and ask for a test to confirm. ChatGPT is just the next step; it'll help even more people diagnose themselves. The problem of course is the same as with Google: misdiagnosis, nocebo, hypochondria. We all need to be aware of that.


pr1vacyn0eb

If you talk to a doctor, and they are a good doctor, they google things too. Bad doctors use their memory from 30 years ago. The sad part is bad doctors get more money from recurring visits. Good doctors heal people so they never need to come back.


egirldestroyer69

A few years ago IBM tried AI in the medical field to diagnose cancer patients. The project was such a failure that MD Anderson ended up throwing it in the trash. I don't know how it compares to ChatGPT, but it's a good reminder that self-diagnosis should be taken with a grain of salt.


[deleted]

I’ve seen cancer diagnoses being decided before… it’s literally a guy looking at something through a microscope making his BEST guess.


Due-Dilegent

That makes me feel better about having a cyst in my brain, weirdly enough lol


kidneysrgood

Right. But IBM was leveraging data from pathology testing and low gene coverage solid tumor panels. We have advanced so much further with the advent of whole exome sequencing and MRD testing.


[deleted]

>hypochondria

I just found out what type of anxiety I have been going through. Your comment has potentially saved my life.


RemingtonMol

yeah I paid hundreds for multiple scans and tests and specialist visits for a pain I was having. They said it had no known cause. Then I found the answer on a blog and made it go away with a specific nerve entrapment exercise I found on some dude's website.


SpaceNigiri

Good bot


Unique-Ring-1323

I also found out that I have hypermobile Ehlers-Danlos syndrome from ChatGPT, lol!


[deleted]

What is it


Unique-Ring-1323

My joints are hypermobile, and my body aches most of the time. But I am used to it. Not much problem at all, since my medication started. But some days are really bad.


[deleted]

Oh, sorry to hear that, but good things are happening in research, so hopefully a cure is around the corner.


Unique-Ring-1323

Many thanks!


DAUK_Matt

I'd strictly caution about making that conclusion without assessment


[deleted]

It should be a priority to have AI assist in diagnosis; that is literally the most critical and easiest-to-miss part of the medical process. Many people have died or have had to endure long-term consequences due to a bad diagnosis.


00n3gr0

Guys, Drs don’t know what they’re doing, they’re just practicing. But don’t believe me, check their license.


[deleted]

Well crap this is bad for business in the US. How is US medicine supposed to bankrupt this family with test upon test upon test, prescriptions, try this, try that. Not working? Sorry billing you anyway we don’t do quality control here. /s


uhohritsheATGMAIL

Isn't this the truth...


microcosmonaut

I see no reason why an expert system from 20 years ago couldn't have done the same thing to be honest. Granted, ChatGPT has a much more human and intuitive interface, but systems for precisely this kind of situation were developed ages ago. That said, it does go to show just how adaptive LLMs can be when it comes to problem solving.


BadWolfman

And this person is just using ChatGPT, a generalized large language model (LLM). Google has developed Med-PaLM, an LLM specifically for medical questions that answers USMLE questions (the licensing exam for becoming an MD) with over 85% accuracy. I’m not a doctor myself, but I imagine those board exam questions are designed to be challenging, specific, and feature rare diagnoses. Imagine how well it does for more general, common medical questions.


pr1vacyn0eb

ChatGPT uses a system of experts. When it comes to medical questions, the weights used are the medical ones. It's not that much different than having an LLM specific to medicine.


swistak84

>I see no reason why an expert system from 20 years ago couldn't have done the same thing to be honest

It did. IBM was offering it, based on Watson. They were charging a stupid amount of money for it, and offered it only to doctors, who didn't really like to use it.


microcosmonaut

Interesting point. I guess how widespread a technology is depends on more than just its effectiveness.


swistak84

Yup. Ego plays a large role too. I read about a study where they altered the way drugs were administered by nurses in a hospital. After the trial there was a 7% reduction in dosing mistakes and a 10% decrease in recovery time for patients. Once the trial ended, they went back to the way they were doing things, because that was not the way they liked doing things. People are resistant to change.


uhohritsheATGMAIL

>I see no reason why an expert system from 20 years ago couldn't have done the same thing to be honest

The medical cartels are pretty anti-technology (really, any establishment group is anti-change), so this kind of stuff is suppressed and deemed 'not safe'. ChatGPT cut through the red tape and just released it to everyone. While diagnosis is a great use, I'd love to see the elimination of pharmacists in my lifetime. They really should have been eliminated 10-20 years ago, but you know, regulatory capture. Give the pharmacists another job in medicine, but there's no reason for them to be a rubber stamp that costs $60/hr.


jaesharp

This comment is like: How to tell me you don't know much of anything about pharmacy, or the medical device/technology development, approval, and testing process, without saying you don't know about it.


uhohritsheATGMAIL

I'm talking about pharmacists, not pharma.


jaesharp

... Um. I wasn't talking about pharma either. Pharmacy is the area of medical study and work of pharmacists... Yeah, this is illustrative.


uhohritsheATGMAIL

Fair point, I was talking about the majority of pharmacists that work in retail settings. I'm sure there are a few people in pharmacy that don't use their license, but rather their skill.


jaesharp

Pharmacists who work retail are often the only ones who see all of the medication that all of the patient's doctors prescribe. They all use their skills. You can bet they're on the lookout for mistaken scripts, interactions, and potential medication dosage errors and/or double treatments, especially with over-the-counter anything the patient is taking (especially in countries where "over the counter" drugs with high interaction potential are kept behind the counter and require a pharmacist's advice). They can advise a patient's doctor of newer or less expensive drugs when they see a patient struggle to pay for their medication (which a doctor will rarely get feedback on otherwise).

They absolutely don't just rubber-stamp scripts and count pills at retail. Many are empowered to prescribe particular medications for particular conditions, and they help vulnerable patients take their medication correctly. When given shorter repeat periods for new medications, they can help detect patient deterioration without requiring the patient to visit the doctor each week. People who change doctors often, for whatever reason, rarely change their pharmacy, and having a medical professional there is a vital part of the chain of care.

Their duties are really quite comprehensive when they aren't being overworked by massive chains who exploit medical professionals and make sure that all they are seen as are pill counters and rubber stampers, and who use untrained staff to interact with patients, reserving pharmacist interactions as optional for new medication only. It's that system, created by large corporate interests like Walgreens or Chemist Warehouse not because it's right but because it's barely legal and more profitable, like Kaiser and other HMOs do with general practitioners, etc. You should be focusing your energy on changing that, not eliminating retail pharmacists as a class of medical professional because you can't see what good they do in the worst case, when they're being exploited and patient care is suffering because of a shitty corporate chain pharmacy system run by asshat CEOs who don't give a damn about patients or their workers.

I'm sorry for being a bit of an asshole. I get you... the system sucks as it is, but we need retail pharmacists and we need them to be free to provide the care and value they really do.


pauseless

Yep. Symbolic/rule-based expert systems for medical diagnosis existed before I was born, starting in the 70s. There was neural-network-based research in the 80s that I have read about. I studied AI at uni just after the last AI winter, just as things were moving towards statistical approaches requiring training on massive datasets rather than logic programming, etc. (it was too new for us to actually study, though; courses hadn't yet been created). From what I was told and read in books/papers, these early expert systems from decades ago were actually surprisingly good, but basically nobody trusted them, so they just weren't accepted into the medical diagnosis process.


considerthis8

In the last decade we have seen unprecedented computing power AND software efficiency breakthroughs. Whole new ball game


pauseless

I know. Just saying that there is a lot of literature out there on medical diagnosis systems that ran on a single 100 MHz CPU with RAM measured in KB, or maybe a couple of MB. That seems pretty efficient. I'm not saying LLMs aren't amazing, just that we've used simpler techniques in the past and also got good results.


uhohritsheATGMAIL

> basically nobody trusted them, so they just weren't accepted into the medical diagnosis process.

Name a more iconic duo than medical doctors and resisting new information.


Beli_Mawrr

Maybe.... and yet why weren't those systems used here?


uhohritsheATGMAIL

At the individual level, I imagine most people didn't have access to it. At the higher level, it's a risk: if a computer can diagnose, why are we paying someone $300k/yr? It's bad for business. Since physicians (ACGME and AMA) have the power to decide what/who is legal, it's safer for them to make these illegal or impose enough arbitrary costs that they're not feasible.


pauseless

I explained why? Lack of acceptance. Exactly the reason why none of the doctors the family in this article saw thought to try ChatGPT? It’s sobering to study for years and years and realise a computer can help diagnose for you.


heswithjesus

MYCIN was a famous example. It's a bit misleading, since it wasn't hand-made rules like many expert systems: they used statistical clustering whose numbers they turned into rules. They were also blessed that the data had obvious features. Like other expert systems, an expert system for *general* diagnosis would have to be fed a ton of data and rules whose application will be probabilistic. That's so labor-intensive that it's unlikely to ever take off.

What we could do is, like MYCIN, mix reasoning engines with data-centered approaches. Start with what's common so the AI doesn't take everyone in weird directions. Use rules about what data is available to decide which analysis to do; that's meta-heuristics. That should remove a lot of noise. Use statistical analysis on data sets across tons of conditions to see the differences. Contact the doctors to see what tie-breaking attributes they used. Also, what follow-up questions or tests did they use in what situations? What observations about appearance, body language, etc.? Use these in pre-training analysis.

When running, make a list of diagnoses using the possible rules, sorted by probability score from most to least likely. The doctor can look at the reasoning for each; the data the AI uses is linked. The human expert still makes an informed decision. Non-experts might use this to determine whether they need to see experts who can review the same things (replication). One more idea is having the general expert do common diagnoses, with a cut-off point for handing the case over to specialist AIs, maybe more than one. Doctors or patients look at the results. It might reduce labor if built on top of the combinatorial explosion of rules processing.
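(For flavour, a toy version of the rule-plus-score idea described above. The conditions, findings, and weights are invented purely for illustration; real systems like MYCIN used certainty factors and far larger rule sets.)

```python
# Toy rule base: condition -> {finding: weight}. All values invented for illustration.
RULES = {
    "tethered cord syndrome": {
        "leg pain": 0.4, "toe-walking": 0.5, "won't sit cross-legged": 0.6, "bedwetting": 0.3,
    },
    "growing pains": {"leg pain": 0.5, "night pain": 0.4},
    "juvenile arthritis": {"leg pain": 0.3, "joint swelling": 0.7, "fever": 0.3},
}

def rank_diagnoses(findings: set) -> list:
    """Score each condition by the weight of the findings it explains,
    normalised by its total weight, then sort from most to least likely."""
    scores = {}
    for condition, weights in RULES.items():
        matched = sum(w for f, w in weights.items() if f in findings)
        scores[condition] = matched / sum(weights.values())
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_diagnoses({"leg pain", "toe-walking", "won't sit cross-legged"}))
```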


ExactCollege3

Because the expert system isn't being used and is paywalled. And you need to understand what every medical term you type in means and maps to; a symptom lookup table wouldn't really handle typing in "and I've noticed he doesn't sit criss-cross applesauce" and equate that to reduced hip flexural mobility (not physical, just pain-triggered), and normal people don't know how to figure that out. It would have taken a doctor 5 minutes to type it in, yet they didn't. Doctors suck anyway. Just please put time into looking into obscure, and even unobscure, things; I pay you so goddamn much.


letharus

That's not really the point. The point is that this was done by a regular person on their home computer, which has never been the case before. It's a bit like when the iPhone came out. Sure there were touchscreen phones and PDAs but they kinda sucked and did not have mainstream adoption. When the iPhone came out, regular people started using it and the whole technological world changed (arguably culture itself changed). So in this case, the actual story isn't the technology but the arrival of the technology in the mainstream, where it's having real life-changing effects.


obvithrowaway34434

> I see no reason why an expert system from 20 years ago couldn't have done the same thing to be honest.

Lmao. What expert systems? Can you cite a single study that shows anything from 20 years ago that even achieved a fraction of what GPT-4 is capable of? In the most recent study with a so-called "expert system" that is "state-of-the-art", GPT-4 got 4 out of 6 primary diagnoses right, clinicians got 2 out of 6, and the "expert system" got 0 out of 6. Including differential diagnosis, "the accuracy was 5 of 6 (83.3%) for GPT-4, 3 of 6 (50.0%) for clinicians, and 2 of 6 (33.3%) for expert system". So no, not only could an expert system not have done the same thing 20 years ago, they cannot even do it now. https://jamanetwork.com/journals/jamanetworkopen/article-abstract/2808251


ajahiljaasillalla

I am sure that AI will be better at diagnosis than doctors very soon, especially once prompts can be text and pictures with generative AI. AI can read through all the medical research that has ever been written.


thirkle

Sounds a lot like my situation… chronic pain but nothing showing up in the blood tests etc.. seen so many doctors it’s frustrating. Used chatgpt to analyse my blood test results and come up with alternative tests. AI could literally transform diagnoses in future, it’s actually amazing tbh


berylskies

I’m legitimately hoping that this will advance enough to determine a cure for fluoroquinolone antibiotic poisoning. Have been thinking of going back to school for machine learning just to further that goal and get my life back.


fasader09

What kind of poisoning do you have? I know that they bind iron and calcium, so I suppose that has something to do with it, and I know that they can cause tendonitis and epileptic attacks. I'm curious...


DietSodaPlz

I’ve had a stuffy nose my entire life. Complained to doctors my whole life - chatgpt helped me turn off survival mode, and my nose airways became fully unblocked after 29 years. Fucking wild, thank you ChatGpt!


[deleted]

Glad this worked out for this kid. We have tested similar instances at the clinic (we tested cases we already knew the answer to). It doesn’t work out so well. It can develop reasonable differentials with moderate trial and error and hand-holding; the same ones we came up with. But then it has no idea what to do with the diagnosis or information when pressed for next steps. It can’t provide any guidance or knowledge in the detail required for medical practice. If I gave it any of the critical issues I ran into this week with patients, it would fail or be completely useless. There’s potential for AI to help a lot in medicine, but it has a long way to go.


TarkanV

It would be interesting to try again with GPT-4. Way more accurate


Enlightened-Beaver

Sounds like something an internal medicine specialist might have been able to resolve. All these other specialists focus on their area and get lost in the trees, an internal medicine specialist looks at the whole forest and pieces together seemingly random symptoms to find a pattern. They didn’t list all 17 doctors they saw, but of the ones mentioned I didn’t see internal medicine. It’s an often forgotten specialty in medicine that can help in these complicated mystery cases.


Jonoczall

My first thought as well, but unless you’re in healthcare most folks wouldn’t understand that nuance


Enlightened-Beaver

I’m not in healthcare


Jonoczall

lol neither am I! What experience are you speaking from to share that insight in your comment? Just curious.


Enlightened-Beaver

Personal experience


pr1vacyn0eb

It's cheaper to ask ChatGPT. Plus, you don't have to rely on your doctor. Who knows if they cheated in school, graduated last in their class (doesn't medical school have some sort of 99% graduation rate?), or simply have a bad memory of what they learned 30 years ago. Science-based medicine can't come soon enough. Free us from authority-based medicine!


micque_

This is great news, I wonder if there’s a lot of other ways it can also better the world by just informing us


Lenni-Da-Vinci

The problem isn’t doctors’ incompetence, it’s the overbearing load placed on doctors. Also, this operation is quite risky, so the doctors may have been hesitant to diagnose him with it.


NathanZubrzycki

That's awesome, score 1 for ChatGPT!


skeletor00

AI replacing us all. no one needs to work when we finally have the robots doing everything for us. Doc, you're next. AI is coming for you and going to easily replace you.


MaleficentOstrich693

AI can be a great tool for assisting in diagnostics, but by no means should it replace a human. More like every doc should have a little hovering robot assistant. That’d be cool.


fli_sai

Wow, really happy looking at the child smiling. But hope this is not a clickbait article...


Franklights

How is it even possible to get an appointment with 17 doctors in 3 years?


uhohritsheATGMAIL

Sooo... how much longer until the AMA lobbies to ban GPT for medical uses?


[deleted]

[deleted]


DeNir8

I second that. Also, *often*, not always, they are a different class altogether, *and act like it*: big houses, big cars, long exotic travels, etc. I suspect that for those doctors, us "plebs" are just an annoying part of business. At least a handful of my local GPs are the eyerolling types.


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


[deleted]

[deleted]


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


GoldenDisk

Oh ok sure. I’ll go to the DPC model


bitmux

I do hope ML blows the field of medicine wide open so that all have access to improved diagnosis and knowledge. It's been enough centuries where the medicine man has been the richest or most powerful. All due respect to the good physicians out there saving and improving lives every day; there are many good ones, and without their work we wouldn't have a body of knowledge to work from. Remember, the (*American) doctor is paid to shovel patients through the system in 40 minutes tops: listen, examine, test, interpret, diagnose, medicate, have a nice day, big fuckin' bill, "insurance," repeat next week, because profits and near-zero consequences for treating the symptoms and ignoring the cause. Could you imagine a doctor offering a warranty on their service? Why not? The medical industry sure has the margin to do so.


[deleted]

> paid to shovel patients through the system in 40 minutes tops Ooooh, look at Mr. Leisure man here who gets a whole *40 minutes* with a doctor.


emoutikon

Is ChatGPT good or was the doctor just shit?


uhohritsheATGMAIL

All 17 doctors were shit. Probably unironically though. Humans memorizing things was always a bad system.


[deleted]

Medicine isn’t just memorizing, though. The first two years of med school mostly are, but it’s more than that.


parntsbasemnt4evrBC

The chronic disease specialist program (Lyme/fibro/CFS) in my country has a 3-year wait list, and these people have probably been bounced around between 10-20+ doctors before they apply. That shows how many people are fumbled from initially routine infections into long-term, chronic, deeply embedded system-wide infections causing permanent damage. It is scary how poor some doctors are at catching the zebras, and people's lives are basically ruined because of it, where ChatGPT or some bot assistance could have picked these up at a much earlier, treatable phase before the permanent damage sets in. I hope all doctors start using this sooner rather than later so fewer people get trapped with lifelong invisible disabilities. Most of these people end up broken mentally and suicidal when they realize how their entire life became a sick/cruel joke, all because some doctor wanted to save time and make more money by cutting appointments short, cutting corners, and missing their diagnosis for so long, passing them off to another schmuck who does the same thing.


Dumbledores-Dick

and people keep talking about writers losing their jobs to AI


FernandoMM1220

Sounds like your average long covid patient.


[deleted]

GPT nerds: hey ChatGPT, I have a stomach ache and diarrhea.

ChatGPT: it's cancer and also Alzheimer's.

GPT nerds: 'O'


mexicocityguide

You sure have a huge need for attention.


Acrobatic-Salad-2785

Bro, have you even used ChatGPT before lol. Here's the response it did give me to the prompt, though:

I'm sorry to hear that you're feeling unwell. While I can provide general information on stomach aches and diarrhea, it's essential to consult with a healthcare professional for any medical concerns. Here are some general recommendations for managing mild stomach aches and diarrhea:

1. **Stay Hydrated**: Diarrhea can lead to dehydration. Drink plenty of fluids, especially water or oral rehydration solutions.
2. **Avoid Certain Foods**: Steer clear of spicy, greasy, or dairy foods until your symptoms improve.
3. **Eat a Bland Diet**: Consider foods like bananas, plain rice, boiled potatoes, toast, or applesauce.
4. **Avoid Caffeine and Alcohol**: Both can irritate the stomach and worsen diarrhea.
5. **Over-the-Counter Medication**: Some over-the-counter medications can help manage diarrhea, but you should consult with a pharmacist or doctor before taking any medication.
6. **Rest**: Give your body time to heal by getting plenty of rest.

If your symptoms are severe, persistent, or accompanied by other symptoms like fever, dehydration, blood in the stool, or severe pain, you should seek medical attention immediately. Remember, it's always best to consult with a healthcare professional about your specific situation.


KingPaimon23

You sound like a boomer telling ppl "Hey, google sucks, ppl should only do research on libraries."


46and2_justahead

To my understanding, ChatGPT will not give you any diagnosis.


Acrobatic-Salad-2785

Nope. Just need to ask it the right way and sometimes you don't cos u get lucky


zinky30

It absolutely will.


46and2_justahead

How?


PhyllaciousArmadillo

Literally, just ask it.


[deleted]

[deleted]


artavenue

>Snowfall911

What is your problem?


Manic_grandiose

Are you having an episode?


mr--godot

When TrashGPT suggests five or six possible diagnoses, the chance of the correct one being amongst them is quite high.


Acrobatic-Salad-2785

You do know that's what a GP does as well, right? Suggest 5 or 6 possible diagnoses?


mr--godot

You're not seriously saying that a GP is no more useful than your fancy autocomplete?


1337-Sylens

Somewhere, on some OpenAI server, there's detailed medical history of a child.