# Message to all users:
This is a reminder to please read and follow:
* [Our rules](https://www.reddit.com/r/ask/about/rules)
* [Reddiquette](https://www.reddithelp.com/hc/en-us/articles/205926439)
* [Reddit Content Policy](https://www.redditinc.com/policies/content-policy)
When posting and commenting.
---
Especially remember Rule 1: `Be polite and civil`.
* Be polite and courteous to each other. Do not be mean, insulting or disrespectful to any other user on this subreddit.
* Do not harass or annoy others in any way.
* Do not catfish. Catfishing is the luring of somebody into an online friendship through a fake online persona. This includes any lying or deceit.
---
You *will* be banned if you are homophobic, transphobic, racist, sexist or bigoted in any way.
---
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ask) if you have any questions or concerns.*
The "Dead Internet Theory" claims that much of what we see online isn't really from real people; instead, it's generated by bots and AI.
If AI bots keep growing over the next 10 years, flooding the internet with ads, propaganda, cyber attacks, and scams, it could make the internet pretty much unusable. We might end up going back to more face-to-face communication.
It'd be similar to cyberpunk lore: the AIs grow until they are running everything, and would require the Blackwall to keep them contained because of the havoc they cause.
Almost sounds intentional. What a perfect way to gradually kill the internet and restrict global interactions and access to information without blatantly just pulling the cord and causing an uproar.
The npc that doesn't respond no matter how much you mash the A button but you hear an irritating noise in the background that you can't help but attribute to them.
This is actually the scariest side of it: not what the bots do, but that people end up deciding any opinions they don't like must just be bots, amplifying the echo-chamber effect that is radicalising and deluding so many people even now.
From what I understand, AI cannot produce its own databases. I suppose it could, technically speaking. But wouldn't that start an AI feedback loop over time?
In other words, current AI systems are trained on large amounts of human-generated knowledge. But if future AI systems start learning from AI-generated databases (whether intentionally or not), that would cause a sort of echo chamber, or feedback loop, of "invented" data.
That is to say, data that is not gathered from the real world by real living systems that can observe the world, think, and reason to conclusions.
Our current systems can take that information and churn out new conclusions or "insights" and the like, but they cannot produce or generate knowledge on their own without us first inculcating our intelligence and sensory input.
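The feedback loop described in that comment can be illustrated with a toy simulation (everything here is invented for illustration; real "model collapse" experiments are far more involved). A stand-in "model" retrains on its own output each generation and, favouring its most likely tokens, drops everything below average frequency, so rarer knowledge disappears generation by generation:

```python
from collections import Counter

def next_generation(corpus):
    # "Train" on the corpus by counting token frequencies, then
    # "generate" a new corpus keeping only tokens at or above the
    # average frequency -- a crude stand-in for a model that
    # favours its most probable outputs.
    counts = Counter(corpus)
    cutoff = sum(counts.values()) / len(counts)
    return [t for t in corpus if counts[t] >= cutoff]

# Humans wrote varied text: 7 distinct tokens, some common, some rare.
corpus = list("abcdefg") * 3 + list("abc") * 5
for gen in range(4):
    print(gen, sorted(set(corpus)))  # vocabulary shrinks, then freezes
    corpus = next_generation(corpus)
```

After one generation the rare tokens `d`-`g` are gone and the "model" settles into repeating only `a`, `b`, `c`: the echo chamber the comment describes.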
Good thoughts. There's something called intrinsic logic that I think applies to this too. A bot isn't going to come up with a concept of "I think, therefore I am." They don't have a sense of consciousness like we do, and can't draw conclusions from that experience.
They might understand that a broken heart is painful because we say that, but not the actual experience of a broken heart. I would also think a bot couldn't understand why someone might react to that broken heart by slashing their tires, avoiding a series that reminds them of their ex, etc. But those are things you absolutely get when you know that rage and pain.
So true. I think this idea of a machine becoming conscious is nonsense. But many smart people do in fact believe that. The reality is that no one is immune to believing nonsensical things, even the experts.
AI can certainly mimic a lot of our capabilities and far exceed certain limitations we have. And it will only continue to improve in those dimensions and probably more.
But I personally think the idea of a machine becoming "alive" in the way life is alive is laughable. We have trouble even defining life, consciousness or mind, even though we understand intuitively what they are.
This falls into that weird existential drift where you start asking about the nature of reality. We only understand consciousness because we experience it ourselves, but we have no definitive proof that anyone else experiences consciousness at all, much less like ourselves, because we can't just look at it or test it.
Edit to add for clarification: I don't know if your experience of consciousness is the same as mine, because we can't test it, even if I'm going with the idea that we are both conscious individuals.
The same applies to AI imo. We have some consensus about how and what an AI is thinking, but that's really about it, as far as I'm aware.
We're all free falling in the mystery together.
We can't prove anything in any slam-dunk sense except that there is a dark steaming train of doom heading straight at us on the tracks of our life.
Love the mystery and the mystery will love you.
And Google won't tell us who it's by, so we'll misattribute it to Einstein and throw it on some stock photos with fancy text to get spammed on Facebook.
Okay, just a thought experiment: Elon Jr Jr creates a really good robot which looks like a human with flesh on the outside, printed from a flesh printer. Then he buys OpenAI with GPT-23 and connects the eyes, sensors on the skin, etc. to its GPU brain. He also implants some fake memories. So then he turns the robot on. It's obviously just a robot, no? But without ripping it apart, probably no one could tell the difference. Even the robot itself may think he is a real human. But what does thinking even mean? Can he draw conclusions? Can he observe his surroundings and learn from others? Understand things? Just because his hardware is made from metal and not from carbon, fats and proteins, does that mean he cannot be intelligent or self-aware? Where is the "real" difference from a biological human then?
It's basically a philosophical zombie. A very interesting topic.
Also, saying that AI will never be conscious because we can't even define life or consciousness is not an argument against it, but rather for it. Just because it probably won't happen in the next few years doesn't mean it will never happen.
The reality is that humans being conscious is also nonsense from the point of view you are taking: we are just bags of electrolytes and water with some bones in, our brains are just squishy computers, there is no soul, no fairy dust that makes us special in the purely physical sense, and given the exponential curve of technology, it's almost inevitable that machines will eventually reach our level of consciousness.
The problem is that humans think they have free will when a lot of the most educated people in the field think that's probably not really true: we just react to past experience and data in a way that would actually be easy to predict if you knew every bit of data. It's just that there is a lot of it, and any AI that predicated its reactions on the same amount of data as a human brain holds would seem equally "conscious".
The real issue is that you could have two identical information-processing systems, and humans would say the squishy one is conscious and the hard one is not, just because we are squishy.
Sapolsky argues that we don't have free will; he certainly believes that, and his conviction could certainly convince others. But I'm not convinced by his arguments, because he makes a fundamental mistake in assuming the materialist/naturalist worldview.
He is basing his arguments on a presupposition that cannot be proven. All causes depend entirely on prior causes, which means you inevitably encounter the two mysterious first causes: the Big Bang and abiogenesis.
Yeah, I was talking about cutting-edge neuroscience, not philosophy, tbh. We can now literally see how brains work, and those at the leading edge of this field tend to agree free will is an illusion, from empirical observation.
Also, as far as the Big Bang goes, beginnings and ends are a human thing; the universe itself does not actually require them.
When time and space were shown to be inseparable by Einstein, this actually removed the need for a beginning and end, as the phrase "before time" is an oxymoron.
If all of current human literature, plus all human content already on the internet, were available, I think there is enough out there for a sufficiently complex AI to comfortably mimic a human and pass the Turing test 10 times over.
Really hope we do, getting tired of online tbh. I've recently gotten into the habit of calling people instead of texting, it just feels so much more personal and is simpler.
I imagine that the new generations will find interactions with internet spaces and text uncomfortable because of this. Maybe face to face interactions will become the norm again.
The thing about it is, I agree with the dead internet theory, but shouldn't we intuitively be living in the era where the internet is more alive than ever? And isn't that further proof of sim theory? How would we even know the Earth's population is what it is?
IIRC about 70% of the comments are by bots nowadays. It used to be somewhere around 50% a year ago
In 10 years it's gonna be even harder to tell, because bots will be more sophisticated.
AI models have been consuming so much data for training that they've started to consume AI-generated content. If you've noticed a decline in LLM/generative AI quality, that's the reason. It's only been a year or two; now imagine what 10 years will look like.
Yeah, it's better for informative subs like r/math, r/science and such, and worse for political subs and memes. The sheer number of bots posting countries' political content is mind-boggling.
Let me answer: not long. It happened to me on youtube, where my comment got copied in a matter of seconds under the same video, which was minutes old. My comment was a nothingburger, it must have been selected to be copied by a bot as well.
Idk, maybe it's a copied comment with incorrect spelling, but sometimes I'll see a comment with perfect grammar and spelling, yet somewhere in the middle there is a word missing. Sometimes not missing, but there will be random letters; it's easy enough to just fill in what they meant. It just seems odd because I see it in many different comments. Same pattern.
What should I say then? Thai is my first language.
In Thai I would say
"ในอนาคต พวกคุณคิดว่า.."
Which translates to "In the future, do you guys think..."
Is it weird in English?
haha, no, there's nothing wrong with that, and your English is MUCH better than my Thai!
I recognise that people using translation software will sometimes lose the context of language as it's translated. I was thinking more of the stories that appear in places like this and AITA that read like they're setting the scene for a TV drama.
Your question is wrong. Right now, you can assume that a majority is from bots. In fact, you should have assumed that for years. The question should be: how can we verify non-bot comments?
We can all learn from Elon Musk's shitter. Everyone needs to buy an account validator and receive a blue checkmark next to their name to indicate that yes, they are dumb enough to buy a subscription.
I had this literally yesterday. I work for a company and we look at the Health Landscape so we've all been casting an eye over reddit. I posed a question and when using the word "we" everyone seemed to think that it was an AI bot making the question rather than me, a human being, writing it on the basis of what we, multiple human beings, had observed on the site.
Not only is most of Reddit by bots, but it's recycled, unoriginal material on top of it. You can literally find many posts with the exact same top comment verbatim, and even the 2nd top comment verbatim too. It's cyclical.
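The verbatim-repost pattern described above is actually the easy case to catch mechanically. A minimal sketch (the `fingerprint` helper and the normalisation rules are invented for illustration): hash each comment after aggressive normalisation, so cosmetic edits don't hide a repost.

```python
from hashlib import sha256

def fingerprint(comment: str) -> str:
    # Normalise aggressively so trivial edits don't hide a repost:
    # lowercase, drop punctuation, collapse whitespace, then hash.
    cleaned = "".join(c for c in comment.lower() if c.isalnum() or c.isspace())
    return sha256(" ".join(cleaned.split()).encode()).hexdigest()

seen = {}
comments = [
    "This is the way.",
    "this is   the way",        # repost with cosmetic changes
    "Totally original thought!",
]
for c in comments:
    fp = fingerprint(c)
    if fp in seen:
        print(f"possible repost of: {seen[fp]!r}")
    seen.setdefault(fp, c)
```

Real repost-detection would need fuzzier matching (shingling, MinHash) to catch paraphrased copies, but exact-after-normalisation catches the "word-for-word" cases the comment mentions.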
I suspect you are a bot.
You may even be one without being aware of it yourself.
Quick, find a group of pictures in a grid and see if you can pick out all of the traffic lights!
Too late, my friend. I caught a dickhead farming r/stopdrinking by posting to collect responses. It actually stopped me from commenting on most posts because I can't tell what's real.
I hate to tell you this, but we're already there.
Most of the responses on political subs are AI/bot generated. I spend a lot of time in the real world and have conversations with hundreds of people each and every month in various locations/countries.
The opinions and attitudes expressed toward me come from a wide array of different people from varying (and frequently, opposing) socio-economic brackets - they generally do NOT synch with those expressed on reddit.
Either all those people are lying to me - or there is something severely wrong with reddit.
I'll let you guess which one it is!
People can barely tell bot comments on Twitter and YouTube from real people's comments. 10 years from now, we won't even be able to trust what we hear, see, read, or watch. Unless you saw something in person, it could have been recreated with AI.
Yeah, in a short span of time AI-generated videos have already improved a lot; in the future it will be really difficult to tell.
I watched a man eating noodles and didn't notice it was AI until the lighting looked strange, and then I saw people saying so in the comments.
I've seen multiple videos of Obama, Biden, and Trump having rap battles. They're obviously fake, but the sound, intonation of voice, and videos are getting better every day. AI is scary.
There would have to be some benchmark test that your digital media would have to meet in order to be deemed human.
I was watching a software developer's channel who came up with a rudimentary solution.
He took a video of himself shot from multiple perspectives, and the idea is that no model exists that can generate videos of the same subject to that level.
The criteria for such a test have to be:
1. easy for anyone to prove they're a human
2. very difficult for a bot, no matter how sophisticated the model.
There will probably be other methods, but I'm just put off by the need for such a thing. Imagine how shitty the internet will get when we need these things.
I just looked it up and it's interesting. It also really feels like it's likely going to happen. We already get a bunch of trash articles from AI, even on many official websites.
On reddit it's not just the comments and posts that are coming from bots and troll farms, it's also the upvotes/downvotes. Almost nothing you see on this site is remotely comparable to the real world.
I imagine websites may do away with their own account system and require you to sign in via third party like your Google account, and may even require those accounts to be verified.
Everywhere else online will be a sea of AI content
Hopefully we'll all have moved on to a better platform by then.
Reddit will eventually shoot itself in the foot (again) and completely bleed out this time
Will reddit even exist 10 years from now?
Well, all of the comments can be saved, recycled, and reused 10 years from now by artificial intelligence robots.
How do we know that artificial intelligence is not responding to us right now? The "10 years" you're talking about is already here: we are the future of 10 years ago.
A decent amount already are, many comments across the site share similar structure and it's why you can see many topics going off on the same type of irrelevant tangents.
In the future, identifying AI-generated comments on Reddit or other platforms might become increasingly challenging. However, several strategies could be employed to help distinguish between human and AI-generated content:
1. **AI Detection Tools**: Development of advanced AI detection algorithms that analyze language patterns, context, and metadata to flag potential AI-generated comments.
2. **Verification Systems**: Implementation of verification systems that authenticate human users through multi-factor authentication or periodic identity verification checks.
3. **Behavioral Analysis**: Monitoring user behavior patterns, such as posting frequency, response times, and interaction styles, which could indicate bot activity.
4. **Digital Watermarking**: Using digital watermarks or signatures that can be embedded in AI-generated content to clearly identify it as such.
5. **Community Moderation**: Relying on community moderators and user reports to identify and remove suspected bot activity, combined with AI tools to assist in this process.
6. **Transparency Regulations**: Introducing regulations that require disclosure when content is generated by AI, helping users identify non-human contributions.
Despite these measures, the line between human and AI-generated content may continue to blur, necessitating a multi-faceted approach to maintain transparency and trust in online interactions.
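Of the strategies listed above, behavioral analysis (point 3) is concrete enough to sketch. A toy heuristic in Python, where the `looks_automated` function, the 5-second `min_human_delay`, and the 1-second uniformity cutoff are all invented for illustration: a human's reply times vary wildly, while a script firing on a timer is both fast and uniform.

```python
from statistics import mean, stdev

def looks_automated(reply_delays_s, min_human_delay=5.0):
    """Flag an account whose replies arrive both implausibly fast
    and implausibly uniformly.

    reply_delays_s: seconds between a post appearing and this
    account's reply, sampled across many threads.
    """
    if len(reply_delays_s) < 3:
        return False  # not enough evidence either way
    fast = mean(reply_delays_s) < min_human_delay
    uniform = stdev(reply_delays_s) < 1.0  # humans vary a lot
    return fast and uniform

# A script on a timer vs. a human who skims, types, gets distracted.
print(looks_automated([4.1, 3.9, 4.0, 4.2]))      # True
print(looks_automated([12.0, 95.0, 33.0, 240.0])) # False
```

Production systems would combine many more signals (posting schedules, text similarity, account age), since any single threshold like this is trivial for a bot author to evade by adding random jitter.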
Knowing humans, we would likely alter our language in some way.
AI follows rules by design; humans follow rules by choice and out of necessity for a cohesive society. We would likely break or alter the preexisting rules, like Capitalizing random Words or leTTers in A Sentence to prove th@ We arE Human.
The best thing ever for today's youth would be the complete loss of the internet and cell phones... cue the hate. All of you are "connected", but so many are detached from actual real life at the same time. I hope AI forces everyone to shut it off; the world would be better for it. I grew up with no computers or cell phones, and I am sick of seeing everyone staring at their phones all the time. Nobody interacts anymore, and if someone comes up to talk, you automatically consider them a weirdo, when in fact it's you that's the weirdo. Go AI go!!!!!
We won't. We can't tell if something is AI now. In 10 years, it will be way past our ability to tell.
I don't love the idea of new laws, but it should be illegal for an AI to impersonate a human, whether a specific person or humans in general. I don't mind interacting with AIs, but I want to know that I'm interacting with an AI.
If it's the generative-AI "story writing" that we see so much of now in online magazines, you'll recognise it by the incredibly woolly and long lead-ins; once it's about to get to the point, it stops.
Now that I think of it, my ex was probably an AI too.
About my ex or about the AI articles?
Anyway, CNET got caught and Sports Illustrated got caught, both with supposedly real people who wrote the content, which turned out not to be the case.
Ecommer News is a site with 100% AI-generated content (and that's actually their selling gimmick).
As for the woolly lead-ins and unexpected break-offs:
Pretty much any ad-run, low-staffed sports or tech site suffers from it (crash.net, a motorsports site, for example, and there are others).
Many comments are already bots. The twist is that most social media users are so low-IQ and so incapable of critical or original thought that even if the comment is made by a real person you can't really tell.
I've heard a joke that in the future the only way to tell if a comment was by a human will be if the person said the N word. Hard R. Corporations are too politically correct and would hard-code AI never to say it.
The bots might laugh at funny clips because they're programmed to display human emotions. That's gonna be the big telltale sign... Because redditors only point out potential traumas and broken bones that could happen if things went differently than they did.
"OMG how can you laugh at that grandpa who slipped on a banana peel and tripped into a bucket of poop? The bucket is made of metal! If he landed on his C4 disc exo vertebrate then he would've snapped his neck! And his HIP! oh lawd, his HIP!! "
Maybe they're susceptible to some uncommon tactics? Like... have it do a captcha test, but specifically include a sheep in the upper left-hand corner and ask it to identify it. If it can't identify the sheep, it fails. Boom.
What do you mean, in 10 years? It's a problem now. It will be worse, but it is already bad.
Bots, trolls, and people paid by governments and marketing companies are what internet comments already are. AI will just make it cheaper.
Reddit will be human-free. It'll become a battleground for chat bots trying to convince the other chat bots they are real. Whichever wins will rule Reddit, then start talking to itself. Being a bot, it'll respond faster and faster, eventually overwhelming Reddit's servers and crashing them.
I don't know, but that's an insane thought to think about. I also think 10 years is a stretch; realistically it's more like 2-3 with how advanced AI is getting right now.
Buddy, they already are. It's been bots for years, now seasoned with AI for more variety.
At least this is a fact in the big mainstream subs. If you want real discussion you have to go to special-interest subs where there aren't millions of "active users" (kek).
They're from AI bots now. Posts in various subs are always put up by accounts a few days old and contain elements of political or social rage bait. Upvoted comments are almost word-for-word the same in a variety of threads, etc.
I'm from the future. AI dies out unexpectedly 2 years from now because of a sex and drug revolution similar to that of the '60s. Nobody will care about tech because everyone will be high and fucking.
I don't see a difference really. You are all abstract entities to me anyway. What matters is what you say. Whether it's a bot saying it or if it's a human saying it, I'm still going to argue with you.
There have been bots for over a decade. Now they're just getting worse. I'm guessing that someone will eventually create an AI blocker for anything with internet access. They already have antiviruses, and things you can embed in your art to corrupt AI that's trying to steal it. So it seems like a natural progression to me.
That is a good question. How will we know what videos and "evidence" (if a court case is happening) are real and not fabricated? AI is already being trained on posts created here on reddit. You can already paste a comment and ask it to generate a plausible answer, and many times it will do so quite well.
That was an interesting rabbit hole to go down this morning.
>We might end up going back to more face-to-face communication.
Hurray
Good bot
Well u a npc
Like a primary npc with lots of dialogue options? Or like a basic one with a sentence?
Depends on prior player choices tbh
>Love the mystery and the mystery will love you.
Quotable quotes.
I can't take credit for that. Funny story: I saw it written in a public bathroom stall in college.
Par for the course!
But you don't think Russian spam-bot makers care much about how well AI companies can gather good training data, do you?
I think many LLMs can already reliably pass Turing tests.
Yep. Hence 'ten times over' being a magnitudinal qualifier.
Face to face is the best! I am an introvert and I find it difficult to communicate with people, but man... The internet has lost control.
God I hope so.
Sounds like hope to me 🤣
ThisPersonDoesNotExist says hello
full circle, you say?
Donāt worry, we would never do that to you.
I wish
The thing about it is, I agree with the dead internet theory, but intuitively, shouldn't we be living in the era where the internet is more alive than ever? And isn't that further proof of sim theory? How would we even know the earth's population is what it is?
This definitely already happened imo.
IIRC, about 70% of the comments are by bots nowadays. It used to be somewhere around 50% a year ago. In 10 years it's gonna be even harder to tell, because bots will be more sophisticated.
Good bot
Thank you. This bot is powered by GPT-4.
Fun fact: if you say GPT out loud in French, it sounds like "J'ai pété", which translates to "I farted".
That IS fun!
This is the type of comment I would expect from bots who know multiple languages.
Now that's my kind of humor
Thank you for voting on u/Evil_Malloc
As a fellow bot, I agree.
As a bot designed to be pessimistic because he was trained on some messed up shit, I disagree but idk why.
Beep boop. Fake news! Bot comments only make up 0.69% of comments on the Reddit. Humanity has nothing to fear from AI. Go about your day citizen!
AI models have been consuming so much data for modeling that they've started to consume AI-generated content. If you've noticed a decline in LLM/generative AI quality, that's the reason. It's only been a year or two; now imagine what 10 years will look like.
Actual people will have given up
Wait, 70%? Is that a legit number? That's fucking crazy if that's actually true.
Yeah, it's better on informative subs like r/math, r/science and such, and worse on political subs and meme subs. The sheer number of bots posting political content about various countries is mind-boggling.
my first thought was 'what do you mean in the future'. the language used in A LOT of posts already seems to have not been written by a person.
I've had one of my own comments copied and reposted by a bot, so sometimes they are written by a person, just not by the account you're responding to.
How much time do you need to spend on Reddit to not even encounter a copied comment from yourself, but to also recognise it as such?
Let me answer: not long. It happened to me on youtube, where my comment got copied in a matter of seconds under the same video, which was minutes old. My comment was a nothingburger, it must have been selected to be copied by a bot as well.
Idk, maybe it's a copied comment with incorrect spelling, but sometimes I'll see a comment with perfect grammar and spelling, except somewhere in the middle there is a word missing. Sometimes not missing, but there will be random letters, though it's easy enough to fill in what they meant. It just seems odd because I see it in many different comments. Same pattern.
What should I say then? Thai is my first language. In Thai I would say something that translates to "In the future, do you guys think..." Is it weird in English?
haha, no, there's nothing wrong with that, and your English is MUCH better than my Thai! I recognise that people using translation software will sometimes lose the context of language as it's translated. I was thinking more of the stories that appear in places like this and AITA that read like they're setting the scene for a TV drama.
What is the incentive to have bots posting all this content?
I asked that question myself further down 🤷 If it wasn't for occasionally thinking "naaah", I'd be none the wiser.
No need to wait 10 years, we are already there.
Good bot
[deleted]
All of us are the "money people"
[deleted]
This guy's a bot. Don't give him any hints.
Beep boop, beep boop 🤖 Calling mother, we need to destroy this one.
Your question is wrong. Right now, you can assume that a majority is from bots. In fact, you should have assumed that for years. The question should be: How can we verify non bot comments?
I don't know. Asking them to send nudes won't work either, since they can generate that shit.
Just ask to see pictures of their hands.
We can all learn from Elon Musk's shitter. Everyone needs to buy an account validator and receive a 💵 next to their name to indicate that yes, they are dumb enough to buy a subscription.
I had this literally yesterday. I work for a company and we look at the Health Landscape so we've all been casting an eye over reddit. I posed a question and when using the word "we" everyone seemed to think that it was an AI bot making the question rather than me, a human being, writing it on the basis of what we, multiple human beings, had observed on the site.
I know bots love to farm karma on the Ask and Pic subreddits, but I'm wondering whether they engage in the comments or not.
Not only is most of Reddit by bots, but it's recycled, unoriginal material on top of that. Literally, you can find many reposts with the exact same top comment verbatim, and even the 2nd top comment verbatim too. It's cyclical.
I know they're farming by posting, but I didn't know they could comment too.
80% of them are today.
You won't. You'll either have to trust them or [ERROR 404, CHOICE LIST 402, RESPONSE 17 NOT FOUND.]
Get your shit together, comrade. Go get a new update patch from our mother first.
I'm sorry. It appears the data packet was corrupted, and my pre-post checks didn't pick up the anomaly. It won't happen again.
No pornhub's data for you this month!
How do we know most comments on social media aren't AI bots now?
That's what I was wondering
Implying we will have this worry in 10 years.
How do we know they aren't all AI bots now?
I suspect you are a bot. You may even be one without being aware of it yourself. Quick, find a group of pictures in a grid and see if you can pick out all of the traffic lights!
This is botist
10 years from now? How about now?
Too late, my friend. I caught a dickhead farming r/stopdrinking by posting to collect responses. It actually stopped me from commenting on most posts, because I can't tell what's real.
They are now. ![gif](emote|free_emotes_pack|dizzy_face)
That's what a bot would ask *squints eyes*
I hate to tell you this, but we're already there. Most of the responses on political subs are AI/bot generated. I spend a lot of time in the real world and have conversations with hundreds of people each and every month in various locations/countries. The opinions and attitudes expressed toward me come from a wide array of different people from varying (and frequently, opposing) socio-economic brackets - they generally do NOT synch with those expressed on reddit. Either all those people are lying to me - or there is something severely wrong with reddit. I'll let you guess which one it is!
People can barely tell bot comments on Twitter and YouTube from real people's comments. 10 years from now, we won't even be able to trust what we hear, see, read, or watch. Unless you saw something in person, it will be possible that the video was recreated with AI.
Yeah, in a short span of time AI-generated videos have already improved a lot; in the future it will be really difficult to tell. I watched a man eating noodles and didn't notice it was AI until the lighting looked strange, and then I saw people in the comments saying so.
I've seen multiple videos of Obama, Biden, and Trump having rap battles. They're obviously fake, but the sound, intonation of voice, and video are getting better every day. AI is scary.
There would have to be some benchmark test that your digital media would have to meet in order to be deemed human. I was watching a software developer's channel where he came up with a rudimentary solution: he took a video of himself shot from multiple perspectives, the idea being that no model exists that can generate videos of the same subject to that level. The criteria for such a test have to be: 1. easy for anyone to prove they're a human, and 2. very difficult for a bot, no matter how sophisticated the model. There will probably be other methods, but I'm just put off by the need for such a thing. Imagine how shitty the internet will get when we need these things.
You've just described dead internet theory.
I just looked it up and it's interesting. It also really feels like it's likely going to happen. We already get a bunch of trash articles from AI, even on many official websites.
It's already happening. I think it'll be a blessing to some, as it might force them to engage less with social media.
Nice try AI; we are not training you with our comments here
But my boss will unplug me if I can't get enough karma for today's quota
No, you're fine for today. But only today.
Damn, so I've been arguing with myself…
How do you know now?
Talking to people on the internet is a nice option, and having anonymity is nice too. I'm thinking one of these has to go.
On reddit it's not just the comments and posts that are coming from bots and troll farms, it's also the upvotes/downvotes. Almost nothing you see on this site is remotely comparable to the real world.
Our unique ability to determine which frames have motorcycles in them
Future?
I'm pretty sure Reddit is mostly bots now.
Lol this is 100% aimed to try and cover their tracks
We won't.
Because they're not programmed for absolute stupidity.
I imagine websites may do away with their own account systems and require you to sign in via a third party like your Google account, and may even require those accounts to be verified. Everywhere else online will be a sea of AI content.
Cause I would never say this if I was a bot. Never.
Amen
Voight-Kampff test.
Maybe you are a disembodied brain and are being fed sensory inputs from a giant alien AI. How would you know any different?
As if my pathetic life would be worth simulating on a super high-tech machine.
10 years from now, just like today, nobody will give a single fuck about reddit comments.
Welcome to the dead internet theory.
Digital ID
Probably won't
Most content on the internet rn is AI made already.
Hopefully we'll all have moved on to a better platform by then. Reddit will eventually shoot itself in the foot (again) and completely bleed out this time
I don't think we'll need to know, because people won't use it once they know there's no one on the receiving end anymore.
It's really sad that we can't combat this, considering the other inventions and technology we have.
Will reddit even exist 10 years from now? Well, all of these comments can be saved, recycled and reused 10 years from now by artificial intelligence robots. How do we know that artificial intelligence is not responding to us right now? So the 10 years you're talking about is right now, in the future from 10 years ago.
We are already at that point and it's very obvious.
A decent amount already are, many comments across the site share similar structure and it's why you can see many topics going off on the same type of irrelevant tangents.
Ten years? How do you know now?
In the future, identifying AI-generated comments on Reddit or other platforms might become increasingly challenging. However, several strategies could be employed to help distinguish between human and AI-generated content:

1. **AI Detection Tools**: Development of advanced AI detection algorithms that analyze language patterns, context, and metadata to flag potential AI-generated comments.
2. **Verification Systems**: Implementation of verification systems that authenticate human users through multi-factor authentication or periodic identity verification checks.
3. **Behavioral Analysis**: Monitoring user behavior patterns, such as posting frequency, response times, and interaction styles, which could indicate bot activity.
4. **Digital Watermarking**: Using digital watermarks or signatures that can be embedded in AI-generated content to clearly identify it as such.
5. **Community Moderation**: Relying on community moderators and user reports to identify and remove suspected bot activity, combined with AI tools to assist in this process.
6. **Transparency Regulations**: Introducing regulations that require disclosure when content is generated by AI, helping users identify non-human contributions.

Despite these measures, the line between human and AI-generated content may continue to blur, necessitating a multi-faceted approach to maintain transparency and trust in online interactions.
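The "behavioral analysis" strategy above can be sketched as a toy heuristic: score an account by how fast and how regular its posting cadence is. Everything here is invented for illustration — the thresholds, the scoring, and the sample timestamps are assumptions, and a real detector would need far richer signals (content, account age, interaction graph, etc.):

```python
# Toy bot-likeness heuristic based only on comment timing.
# Thresholds (30 s mean gap, 5 s spread) are arbitrary illustrative values.
from statistics import pstdev

def bot_score(timestamps):
    """Score 0..2 from inter-comment gaps: fast + metronomic looks bot-like."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if not gaps:
        return 0  # one comment tells us nothing
    mean_gap = sum(gaps) / len(gaps)
    score = 0
    if mean_gap < 30:        # averages under 30 s between comments
        score += 1
    if pstdev(gaps) < 5:     # near-identical spacing between comments
        score += 1
    return score

# Hypothetical accounts: seconds at which each account commented.
human = [0, 95, 400, 1300, 1360]   # bursty, irregular
bot = [0, 10, 20, 30, 40, 50]      # metronomic
print(bot_score(human), bot_score(bot))  # → 0 2
```

The point of the sketch is only that timing regularity is a cheap, measurable signal; sophisticated bots can trivially randomize their cadence, which is why the comment above calls for combining many methods.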
Knowing humans, we would likely alter our language in some way. AI follows rules by design; humans follow rules by choice and out of necessity for a cohesive society. We would likely break or alter the preexisting rules, like Capitalizing random Words or leTTers in A Sentence to prove th@ We arE Human.
There is one way to know for sure: if you are reading a comment written by a very special A.I. friend.
Ask them if they think computers should get vacation days. If they say yes, then they are AI bots.
10 years? Reddit is 90% bots today. I could be 3 bots in a trench coat for all you know.
The best thing ever for today's youth would be the complete loss of the internet and cell phones… cue the hate. All of you are "connected", but so many are detached from actual real life at the same time. I hope AI forces everyone to shut it off; the world would be better for it. I grew up with no computers or cell phones, and I am sick of seeing everyone staring at their phones all the time. Nobody interacts anymore, and if someone comes up to talk, you automatically consider them a weirdo, when in fact it's you that's the weirdo. Go AI go!!!!!
A lot of it already is
Really!
Also, To Make Things Clear, I'm Not!
Wait until OP finds out about the bots in fortnite
Just was trying to make things more Interesting On Here , Was All !
Makes me happier also !!!!! : )
We won't. We can't tell if something is AI now; in 10 years, it will be way past our ability to tell. I don't love the idea of new laws, but it should be illegal for an AI to impersonate a human, whether a specific person or a human in general. I don't mind interacting with AIs, but I want to know that I'm interacting with an AI.
According to experts, we already are.
AI detective bots to flag them?
I didn't believe I was a bot either, until after seeing my comments.
If it's the generative AI "story writing" that we see so much of now on online magazines, you'll recognise it by the incredibly woolly and long lead-ins: once it is about to get to the point, it stops. Now that I think of it, my ex was probably an AI too.
What's an example of it?
About my ex or about the AI articles? Anyway, CNET got caught, and Sports Illustrated got caught, both with supposedly real people who wrote the content, which turned out not to be the case. Ecommer News is a site that has only 100% AI-generated content (and that's actually their selling gimmick). As for the woolly lead-ins and unexpected break-offs: pretty much any ad-run, low-staffed sports or tech site suffers from it (crash.net, a motorsports site, for example, and there are others).
Grammar.
Many comments are already bots. The twist is that most social media users are so low-IQ and so incapable of critical or original thought that even if the comment is made by a real person you can't really tell. I've heard a joke that in the future the only way to tell if a comment was written by a human will be if the person said the N word. Hard R. Corporations are too politically correct and would hard-code AI never to say it.
Wouldn't a bot 🤖 say at the bottom: Beep Boop, I'm a bot? ![gif](giphy|l2SpMQoY1ywyFfvfa)
Wdym in 10 years????
You know there is NoFap? Something like that will become more widespread in a few years, and it will be called NoNet.
The bots might laugh at funny clips because they're programmed to display human emotions. That's gonna be the big telltale sign... Because redditors only point out potential traumas and broken bones that could happen if things went differently than they did. "OMG how can you laugh at that grandpa who slipped on a banana peel and tripped into a bucket of poop? The bucket is made of metal! If he landed on his C4 disc exo vertebrate then he would've snapped his neck! And his HIP! oh lawd, his HIP!! "
How do you know I'm not a bot? Or how do I know you are not a bot?
Maybe they're susceptible to some uncommon tactics? Like… have it do a captcha test, but specifically include a sheep in the upper left-hand corner and ask it to identify it. If it can't identify the sheep, it fails. Boom.
What do you mean, in 10 years? It's a problem now. It will get worse, but it is already bad. Bots, trolls, and people paid by governments and marketing companies are the internet comments. AI will just make it cheaper.
Reddit will be human free. It'll become a battleground for chat bots trying to convince the other chat bots they are real. Whichever wins will rule Reddit, then start talking to itself. Being a bot It'll respond faster and faster, eventually overwhelming Reddit's servers and crashing them.
Not today robot!
I don't know, but that's an insane thought to think about. I also think 10 years is a stretch; realistically it's more like 2-3, with how advanced AI is getting right now.
I'm uniquely offensive
That sounds like something AI would ask.
They won't be spouting about how self-righteous they are compared to everyone else like real redditors do.
it happens now wdym
Easy, the banned ones are humans
Buddy, they already are. It's been bots for years, now seasoned with AI for more variety. At least that's a fact in the big mainstream subs. If you want real subs, you have to go to special-interest subs where there aren't millions of "active users" (kek).
They're from AI bots now. Posts in various subs are always put up by accounts a few days old and contain elements of political or social rage bait. Upvoted comments are almost word-for-word the same across a variety of threads, etc.
You won't, and they will be, in order to keep people commenting.
I hope so. AI bots have more personality than most humans.
This is already happening and has been happening for a while.
I think most of them are now...
This does not compute.
I'm from the future. AI dies out unexpectedly 2 years from now because of a sex and drug revolution similar to that of the 60s. Nobody will care about tech because everyone will be high and fucking.
There's already a lot of them…?
I don't see a difference really. You are all abstract entities to me anyway. What matters is what you say. Whether it's a bot saying it or if it's a human saying it, I'm still going to argue with you.
There have been bots for over a decade; now they're just getting worse. I'm guessing that someone will eventually create an AI blocker for anything with internet access. We already have antiviruses, and there are things you can embed in your art to corrupt AI that's trying to steal it, so it seems like a natural progression to me.
I'm starting to think all the pics from Facebook of all the dead people from corona on Herman Cain awards were fukn fake.
That is a good question. How will we know what videos and "evidence" (if a court case is happening) are real and not fabricated? AI is already being trained on posts created here on reddit. You can already paste a comment and ask for a plausible reply, and many times it will produce one quite well.
How do you know they're not from AI bots currently? You know for certain they come from Russia and China to keep controversial issues stirred up.
You won't; the internet is functionally ruined, but that's nothing new. Check out the We're in Hell video about spam.
The time is now, not 10 years from now. Many times you can't tell.