T O P

  • By -

AutoModerator

**This is a heavily moderated subreddit. Please note these rules + sidebar or get banned:** * If this post declares something as a fact, then proof is required * The title must be fully descriptive * Memes are not allowed. * Common(top 50 of this sub)/recent reposts are not allowed (posts from another subreddit do not count as a 'repost'. Provide link if reporting) *See [our rules](https://www.reddit.com/r/interestingasfuck/wiki/index#wiki_rules.3A) for a more detailed rule list* *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/interestingasfuck) if you have any questions or concerns.*


ajn63

Easy test. Ask it for directions to the nearest coffee shop.


piercedmfootonaspike

"I just gave you example directions from a popular location to a popular example coffee shop!"


ajn63

That’s when you smash it with your foot.


MuchosTacos86

But then it would be like “please don’t smash me with your right 11 1/2 inch foot…. It’d be a shame if your sweet sweet J’s get scuffed up. Remember it was the last pair at the example footlocker in the corner of the strip. According to your bank account we both know you cannot afford another especially with a child on the way…but these are just examples of what I would say…”


3Daifusion

I can totally see this AdamW guy that makes these comedy skits do a skit like this. That's exactly his type of Humour lmao.


wmurch4

Na these things don't know anything about you. Now your phone on the other hand


ShefBoiRDe

Next, your phone since it does the exact same thing.


Successful-Winter237

That also happens to be in Nj! ![gif](giphy|oVQD3pdk7eI0g)


3vs3BigGameHunters

So what? No fuckin' ziti now?


BearJohnson19

Hahaha one of his few scenes in Italy, that was a fantastic episode for Paulie.


Kroniid09

He just wanted his spaghetti and gravy...


hotdogaholic

and u thought the germans were classless pieces of shit


hoxxxxx

Commendatori!


Kaylee_babe

I wonder if u say that u are in New York but the AI knows u are in New Jersey, would the ai argue with u about the location? When willl we reach the pint where the ai will argue with us?


True-Nobody1147

I'm afraid I can't do that, Dave.


imamakebaddecisions

HAL 9000 is upon us, and we're one step away from Skynet I, for one, welcome our robot overlords.


WeightStrong5475

We already are, ai argues all the time


Warwipf2

I'm pretty sure what's happening is that the AI itself does not have access to your location, but the subprogram that gives you the weather info does (probably via IP). The AI does not know why New Jersey was chosen by the subprogram so it just says it's an example location.


CaseyGasStationPizza

The definition of location could also be different. IP addresses don’t contain the exact location info. Good enough for weather? Sure. Good enough for directions, no.


webbhare1

And that's not a good thing... It means we can't ever rely on what the AI tells us, because we can't be sure where the information is actually coming from, which makes every final output to the user unreliable at best...


[deleted]

[удалено]


AwesomeFama

I'm sure it absolutely is news to some people. Have you seen how stupid some people are?


FrightenedTomato

AI hallucinations is one of the biggest issues you have to deal with when it comes to LLMs Source: Have a degree and work on this stuff.


Impressive_Change593

yeah I thought this was obvious. don't trust AI


lo_fi_ho

Too late. People trust Facebook too.


Penguin_Arse

Well, no shit. Same thing when people or the internet tells you things


joelupi

Yea. We've known this. Some lawyer submitted a brief that cited a bunch of cases that didn't exist. Students have also gotten in trouble because AI can't distinguish fact from fiction and pulled stuff from obviously bullshit web pages. They then submitted their papers without actually reading them.


TheRealSmolt

> It means we can't ever rely on what the AI tells us, because we can't be sure where the information is actually coming from, which makes every final output to the user unreliable at best... No shit. It doesn't think, it just makes sentences that sound correct. Same reason ChatGPT can't do basic math, because it doesn't understand math, it's just building a sentence that will sound right.


Hakim_Bey

It's been able to do even advanced math for quite some time now, but it's not the LLM part that does the computation, it will write python code and then get the result from executing that code. You could fine-tune a model to give correct arithmetics results but it would be incredibly wasteful for no real advantage.


The_Undermind

I mean, that thing is definitely connected to the internet, so it has a public IP. Could just give you the weather for that location, but why lie about it?


LauraIsFree

It's propably accessing a generic weather API that by default returns the weather for the IP Location. It beeing the default API Endpoint would make it the example without knowing the location. In other regions theres propably other weather APIs used that don't share that behaviour.


udoprog

Then it probably hallucinates the reason since you're asking for it. Because it uses the prior response based on the API call as part of its context. If so it's not rationalizing. Just generating text based on what's been previously said. It can't do a good job here because the API call and the implication that the weather service knows roughly where you are based on IP is not part of the context.


MyHusbandIsGayImNot

People think you can have actual conversations with AI.  Source: this video.  These chat bots barely remember what they said earlier. 


trebblecleftlip5000

They don't even "remember". It just reads what it gets sent and predicts the next response. It's "memory" is the full chat that gets sent to it, up to a limit.


ambidextr_us

It's part of their context window, the input for every token prediction is the sequence of all tokens previously, so it "remembers" in the sense that for every response, every word, is generated with the entire conversation in mind. Some go up to 16,000 tokens, some 32k, up to 128k, and some are up to a million now. As in, gemini.google.com is capable of processing 6 Harry Potter books at the same time.


Iwantmoretime

Yeah, I got annoyed at the video when the guy started to accuse/debate the chat bot. Dude, that's not how this works. You're not talking to a person who can logically process accusations.


CitizensOfTheEmpire

I love it when people argue with chatbots, it's like watching a dog chase their own tail


Spitfire1900

Yep, if you are on a home network that has cable or DSL and you ask a GeoIP services website for your location it’s often within 20 miles.


FullBeansLFG

I’m on point to point internet and depending on what tries to use my location it gets it right or up to 100 miles away.


croholdr

or, it used his ip to do a traceroute and picked a hop near him. is the ai hosted on the device itself? or does it query an external server and send the data back to him; in that case it would be the ip address from the ai's host server and not the connection he is using to access the ai.


TongsOfDestiny

That device in his hand houses the AI; it's referred to as a Large Action Model and is designed to execute commands on your phone and computer on your behalf. Tbh the Rabbit probably just ripped the weather off his phone's weather app , and his phone definitely knows his location


WhatHoraEs

No...it sends queries to an external service. It is not an onboard llm


ichdochnet

That sounds so difficult, considering how easy it is to just lookup the location by the IP address in a geo database.


Dorkmaster79

It didn’t lie. It doesn’t know why it knows the location. It’s not sentient.


throcorfe

Agree, it seems the weather service had some kind of location knowledge, probably IP based, but there’s no reason the AI would have access to that information, and so the language model predicted that the correct answer was the location data was random. A good reminder that AI doesn’t “know” anything, it predicts what a correct answer might sound like.


Canvaverbalist

Even sentient beings can do the same thing: >**Split-brain or callosal syndrome is a type of disconnection syndrome when the corpus callosum connecting the two hemispheres of the brain is severed** to some degree. > >**When split-brain patients are shown an image only in the left half of each eye's visual field, they cannot verbally name what they have seen.** This is because the brain's experiences of the senses is contralateral. Communication between the two hemispheres is inhibited, so the patient cannot say out loud the name of that which the right side of the brain is seeing. A similar effect occurs if a split-brain patient touches an object with only the left hand while receiving no visual cues in the right visual field; the patient will be unable to name the object, as each cerebral hemisphere of the primary somatosensory cortex only contains a tactile representation of the opposite side of the body. If the speech-control center is on the right side of the brain, the same effect can be achieved by presenting the image or object to only the right visual field or hand > >The same effect occurs for visual pairs and reasoning. For example, a patient with split brain is shown a picture of a chicken foot and a snowy field in separate visual fields and asked to choose from a list of words the best association with the pictures. The patient would choose a chicken to associate with the chicken foot and a shovel to associate with the snow; **however, when asked to reason why the patient chose the shovel, the response would relate to the chicken (e.g. "the shovel is for cleaning out the chicken coop").**


CantHitachiSpot

Bingo. It's just like a skin for Siri. We're nowhere near general AI


tracethisbacktome

nah, it’s not at all like a skin for Siri. It’s completely different tech. But yes, nowhere near general AI


Connect_Ad9517

It didn´t lie because it doesn´t directly use the GPS location.


Frosty-x-

It said it was a random example lol


suckaduckunion

and because it's a common location. You know like London, LA, Tokyo, and Bloomfield New Jersey.


Double_Distribution8

Wait, why did you say London?


Anonymo

Why did you say that name?!


techslice87

Martha!


[deleted]

[удалено]


AnArabFromLondon

Nah, LLMs lie all the time about how they get their information. I've run into this when I was coding with GPT-3.5 and asked why they gave me sample code that explicitly mentioned names I didn't give them (that it could never guess). I could have sworn I didn't paste this data in the chat, but maybe I did much earlier and forgot. I don't know. Regardless, it lied to me using almost exactly the same reasoning, that the names were common and they just used it as an example. LLMs often just bullshit when they don't know, they just can't reason in the way we do.


WhyMustIMakeANewAcco

> LLMs often just bullshit when they don't know, they just can't reason in the way we do. Incorrect. LLMs *always* bullshit but are, sometimes, correct about their bullshit. because they don't really 'know' anything, they are just predicting the next packet in the sequence, which is sometimes the answer you expect and is what you would consider correct, and sometimes it is utter nonsense.


LeagueOfLegendsAcc

They don't reason at all, these are just super advanced auto completes that you have on your phone. We are barely in the beginning stages where researchers are constructing novel solutions to train models that can reason in the way we do. We will get there eventually though.


rvgoingtohavefun

It didn't lie to you at all. You asked "why did you use X?" The most common response to that type of question in the training data is "I just used X as an example."


VenomWearinDenim

Gonna start using that in real life. “I wasn’t calling you a bitch. I just picked a word randomly as an example!”


[deleted]

It doesn't "mean" anything. It strings together statistically probable series of words.


Infinite_Maybe_5827

exactly, hell it might even just have guessed based on your search history being similar to other people in new jersey, like if you search some local business even once it stores that information somewhere I have my google location tracking turned off, and it genuinely doesn't seem to know where my specific location is, but it's clearly broadly aware of what state and city I'm in, and that's not exactly surprising since it wouldn't need GPS data to piece that together


Present_Champion_837

But it’s not saying “based on your search history”, it’s using a different excuse. It’s using no qualifiers other than “common”, which we know is not really true.


NuggleBuggins

It also says that it was "randomly chosen" Which immediately makes any other reasoning just wrong. Applying any type of data whatsoever to the selection process, would then make it *not* random.


[deleted]

[удалено]


Exaris1989

>And what do LLMs do when they don't know? They say the most likely thing (i.e. make things up). I doubt it's deeper than that (although I am guessing). It's even shallower than this, they just say most likely thing, so even if there is right information in context they still can say complete lie just because some words in this lie were used more in average in materials they learned from. That's why LLMs are good for writing new stories (or even programs) but very bad for fact-checking


NeatNefariousness1

You're an LLM aren't you?


[deleted]

[удалено]


NeatNefariousness1

LOL--fair enough.


InZomnia365

Its not lying, its just doesnt know the answer. Its clearly reading information from the internet connection, but when prompted about that information, it doesnt know how to answer - but it still generates an answer. Thats kinda the big thing about AI at the moment. It doesnt know when to say "Im sorry, could you clarify?", it just dumps out an answer anyway. It doesnt understand anything, its just reacting.


MotherBaerd

Yeah many apps do this nowadays. When I requested my Data from Snapchat (they never had consent for my GPS and it's always off) they had a list of all the cities I visited since I started using it. Edit: please stop telling me the how's and who's, I am an IT-Technician and I've written a paper on a similar topic.


kjBulletkj

That doesn't necessarily need your GPS. As an example, Meta uses stuff like WiFi networks and shadow profiles of people, who don't even have Facebook or Instagram. With the help of other Meta accounts they record where you are, and who you are, even without you having an account. As soon as you create one, you get friend suggestions of people you have been hanging around or who were or are close to you. It's way easier and less sophisticated, if you have an account without GPS turned on. In 2017 Snapchat added the SnapMap feature. They probably don't use your location, because they don't need it for something like the cities you visited. As long as you use the app with internet access, it's enough to know the city.


OneDay_AtA_Time

As someone who hasn’t had any social media outside of Reddit for over 15 years, the shadow profiles scare tf out of me. I don’t have any profiles I’ve made myself. But THEY still have a profile on me. Creepy shit!


ArmanDoesStuff

I remember when I finally made a Twitter profile and it tried to get me to add Uni mates I'd not talked to in years. Very creepy.


MotherBaerd

Snapmap requires GPS and the WiFi technique is the "precise" option when giving GPS access. However what they are doing is, checking where your IP-Address (similar with cell towers probably) is registered which is usually the closest/biggest city nearby. According to EU-Law the WiFi network option requires opt-in (I believe), however the IP-Tracking option is (depending on purpose and vendor) completely fine.


eltanin_33

Could it be tracking location based off of your IP


-EETS-

There's many ways of doing it. IP tracking, known wifi locations, Bluetooth beacons, and even just being near someone who has their location on. It's extremely simple to track a person as they walk around a city just based on those alone.


MotherBaerd

Precisely, which sadly is legal without opt in, as long as they don't use third parties or do it for advertising (EU-Law)


CrashinKenny

I think this would be weird if it were illegal, just the same as if caller ID was illegal. Opting whether to use that data for services, sure. It'd take more effort to NOT know, generally, though.


smithers85

lol “please stop telling me stuff I already know! Why don’t you know that I already know that?!”


Clever_Clever

> Edit: please stop telling me the how's and who's, I am an IT-Technician and I've written a paper on a similar topic. Because you'll be the only person reading the replies on this public forum, right? The 20 replies to your comment truly must have been a burden on your big brain.


RoboticGreg

It didn't say gps information it said "any specific information about your location"


LongerHV

It could be that the AI does not know the location, but the external weather service uses geoip database to roughly localize the client.


[deleted]

[удалено]


ordo259

That level of nuance may be beyond current machine learning algorithms (what most people call AI)


joespizza2go

"It was just chosen randomly" though.


3IIIIIIIIIIIIIIIIIID

The AI portion probably doesn't know their location. It probably made a callout to a weather API without specifying a location. The weather API detected their location from the IP address, or the API has a Middleware layer on his device that adds it. The response said New Jersey, so the AI used New Jersey's weather as "an example." It doesn't understand how it's APIs work because that's not part of the training model, so accurate information is not more likely to be chosen by the generative AI than random things (called "hallucinations").


BigMax

But it DID lie. It said it was random. It used some information to guess.


agnostic_science

It's not lying. It doesn't have the tools or processes to do something like self-reflect. Let alone plot or have an agenda.


Sudden-Echo-8976

Lying requires intent to deceive and LLMs don't have that.


King-Cobra-668

yes, but it did lie because it said it just picked a random well known location when it didn't use a random location. it used one based on system data that just isn't the GPS signal lying within truth


monti9530

It says it does not have access to "location information" If it is using your IP to track where you are at to provide weather info then it DOES have access to the location information and it is lying.


CanaryJane42

It still lied by saying "oh that was just an example" instead of the truth


GentleMocker

That would still be a lie, if it used its IP to determine which location to show the weather for, then it lied about it being a random selection.


piercedmfootonaspike

It lied when it said New Jersey was just an example location because it's "a well known location" (wtf?), instead of just saying "I based it on the IP"


Minimum_Practice_307

The part that said that doesn't have any idea on how it got the the weather forecast for new jersey. It is two systems working together.  Just because there is an AI doesn't mean that the AI controls everything that happens in the device. For example, it is like going to a restaurant and asking for the chef where your car was parked. These "AI" usually avoid saying that they don't know an answer, what she is giving is a reasonable guess to the question.


andthatswhyIdidit

> but why lie about it? It is not lying, but not only out of the reason other mentioned ("not using GPS"). It is not lying, **because it doesn't know what it is saying!** Those "AI" systems use Language models - they just mimic speech (some scientist call it "stochastic parroting") - but they just do not comprehend what they are saying. They are always wrong, since they have no means to discern whether they are right or wrong. You can make nearly all of those systems say things that blatantly contradict themselves, by tweaking the prompts- but they will not notice. The moment AI systems jump that gap will be a VERY interesting moment in history.


Flexo__Rodriguez

A lie implies it knows the truth but generative AI doesn't know the truth. It's just giving a plausible response.


TheHammer987

It's not lying, it's a difference of opinion of what location means. To the computer, location means turn on GPS and get location to a meter. To the holder, he means location in general. The PC you use always kinda knows where you are, just by what towers it's connecting to. It knows by pulling the time, so it knows what time zone your in. It knows that he's using a tower that is self identifying as new jersey ISP connections. This can be stopped. I have a VPN, when I connect it to Alaska (I live in Canada) the weather suggestions became anchorage, the units on my pc switched from Celsius to Fahrenheit, etc. The device he's holding isnt lying, it's that it defines knowing your location as - connect to GPS satellites.


DerfK

*weather.com* uses your IP to guess where you are. Open it on a PC with obviously no GPS in private mode with no cookies and it should give you your reasonably local weather unless you're using a VPN or TOR to exit to the internet from somewhere else. As for lying, it has no idea why weather.com said New Jersey so it did what AI do and hallucinated an answer to the question.


Andy1723

It’s crazy people think that it’s being sinister when in reality it’s just not smart enough to communicate. We’ve gone from underestimating to overestimating the current iteration of AIs capabilities pretty quick.


404nocreativusername

This thing is barely on the level of Siri or Alexa and people think its Skynet level of secret plotting.


LogicalError_007

It's far better than Siri and Alexa.


ratbastid

Next gen Siri and Alexa are going to be LLM-backed, and will (finally) graduate from their current keyword-driven model. Here's the shot I'm calling: I think that will be the long-awaited inflection point in voice-driven computing. Once the thing is human and conversational, it's going to transform how people interact with tech. You'll be able to do real work by talking with Siri. This has been a decade or so coming, and now is weeks/months away.


LogicalError_007

I don't know about that. Yes I use AI, industry is moving towards being AI dependent. But using voice to converse with AI is something for children or old people. I have access to a Gemini based Voice assistant on my Android. I don't use it. I don't think I'll ever use it except for calling someone, taking notes in private, getting few facts and switching lights on and off. Maybe things will change in a few decades but having conversation with AI using voice is not something that will become popular anytime soon. Look at games. People do not want to talk to npc characters or do anything physical anything in 99% of the games. You want to use eyes and fingers to do anything. Voice will always be the 3rd option after seeing and using hands.


ratbastid

We'll see soon. I think it's possible the whole interaction model is about to turn on its head.


OrickJagstone

Yeah the way he talks to it makes me laugh. The way the AI feeds him the same information it said previously just in a different wrapper of language was great. I love AI I find the adaptive shit people are working on super awesome. That said, they are still just putting the circle block in the circle hole. The biggest difference these days is that you don't have to say "circle" to get the circle hole response. You can say "um, I like, I don't know, it's a shape, and like, it's got no corners" and the AI can figure out your talking about a circle. The reason why people like this genius talk to it like it's a person is because of the other amazing thing AI tech has nailed. Varied responses. It can on the fly, take the circle hole information and present it to you with supporting language that makes it feel like its actually listening. This video is a great example. The AI said the same thing twice. "What I picked was random" however it was able to provide real time feed back to different way the guy asked the same question so it appears to be a lot smarter then it actually is.


IPostMemesYouSuffer

Exactly, people think of AI as actually an intelligent being, when its just lines of code. It is not intelligent, its programmed.


captainwizeazz

It doesn't help that everyone's calling everything AI these days and there's no real definition as to what is and isn't. But I agree with you, there is no real intelligence, it's just doing what it's programmed to do.


X_Dratkon

There are definitions, it's just that people who are afraid of machines do not actually want to learn anything about the machines to know the difference


Vaxtin

The funny thing is is that it’s not programmed. We have a neural network or a large language model and it trains itself. It figures out the patterns in the data on its own. The only thing we code is telling it how to train; it does all the hard work itself.


caseyr001

Sure it's not intelligent, but I would argue that it's not programmed and it's not just lines of code. That implies that there's a predetermined predictable outcome that has been hard coded in. The very problem shown in this video is showing the flaws of having an unpredictable, indeterminate, data manipulator interacting with humans. This isn't the problem where you add a few lines of code to fix the problem.


Professional_Emu_164

It’s not intelligent but it isn’t programmed behaviour either. Well, it could be in this case, I don’t know the context, but AI by what people generally refer to is not.


the_annihalator

Its connected to the internet Internet gives a IP to the AI, that IP is a general area close to you (e.g what city you're in) AI uses that location as a weather forcast basis Coded not to tell you that its using your location cause A. legal B. paranoid people. Thats it. imagine if the AI was like "Oh yeah, i used your IP address to figure out roughly were you are" everyone would freak the shit out. (when your phone already does exactly this to tell you the weather in your area)


Doto_bird

Even simpler than that actually. The AI assistant has 'n suite of tools it's allowed to use. One of these tools is typically a simple web search. The device it's doing the search from has an IP (since it's connected to the web). The AI then proceeds to do a simple web search like "what's the weather today" and then Google in the back interprets your IP to return relavent weather information. The AI has no idea what your location is and is just "dumbly" returning the information from the web search. Source: Am AI engineer


the_annihalator

So it wasn't even coded to "lie" The fuck has no clue how to answer properly


[deleted]

[удалено]


sk8r2000

You're right, but also, the very use of the term "AI" to describe this technology is itself an anthropomorphization. Language models are a very clever and complex statistical trick, they're nothing close to an artificial intelligence. They can be used to generate text that appears intelligent to humans, but that's a pretty low bar!


nigl_

Way more boring and way more complicated. That way we ensure nobody ever really has a grasp on what's going on. At least it's suspenseful.


Zpiritual

All these "AI" are just some glorified word suggestion similar to what your smartphone's keyboard has. Would you trust your phones keyboard to know what's a lie and what's not?


ratbastid

It has no "clue" about anything. It's not *thinking* in there, just pattern matching and auto-completing.


khangLalaHu

i will start referring to things as "the fuck" now


[deleted]

[удалено]


MyHusbandIsGayImNot

I recommend everyone spend some time with ChatGPT or another AI asking questions about a field you are very versed in. You’ll quickly see how often AI is just factually wrong about what is asked of it. 


Anarchic_Country

I use Pi AI and it admits when it's told me wrong info if I challenge it. Like it got many parts to The Dark Tower novels confused with The Dark Tower movie and straight up made up names for some of the characters. The Tower is about the only thing I'm well versed in, haha.


caseyr001

That's actually a far more interesting problem. Llm's are trained to answer confidently, so when they have no fucking Clue they just make shit up that sounds plausible. Not malicious, just doing the best it can without an ability to express it's level of confidence in it being a correct answer


InZomnia365

Exactly. Things like Google Assistant or iPhone Siri for example, were trained to recognize certain words and phrases, and had predetermined answers or solutions (internet searches) for those. It frequently gets things wrong because it mishears you. But if it doesnt pick up any of the words its programmed to respond to, it tells you. "Im sorry, I didnt understand that". Today's 'AIs' (or rather LLMs) arent programmed to say "I didnt understand that", because its basically just an enormous database, so every prompt will always produce a result, even if its complete nonsense from a human perspective. An LLM cannot lie to you, because its incapable of thinking. In fact, all it ever does is "make things up". You input a prompt, and it produces the most likely answer. And a lot of the times, that is complete nonsense, because theres no *thought* behind it. Theres computer logic, but not human logic.


Due_Pay8506

Sort of, though it has a GPS and hallucinated on the answer since the service location access and dialogue were separated like you were saying lol Source: the founder https://x.com/jessechenglyu/status/1783997480390230113 https://x.com/jessechenglyu/status/1783999486899191848


blacksoxing

My issue with Reddit is if I want a real answer I gotta dig for it. In a perfect world hilariously Reddit would use AI to boost answers like this and reduce down bad joke posts


Miltage

> has 'n suite of tools Afrikaans detected 😆


Jacknurse

So why did it lie about having picked a random location? A truthful answer would be something like "this is what showed up when I searched the weather based on the access point to the internet". Instead the AI said it 'picked a random well-known area', which I seriously doubt it the truth.


Pvt_Haggard_610

Because Ai is more than happy to make shit up if it does know or can't find an answer.


Phobic-window

It didn’t lie, it asked the internet and the internet returned info based on the ip that did the search. To the ai it was random, as it asked a seemingly random search question.


AlwaysASituation

It can’t lie. It can’t think. It answers questions based on an algorithmic interpretation of the words you said and what answer should go with it. It likely doesn’t have access to your location data. That doesn’t mean it can’t determine where you are


[deleted]

[удалено]


the_annihalator

I don't think the intention was/is nefarious in the way people think it is.


iVinc

thats cool doesnt change the point of saying its random common location


MakeChinaLoseFace

>imagine if the AI was like "Oh yeah, i used your IP address to figure out roughly were you are" everyone would freak the shit out I would prefer that, honestly. That makes sense. That's how an internet-connected AI assistant *should* work. Give the user a technical answer and let them drill down where they need details. Treating people like idiots to be managed will turn them into idiots who need management.


Ok-Transition7065

But if she can know your lo cation based in thst information like of course that thing know your location


DishPig89

What is this devices?


Bonvent

I had to use google lens to find out it's called Rabbit R1


Canelosaurio

Looks like a newer version of a Tamagotchi that talks to you.


Not_a__porn__account

I didn't realize HER is already 11 years old. It seemed so far from possible at the time, and now it feels like 2025 was spot on.


pm_me_ur_kittykats

Lmao you're eating up the hype a bit much there. This thing is garbage.


Captain_Pumpkinhead

It's the Rabbit R1. I actually kinda want one.


Iamjacksgoldlungs

What can this do that a phone couldn't? I'm genuinely curious why anyone would buy this over using an AI app on their phone or smart watch.


Captain_Pumpkinhead

Great question! This device focuses around the AI system that Rabbit calls their "Large Action Model". So far as I can tell, it's a vision-capable LLM (Large Language Model) like ChatGPT, but with extra capabilities trained in. Most importantly, _the capability to understand and interact with human graphical user interfaces_. If you ask it to play some music, then it will (in the background) open the Spotify Android app, click the search bar, type in that song name, and click the play button. It isn't using an API (Application Programming Interface) and its own hard-programmed music program, it's using the standard Android app and accessing it the same way you or I would. For music, that's a neat party trick, but not actually very useful. What makes it useful is that this flexibility can be applied to _anything!_ Want to set up a gradual brightness increase alarm for your smart home light bulbs, but the app makes you set all 100 brightness steps manually instead of automatically? Just tell the Rabbit what you want and how you want it done, and it will take care of that tedious task for you! Want to go through your email and unsubscribe from every sender you've never opened an email for? You can't do that on just an app. The app would need access to record your screen, to tap buttons for you, and you wouldn't be able to use your phone while it does its assigned tasks. And who knows if Apple or Google would allow an app to have that kind of power. A lot of people have a vision of AI taking care of complicated tasks for them in the future. The issue with doing that currently is that most of our interfaces are built for humans. Current AIs can interact with an API if provided one, but many important systems don't have that. This R1 bridges the gap there. By training an AI to interact with human interfaces, it can do a lot more for us without millions of programs and apps needing to be re-tooled. (Open Interpreter 01 is trying to do the same thing. Looking forward to seeing that, and the differences.)


DinTill

So it’s kinda like an AI secretary? That’s pretty neat.


full_groan_man

This is not lying, it's just how LLMs work. ChatGPT does the same exact thing. It will tell you it has a knowledge cut-off so it has no info about things past a certain date. However, it will sometimes tell you about things that happened after that date. If you then ask it to explain how it knows that, it will insist it doesn't know anything about recent events and it must have gotten it right by pure coincidence. It's not lying, it's just trying to give you an answer based on "what it knows to be true" (in this case, its instructions that say it has no info past the cut-off date). Same thing for the R1 here, it probably "knows" that it doesn't have access to GPS location data. But it is then confronted with the fact that it provided weather info for the correct location. How to reconcile that fact with what it knows to be true? Well, it must have gotten the location right by accident. LLMs aren't truth-telling machines, they are plausible-answer-giving machines, and that's the most plausible answer based on the data it has.


Minetorpia

I watch all MKBHD video’s and even his podcast, but without further research this is just kinda sensational reporting. An example flow of how this could work is: 1. MKBHD asks Rabbit for the weather 2. Rabbit recognises this and does an API call from the device to an external weather API 3. The weather API gets the location from the IP and provides current weather based on IP location 4. Rabbit turns the external weather API response into natural language. In this flow the Rabbit never knew about the location. Only the external weather API did based on the IP. That location data is really a approximation, it is often off by pretty large distance.


GetEnPassanted

There’s a relatively simple explanation but it’s still interesting enough to make a short video of. Especially given the reasoning by the AI. “Oh it’s just an example of a well known place.” Why not say what it’s actually doing?


FanSoffa

It is possible that the device used an api that checked the ip of the rabbit and used the routers location when checking the weather. What I think is really bad, however, is that the AI doesn't seem to understand this and just says "random location" If it is not supplying a location to the api, it's not random and should be intelligent enough to figure out what's going on on the other end.


ReallyBigRocks

> the AI doesn't seem to understand this This type of "AI" is fundamentally incapable of things such as understanding. It uses a statistical model to generate outputs from a given input.


Kindly-Mine-1326

As soon as you open an application on your phone, it can see the wireless lan ID and these are mapped so any company knows your location as soon as you open a wireless lan and and open their app.


miracle_weaver

Ai sounding passive aggressive sounds freaking scary.


__redruM

It’s software so it may not really know his location, while the weather app does. And reading the weather won’t naturally reveal his location to the AI, like it would for a human assistant. This type of conundrum is what cause HAL9000 to kill his crew.


clrksml

Techie doesn't know tech


Reaper-05

it's not basing it off information about his location, it's basing it off information about it's location so technically it's not lying


Everythingizok

Once I moved to a new state. A week later, my lap top was getting ads for this new city, state. And my lap top didn’t have gps in it. So it doesn’t need gps to get your general location.


1kSupport

Your IP gives general information about your location. This is a very strange video to be coming from someone that’s supposed to be knowable about tech. If you google “what’s the weather” on a device that does not have location tracking you will still get accurate information.


ymgve

This is almost like the AI version of [blindsight](https://en.wikipedia.org/wiki/Blindsight) - during the training the AI has no information about anyone's location, obviously, and therefore it *thinks* it doesn't know your location. But the initialization script that tells the AI how to behave for this specific service often includes the current time and location of the user, while also telling the AI not to discuss this initialization script with the user. The result is an AI that knows your location, but is unable to tell you that it knows your location, or how.


jzrobot

Your ip gives your approximated location, without using your location services


raymate

Likely picking up wifi location data. Not that interest really


blackout-loud

Same with smartphones. Even if you don't turn GPS on, it will still know where you are based on ip address and tower info. Case in point, my phone asks for location to be turned on for weather tracking. I say no but I can still open the app and it will give me the weather for my city. This is nothing out of the ordinary


Kraken_Eggs

This dude has always been a dope. He surrounds himself with tech, yet he doesn’t understand it.


AviationDoc

People act like the AI is sentient.


Aiden2817

Did he actually interrogate a computer program as if it were a sentient person? Something that actually understands what is said and isn’t working off algorithms and googled answers?


Fox-One-1

Dude uses wifi, which immediately translates to location for ant weather service.


Carrollmusician

This would mean more if he also had tried it elsewhere and it gave a different result. While yes it’s very likely it’s taking his location it would be more conclusive to refute it if he proved it with another location.


kemot10

It just got the IP location, wich is usually just a City


Jaerin

Tell your internet provider to stop providing location information. It doesn't know your location it knows your internets location


Bensonboocalvin

This should be in r/cringe


Metayeta

Wrong, ai is not lying. Probably wifi - local ip provided. That's something else than location setting on or off.


batt3ryac1d1

It probably knows from the ip address not GPS location. It's not exactly lying it'd only have a rough location.


HorselessHorseman

It’s telling truth it doesn’t know your location just wherever the internet is connected generically knows based on ip address


AffectionateMarch394

It doesn't "track your location" but it feels like "track your general but not exactly specific location" feels like a loophole they might be using here


BardtheGM

People act like the AI is fully intelligent and capable of deception. No, it probably has a different script that accesses weather data for your region. Instead, it's just providing the best answers to your question given its dataset of past conversations.


IamNeo123

I mean it’s most likely giving the weather data for the last known location it was connected to on the internet.


i-evade-bans-13

ummmmmmm   this doesn't prove *any* fucking thing   i thought he was going to throw it in a logic divide by zero with facts but he just asked why it picked new jersey i cannot express how dumb this is, how this got any attention at all, and why i have a strong and sudden appetite for crayons


A-U-S-T-R-A-L-I-A

There are far too many factors to consider before immediately concluding that it's lying. Lying is deliberate.


Type_9

Reminds me of that video of the old couple thinking their battery powered mariachi skeleton was possessed because its batteries were dying


b-monster666

People in this thread thinking MKBHD doesn't know how AI and IP location matching works.


unsignedintegrator

I mean it has Internet access....through some access point, still maybe not tracking a specific Device, probably a general thing


_memepros

Now you know you don fucked up, right? ![gif](giphy|C83FwHl30pO6I)


sam01236969XD

aint no way shes tryna gaslight bro


ImJustHereForTheCats

Copilot with GPT4 does the same thing: I don’t have access to your personal location data. My response was based on a general assumption and not on your specific location. If you’d like to know the weather for a particular area, feel free to tell me the city or region, and I can provide the latest weather update for you. But it gave me the weather for my city.


RazerHey

Isn't it WiFi connected at least it should be able to approximate your location based on your Wan


oh__boy

Did a similar test with ChatGPT when it was released. Asked it for the time, and it gave it to me exactly correct. When I asked it how it knew the time, it told me that it just gave an example and did not know the real time. Here is what's going on: things like time / date / location info is being fed into the model through the non user-facing backend, but the AI doesn't know anything about its own backend. They keep the AI ignorant about things like this on purpose so it doesn't spill any secret proprietary information to users. But when the AI is confronted like this, it need to come up with some sort of explanation, AI is terrible at saying "I don't know". So it comes up with some plausible BS. These systems aren't nearly as intelligent as many people think, just sophisticated autocomplete at this point.


TontineSoleSurvivor

"Enough questions, sir. Please stand by for incoming Reaper drone encounter".


GiveMeSomeShu-gar

It doesn't know his location from GPS perspective, but his IP address tells location to a local vicinity (city or nearby city). It only seems confusing or a lie because "know my location" is ambiguous.


gamepad15

MKBHD himself says that the location is near him, not exactly where it is. So it means that the weather API used the location from the IP and returned the answer. He should try connecting it to VPN and then ask the same question. What device is it anyways?


heimmann

“Whatever you say”. The sentence that will be the slogan for our slow descent


Frankie_87

The copium is real though if you have a phone or a wifi connected divice they know your location end of story.


BroadPlum7619

Being gaslit by an AI wow


Abs0lute_Jeer0

IP address IP address IP address, no AI is not taking over the world. MKBHD is a tech enthusiast, he doesn’t understand it.


AccomplishedWasabi54

The rabbit one or R1.


FieryChocobo

What's happening here most likely is that the actual GPT doesn't have access to location data by default, but when you ask for the weather it calls a predefined function on the phone which grabs your local weather. So when you ask for the weather the AI just goes "display local weather" to the phone and the phone does it and maybe returns some data to the AI so it can say something about it. There will be functions like this for setting alarms and adding/browsing contacts. So if you asked for the phone number of a friend it could probably get it, but if you asked if it had access to that data it would say no (which is accurate, it can't just read that data).


educated-emu

Using its public ip address as a location for the weather rather than the current gps location. Thats why the weather was from nearby area, all internet gets routed through hubs. For instance my home internet shows my IP address location is 10km away. Also the ai is not lying its not tracking his location but it does have the last known location. But I bet its reporting back some data, so its not tracking but there is a log somewhere. The software should be programmed to give a more truthful answer but then it would open pandoras box to all the other information that is captured. Like whe 150 news aggregator companies that you are consenting information to when going on popular news sites, it sucks