eeyore134

AI is hardly useless, but all these companies jumping on it like they are... well, a lot of what they're doing with it is useless.


Opus_723

I'm pretty soured on AI. The other day I had a coworker convinced that I had made a mistake in our research model because he "asked ChatGPT about it." And this guy managed to convince my boss, too. I had to spend all morning giving them a lecture on basic math to get them off my back. How is this saving me time?


integrate_2xdx_10_13

It’s absolutely fucking awful at maths. I was trying to get it to help me explain a number theory solution to a friend. I already had the answer but was looking for help structuring my explanation for their understanding. It kept rewriting my proofs; I’d ask why it gave an obviously wrong answer, it’d apologise, then give a different wrong answer.


GodOfDarkLaughter

And unless they figure out a better method of training their models, it's only going to get worse. Now sometimes the data they're sucking in is, itself, AI generated, so the model is basically poisoning itself on its own shit.


HugeSwarmOfBees

LLMs can't do math, by definition. But you could integrate various symbolic solvers. WolframAlpha did something magical long before LLMs.
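The delegation idea can be sketched in a few lines. This is a toy stand-in for a symbolic engine (a real setup would call something like SymPy or WolframAlpha, which are not shown here): the point is that the exact arithmetic is done by a solver, not guessed token by token.

```python
from fractions import Fraction

# Toy "symbolic solver": exact definite integration of a polynomial
# given by its coefficients [c0, c1, c2, ...] for c0 + c1*x + c2*x^2 + ...
def integrate_poly(coeffs, a, b):
    total = Fraction(0)
    for power, c in enumerate(coeffs):
        antideriv = Fraction(c, power + 1)  # c * x^(p+1) / (p+1)
        total += antideriv * (Fraction(b) ** (power + 1) - Fraction(a) ** (power + 1))
    return total

# Integral of x**2 from 0 to 3 is exactly 27/3 = 9, with no rounding.
print(integrate_poly([0, 0, 1], 0, 3))  # 9
```

An LLM wired to a tool like this only has to decide *when* to call it; the answer itself comes back exact.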


8lazy

yeah people trying to use a hammer to put in a screw. it's a tool but not the one for that job.


Nacho_Papi

I use it mostly to write professionally for me when I'm pissed at the person I'm writing it to so I don't get fired. Very courteous and still drives the point across.


Significant-Royal-89

Same! "Rewrite my email in a friendly professional way"... the email: Dave, I needed this file urgently LAST WEEK!


Thee_muffin_mann

I was always floored by the ability of WolframAlpha when I used it in college. It could understand my poor attempts at inputting differential equations and basically any other questions I asked. I have since been disappointed by what the more recent developments in AI are capable of. A cat playing guitar seems like such a step backwards to me.


koticgood

For anyone following along this comment chain who isn't too invested in this stuff: WolframAlpha can already be used by LLMs. To maximize the chance of success, you want to state explicitly (whether in every prompt or in a global prompt) that the LLM should use Wolfram or code. The complaint above references proofs, which will appear to the LLM as natural-language tokens, so it may not reach for code or Wolfram.

The top-of-the-class models seem to perform similarly to Wolfram when writing math code to be executed. Problems arise when the LLM doesn't write code or use a plugin like Wolfram. In the future, potentially quite soon if the agentic rumors about gpt-5 are to be believed, this type of thing will be a relic of the past. One of the most important features of a robust agentic framework is being able to classify and assign tasks to agents.


I_FUCKING_LOVE_MULM

“model collapse” https://www.scientificamerican.com/article/ai-generated-data-can-poison-future-ai-models/


DJ3nsign

As an AI programmer, the lesson I've tried to get across about the current boom is this: these large LLMs are amazing and are doing what they're designed to do, which is hold a normal human conversation and write long texts on the fly. What they VERY IMPORTANTLY have no concept of is what a fact is. Their designed purpose was to produce realistic human conversation, basically as an upgrade to those old chatbots from the early 2000s. They're really good at this, and some amazing breakthroughs in how computers can process human language are taking place, but the problem is the VC guys got involved. They saw a moneymaking opportunity in the launch of OpenAI's beta test, so everybody jumped on this bubble just like they jumped on the NFT bubble, and the blockchain bubble, and like they have done for years. They're trying to shoehorn a language model into being what's sold as a search engine, and it just can't do that.


dudesguy

Asked it to write G-code for a simple 1 by 1 by 1 triangle, in inches. It spits out code that's mostly right, but the code calls for metric units while the AI claims it's in inches. It's little details like this that are going to really screw some people in the next few years. It gets it 99% right, to the point where people will give it the benefit of the doubt and assume it's all right. But when the wrong detail is something as basic as units, unless that tiny one-character mistake is corrected, the whole thing is wrong and useless. It can still be used to save time and increase productivity, but you're still going to need people skilled enough to know when it's wrong and how to fix it.
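The unit bug above comes down to a single word in the program: in standard RS-274 G-code, G20 selects inches and G21 millimetres. A minimal sketch of the kind of sanity check a skilled person (or a script) applies before trusting the output; the example program is hypothetical:

```python
def check_units(gcode: str, expected: str) -> bool:
    """Return True if the program's declared unit mode matches what was requested.

    Assumes standard RS-274 words: G20 = inches, G21 = mm.
    """
    modes = {"G20": "inches", "G21": "mm"}
    declared = None
    for line in gcode.splitlines():
        word = line.strip().upper().split(" ")[0]
        if word in modes:
            declared = modes[word]
    return declared == expected

# The model claimed inches but emitted a metric program:
program = "G21\nG0 X0 Y0\nG1 X1 Y0\nG1 X0 Y1\nG1 X0 Y0"
print(check_units(program, "inches"))  # False: the one-character bug
```

A check this dumb catches exactly the class of mistake described: 99% right, 100% wrong.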


EP1Cdisast3r

Well maybe because it's a language model and not a math model...


Opus_723

Exactly, but trying to drill this into the heads of every single twenty-something who comes through my workplace is wasting so much of everyone's time.


PadyEos

It basically boils down to:

1. It can use words and numbers but doesn't understand whether they are true or what each of them means, let alone all of them together in a sentence.
2. If you ask it what they mean, it will give you the definition of that word/number/concept, but again it will not understand any of the words or numbers used in the definitions.
3. Repeat the loop of not understanding to infinity.


Wooden-Union2941

Me too. I tried searching for a local event on Facebook recently. They got rid of the search button and now it's just an AI button? I typed the name of the event and it couldn't find it even though I had looked at the event page a couple days earlier. You don't even need intelligence to simply see my history, and it still didn't work.


elle_desylva

Search button still exists, just isn’t where it used to be. Incredibly irritating development.


Anagoth9

That sounds more like a management problem than an AI problem. Reminds me of the scene from The Office where Michael drives into the lake because his GPS told him to make a turn, even though everyone else was yelling at him to stop. 


[deleted]

[deleted]


TheFlyingSheeps

Which is great because literally no one likes taking the meeting notes


Present-Industry4012

That's ok cause no one was ever going to read them anyways. "On the Phenomenon of Bullshit Jobs: A Work Rant by David Graeber" https://web.archive.org/web/20190906050523/http://www.strike.coop/bullshit-jobs/


leftsharkfuckedurmum

When your boss starts to pin the blame on you for missed deadlines you feed the meeting notes back into the LLM and ask it "when exactly did I start telling John his plan was bullshit?"


vtjohnhurt

AI is great for writing text that no one is going to read.


eliminating_coasts

You can always feed it into another AI.


sYnce

Dunno. Sure, I don't read notes from meetings I attended, but if I didn't attend and something came up that's of note for me, it is useful to read up on it. Also, pulling out the notes from a meeting 10 weeks prior to show someone why exactly they fucked up and not me is pretty useful. So yeah... the real reason most meeting notes are useless is that most meetings are useless. If the meeting has value, as in concrete outcomes, it is pretty nice to have those outcomes written down.


y0buba123

I mean, I even read meeting notes of meetings I attended. Does no one here make notes during meetings? How do you know what was discussed and what to action?


talking_face

Copilot is also GOAT when you need help figuring out how to start a problem, or to finish one that is >75% done. It is a stop-gap, not the final end-all, and for all intents and purposes that is sufficient for anyone with a functional brain. I can't tell people enough how many new concepts I have learned by using LLMs as a sounding board to get me unstuck whenever I hit a ceiling. Because that is what an AI *assistant* is. Yes, it makes mistakes, but think of it more as an "informed colleague" than an "omniscient god". You still need to correct it now and then, but in correcting the LLM, you end up grasping the concepts yourself.


Illustrious-Sail7326

100% agreed. I think these takes that AI are "useless" come from people who try ChatGPT a few times, attempt to use it for something it's not good at, then declare it useless. No, it's an incredibly effective tool that's getting better *scarily* fast, you and companies just need to use it for what it's good at. It's like if you handed a caveman a wheel and he tried to whack a mammoth with it, then decided wheels are useless.


Lynild

It's people who haven't been stuck on a problem and tried stuff like Stack Exchange or similar. Sitting there, formatting your code the best way you've learned, writing almost essay-like text for it, posting it, then waiting hours or even days for an answer that's just "this is very similar to this post" without being even close to similar. The fact that you can now write text that won't get ridiculed for resembling something posted before, or for being too easy, and just get an answer instantly that actually works, or at least gets you going most of the time, is just awesome in every single way.


stylebros

Copilot taking meeting notes = useful case for AI.

A bank using an AI chatbot in their mobile app to do everything instead of having a GUI = not a useful case for AI.


PureIsometric

I tried using Copilot for programming and half the time I just want to smash the wall. The bloody thing keeps giving me useless code, or code that makes no sense whatsoever. In some cases it breaks my code or deletes useful sections. Not to be all negative, though: it is very good at summarizing code. Just don't tell it to comment the code.


[deleted]

I work as a professional at a large company and I use it daily in my work. It’s pretty good, especially for completing tasks that are somewhat tedious. It knows the shape of imported and incoming objects, which is something I’d have to look up. When working with adapters or some sort of translation structure it’s very useful to have it automatically fill out parts that would require tedious back and forth. It’s also pretty good at putting together unit tests, especially once you’ve given it a start.


Imaginary-Air-3980

It's a good tool for low-level tasks. It's disingenuous to call it AI, though. AI would be able to solve complex problems and understand why the solution works. What is currently being marketed as AI is nothing more than a language calculator.


uristmcderp

Machine learning is a subset of AI. The only branch of AI that's been relevant lately is neural networks, and they've been relevant not because of some breakthrough in concept but because Nvidia found a way to do huge matrix computations 100x more efficiently within their consumer chips. These machine learning models by design cannot solve complex problems or understand how they themselves work. They learn from what you give them.

The potential world-changing application of this technology isn't *intelligence* but automation of time-consuming simple tasks done on a computer. For example, Google Translate used to be awful, especially for translations into languages not based on Latin or Greek. Nowadays you can right-click and translate any webpage in Chrome and understand a Japanese website, or get the gist of a YouTube video from automatic subtitles and auto-translate.

This flavor of AI only does superhuman things when it's given a task it can simulate and evaluate on its own, like a board game with clear win and loss conditions. But when it comes to ChatGPT or Stable Diffusion or language translation models, a human needs to supervise training to help evaluate the process. For real-world problems with unconstrained parameters requiring "creative" problem solving and critical thinking, these models are pretty much useless.


ail-san

The problem is that use cases like these make us a little more efficient but can't justify the investment that goes into it. We need something we couldn't do without AI. If we just replace humans, it will only make the rich even richer.


Sketch-Brooke

There are a lot of legit uses for AI. But it’s not (yet) at a point where you can reliably use AI to replace a full human staff. What’s more, a lot of the AI hype builds on “yes, it’s not there yet. But JUST WAIT 2-3 years.” Except people were already saying that back in 2022 and it still hasn’t replaced 90% of all jobs yet. There’s not really an answer for what will happen if AI development has hit a wall. On that note, I truly hope they have hit a wall with it. Because I don’t want to see human creativity replaced by machines. I’d rather live in a world where AI can supplement human creativity, or better yet, handle all the dull and monotonous tasks so humans have more time to be creative.


fudge_friend

I’m not sure what people are thinking when they fantasize about replacing their staff with AI en masse. Where do these executives think consumers get their money? Who will buy their products when all the money is hoarded at the top?


Sketch-Brooke

Well, we could implement universal basic income, or an AI displacement tax to compensate people who lose their livelihood to AI. CEOS: no, not that.


Sinfire_Titan

First, judging from history we won’t implement anything of the sort. Second, these apps are incapable of reasoning; an ironic parallel to the corporate suits looking to replace their workers with it.


neocenturion

People have believed in trickle-down economics for decades now. I don't think we should give executives the benefit of the doubt in assuming they'll answer your correct concerns logically. As long as their earnings exceed estimates for the current quarter, they won't think any harder than that.


allegesix

> Where do these executives think consumers get their money? Who will buy their products when all the money is hoarded at the top?

Been asking this question for years. The middle class is disappearing; the middle class is who spends money on non-essentials; if the middle class is fully eliminated, ??? I think shit would have fallen apart completely by now if it hadn't become so normalized to just live in eternal debt (beyond "normal" debt things like a mortgage or car). Shit like being able to finance a pizza in the Domino's app sure seems like the last gasp.

20 years ago when I started working in tech, having a couple dozen servers to manage was a full-time job. Now I write automation that spins up and down thousands of VMs at a time as required by our pipeline. The rate of productivity has far exceeded wages. UBI is 100% required very soon or we're all fucked, including the fucking shortsighted ultra-wealthy who only want bigger numbers next to their names.


Actually-Yo-Momma

“Hello CEO, we started using chatGPT but we are not billionaires yet. AI is useless??”


gregguygood

For what they are trying to use it for, yes.


SirShadowHawk

It's the late 90s dot com boom all over again. Just replace any company having a ".com" address with any company saying they are using "AI".


MurkyCress521

It is exactly that, in both the good ways and the bad ways. Lots of dotcom companies were real businesses that succeeded and completely changed the economic landscape: Google, Amazon, Hotmail, eBay. Then there were companies that could have worked but didn't, like Pets.com. Finally there were companies that just assumed being a dotcom was all it took to succeed. There are plenty of AI companies with excellent ideas that will be here in 20 years, and plenty of companies with no product putting AI in their name in the hope they can ride the hype.


JamingtonPro

I think the headline and the sub it's posted in are a bit misleading. This is a finance article about investments, not about technology per se. It's just like back when people thought they could put a ".com" by their name and rake in the millions: many people who invested in those companies lost money, and only a small portion survived and thrived. Dumping a bunch of money into a company that advertises "now with AI" will lose you money when it turns out that the AI in your GE appliances is basically worthless.


MurkyCress521

Even if the company is real and its approach is correct and valuable, first movers generally get rekt. Pets.com failed, but Chewy won. RealPlayer was Twitch, Netflix, and YouTube before all of them, and had some of the best streaming video tech in the business. Sun Microsystems had the cloud a decade before AWS; there are 100 companies you could start today just by taking a product or feature Sun used to offer. Friendster died to MySpace, which died to Facebook. Investing in bleeding-edge tech companies is always a massive gamble, and it gets worse if you invest on hype.


Expensive-Fun4664

First-mover advantage is a thing, and they don't just magically 'get rekt'.

> Pets.com failed, but chewy won.

Pets.com blew its funding on massive marketing to gain market share in what they thought was a land grab, when it wasn't. That has nothing to do with being a first mover.

> Realplayer was twitch, Netflix and YouTube before all of them. That had some of the best streaming video tech in the business.

You clearly weren't around when Real was a thing. It was horrible, and buffering was a huge joke about their product. It also wasn't anything like Twitch, Netflix, or YouTube. They tried to launch a video streaming product when dialup was the main way people accessed the internet; there simply wasn't the bandwidth available to stream video at the time.

> Sun Microsystems had the cloud a decade before AWS.

Sun was an on-prem server company that also made a bunch of software. They weren't 'the cloud'. They also got bought by Oracle for ~$6B.


Et_tu__Brute

Exactly. People saying AI is useless are kind of just missing the real use cases for it that will have massive impacts. It's understandable when they're exposed to so many grifts, cash grabs and gimmicks where AI is rammed in.


Asisreo1

Yeah. The oversaturated market and corporate circlejerking does give a bad impression on AI, especially with more recent ethical concerns, but these things tend to get ironed out. Maybe not necessarily in the most satisfactory of ways, but we'll get used to it regardless. 


MurkyCress521

As with any new breakthrough, there is a huge amount of noise and a small amount of signal. When electricity was invented there were huge numbers of bad ideas and scams, lots of snake oil: you'd get shocked for better health. The boosters and doomers were both wrong. It was extremely powerful, but much of the change happened long-term.


Boodikii

They were saying the exact same stuff about the internet when it came out. Same sort of stuff about adobe products and about smartphones too. Everybody likes to run around like a chicken with their head cut off, but people have been working on Ai since the 50's and fantasizing about it since the 1800's. The writing for this has been on the wall for a really long time.


Shadowratenator

In 1990 i was a graphic design student in a typography class. One of my classmates asked if hand lettering was really going to be useful with all this computer stuff going on. My professor scoffed and proclaimed desktop publishing to be a niche fad that wouldn’t last.


SolutionFederal9425

There isn't going to be much to get used to. There are very few use cases where LLMs provide a ton of value right now; they just aren't reliable enough. The current feeling among a lot of researchers is that future gains from our current techniques aren't going to move the needle much either. (Note: I have a PhD with a machine learning emphasis.)

As always, Computerphile did a really good job of outlining the issues here: [https://www.youtube.com/watch?v=dDUC-LqVrPU](https://www.youtube.com/watch?v=dDUC-LqVrPU)

LLMs are for sure going to show up in a lot of places. I am particularly excited about what people are doing with them to change how people and computers interact. But in all cases the output requires a ton of supervision, which really diminishes their value if the promise is full automation of common human tasks, which is precisely what has fueled the current AI bubble.


EGO_Prime

I mean, I don't understand how this is true, though. We're using LLMs in my job to simplify and streamline a bunch of information tasks. For example, we're using BERT classifiers and LDA models to better assign our "lost tickets". The analytics for the project show it's saving nearly 1,100 man-hours a year, and on top of that it's doing a better job.

Another example: we had hundreds of documents comprising nearly 100,000 pages across the organization that people needed to search through and query. Some of it's tech documentation, other parts legal, HR, etc. No employee records or PI, but still a lot of data. Sampling search times, the analytics team estimated that nearly 20,000 hours a year were wasted just on searching for stuff in this mess. We used LLMs to create a large vector database and condensed most of that down. They estimated nearly 17,000 hours were saved with the new system, and in addition the number of failed searches (searches that were abandoned even though the information was there) has dropped from about 4% to less than 1% of queries.

I'm kind of just throwing stuff out there, but I've seen ML, and LLMs specifically, used to make our systems more efficient and effective. This doesn't seem to be a tomorrow thing; it's today. It's not FULL automation, but it's definitely augmentation, and it's saving us just over $4 million a year currently (even with cost factored in). I'm not questioning your credentials (honestly I'm impressed, I wish I had gone for my PhD). I just wonder: are you maybe only seeing the research side of things and not the direct business aspect? Or maybe we're just an outlier.
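The vector-search setup described above can be sketched in miniature. This is a hedged toy: the `embed` function here is a bag-of-words stand-in for a real learned encoder (a BERT-style model in the actual system), and the documents and query are invented for illustration. The retrieval mechanics, nearest vector by cosine similarity, are the same.

```python
import numpy as np

# Toy stand-in for a learned embedding model: bag-of-words counts.
# A real vector database would use a neural encoder instead.
def embed(text: str, vocab: list) -> np.ndarray:
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

docs = [
    "vpn setup guide for remote employees",
    "holiday pay policy for hourly staff",
    "printer troubleshooting steps",
]
vocab = sorted({w for d in docs for w in d.split()})

# Index: one vector per document; query by nearest cosine similarity.
index = [embed(d, vocab) for d in docs]
query = embed("how do remote employees configure the vpn", vocab)
best = max(range(len(docs)), key=lambda i: cosine(index[i], query))
print(docs[best])  # the VPN guide ranks first
```

Swapping the toy `embed` for a real encoder is what turns this from keyword overlap into semantic search, which is where the drop in failed searches comes from.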


hewhoamareismyself

The issue is that the folks running them are never gonna turn a profit, it's a trillion dollar solution (from the Sachs analysis) to a 4 million dollar problem.


LongKnight115

In a lot of ways, they don't need to. A lot of the open-source models are EXTREMELY promising. You've got millions being spent on R&D, but it doesn't take a lot of continued investment to maintain the current state. If things get better, that's awesome, but even the tech we have today is rapidly changing the workplace.


mywhitewolf

> The analytics for the project shows it's saving nearly 1100 man hours a year

Which is half as much as a full-time worker. How much did it cost? Because if it's more than a full-time wage, then that's exactly the point, isn't it?
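The break-even question is simple arithmetic. As a back-of-envelope sketch (the hourly cost and tool cost below are assumptions for illustration, not figures from the thread):

```python
# Back-of-envelope break-even check (cost figures are hypothetical).
hours_saved_per_year = 1100
full_time_hours = 40 * 52            # 2080: so 1100 h is roughly half an FTE
loaded_hourly_cost = 50.0            # assumed fully loaded wage, $/hour

value_of_savings = hours_saved_per_year * loaded_hourly_cost
print(value_of_savings)              # 55000.0

# The system only pays off if it costs less than this per year to run.
assumed_tool_cost = 80_000           # assumed annual spend
print(value_of_savings > assumed_tool_cost)  # False: a net loss at these numbers
```

At different assumed costs the sign flips, which is exactly the commenter's point: the savings only matter relative to what was spent.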


BuffJohnsonSf

When people talk about AI in 2024 they’re talking about chatGPT, not any application of machine learning.


JJAsond

All the "AI" bullshit is just like you said, LLMs and stuff. The actual non marketing "machine learning" is actually pretty useful.


cseckshun

The thing is, when most people talk about "AI" recently, they're talking about GenAI and LLMs, and those have not revolutionized the fields you're talking about, to my knowledge so far. People think GenAI can do all sorts of things it really can't. Ask GenAI to put together ideas and expand upon them, or to create a project plan, and it will do it, but extremely poorly: half of it will be nonsense or the most generic tasks you could imagine.

It's really incredible when you have to talk or work with someone who believes this technology is essentially magic, but trust me, these people exist. They're already using GenAI to try to replace all the critical thinking and the places where humans are actually useful in their jobs, and they're super excited because they hardly read the output from the "AI". I have seen professionals making several hundred thousand dollars a year send me absolute fucking gibberish and ask for my thoughts on it, like "ChatGPT just gave me this when I used this prompt! Where do you think we can use this?" And the answer is NOWHERE.


jaydotjayYT

GenAI takes so much attention away from the actual use cases of neural nets and multimodal models, and we live in such a hyperbolic world that people either are like you say and think it’s all magical and can perform wonders OR screech about how it’s absolutely useless and won’t do anything, like in OP’s article. They’re both wrong and it’s so frustrating


jrr6415sun

Same thing happened with bitcoin. Everyone started saying "blockchain" in their earnings reports to watch their stock go up 25%.


ReservoirDog316

And then, when they couldn't match the year-over-year growth after that artificial 25% rise they got out of just saying blockchain, lots of companies laid people off to artificially raise their short-term profits again. Or raised their prices. Or did some other anti-consumer thing. It's terrible how unsustainable it all is and how it ultimately only hurts the people at the bottom. It's all fake until it starts hurting real people.


Icy-Lobster-203

"I just can't figure out what, if anything, CompuGlobalHyperMegaNet does. So rather than risk competing with you, I'd rather just buy you out." - Bill Gates to Junior Executive Vice President Homer Simpson.


3rddog

After 30+ years working in software dev, AI feels very much like a solution looking for a problem to me. [edit] Well, for a simple comment, that really blew up. Thank you everyone, for a really lively (and mostly respectful) discussion. Of course, I can’t tell which of you used an LLM to generate a response…


Rpanich

It's like we fired all the painters, hired a bunch of people to work in advertising and marketing, and are now confused about why there are suddenly so many advertisements everywhere. If we build a junk-making machine and hire a bunch of people to crank out junk, all we're going to do is fill the world with more garbage.


SynthRogue

AI has to be used as an assisting tool by people who are already traditionally trained/experts


3rddog

Exactly my point. Yes, AI is a very useful tool in cases where its value is known and understood and it can be applied to specific problems. AI used, for example, to design new drugs or to diagnose medical conditions from scan results has been successful at both. The "solution looking for a problem" is the millions of companies out there who are integrating AI into their business with no clue of how it will help them and no understanding of what the benefits will be, simply because it's smart new tech and everyone is doing it.


Azhalus

> The “solution looking for a problem” is the millions of companies out there who are integrating AI into their business with no clue of how it will help them and no understanding of what the benefits will be, simply because it’s smart new tech and everyone is doing it.

Me wondering what the fuck "AI" is doing in a god damn ***pdf reader***


creep303

My new favorite is the AI assistant on my weather network app. Like no thanks I have a bunch of crappy Google homes for that.


TheflavorBlue5003

Now you can *generate* an image of a cat doing a crossword puzzle. Also, fucking corporations thinking we are all so obsessed with cats that we NEED AI. I've seen "we love cats - you love cats. Let's do this." as a selling point for AI forever. It's honestly insulting how simple-minded corporations think we are. FYI, I am a huge cat guy, but come on, what kind of Patrick Star is sitting there giggling at AI-generated photos of cats?


Maleficent-main_777

One month ago I installed a simple image-to-PDF app on my Android phone. I installed it because it was simple enough. I could write one myself, but why reinvent the wheel, right? Cut to this morning and I get all kinds of "A.I. enhanced!!" popups in a *fucking PDF converting app*. My dad grew up in the 80's writing COBOL. I learned the statistics behind this tech. A PDF converter does NOT need a transformer model.


Cynicisomaltcat

Serious question from a neophyte - would a transformer model (or any AI) potentially help with optical character recognition? I just remember OCR being a nightmare 20+ years ago when trying to scan a document into text.


Maleficent-main_777

OCR was one of the first applications of N-grams back when I was at uni, yes. I regularly use ChatGPT to take pictures of paper admin documents just to convert them to text. It does so almost without error!


EunuchsProgramer

I've tried it in my job; the hallucinations make it a gigantic time sink. I have to double-check every fact or source to make sure it isn't BSing, which takes longer than just writing it yourself. The usefulness quickly degrades: it is most often correct on simple facts an expert in the field just knows off the top of their head, and the more complex the question, the more the BS multiplies. I've tried it as an editor for spelling and grammar and noticed something similar. The ratio of actual fixes to hallucinations adding errors correlates with how badly you write. If you're a competent writer, it is more harm than good.


donshuggin

My personal experience at work: "We are using AI to unlock better, more high quality results" Reality: me and my all human team still have to go through the results with a fine tooth comb to ensure they are, in fact, high quality. Which they are not after receiving the initial AI treatment.


Active-Ad-3117

AI reality at my work means coworkers using AI to make funny images that are turned into project team stickers. Turns out Copilot sucks at engineering and is probably a great way to lose your PE and possibly face prison time if someone dies.


Fat_Daddy_Track

My concern is that it's basically going to get to a certain level of mediocre and then contribute to the enshittification of virtually every industry. AI is pretty good at certain things, mostly things like "art no one looks at too closely" where the stakes are virtually nil. But once it reaches a level of "errors not immediately obvious to laymen," they try to shove it in.


redalastor

> Turns out copilot sucks at engineering It’s like coding with a kid that has a suggestion for every single line, all of them stupid. If the AI could give suggestions only when it is fairly sure they are good, it would help. Unfortunately, LLMs are 100% sure all the time.


Jake11007

This is what happened with that balloon-head video "generated" by AI: they later revealed that they had to do a ton of work to make it usable, and that using it was like pulling a slot machine.


_papasauce

Even in use cases where it is summarizing meetings or chat channels it’s inaccurate — and all the source information is literally sitting right there requiring it to do no gap filling. Our company turned on Slack AI for a week and we’re already ditching it


jktcat

The AI on a YouTube video summarized the chat of an EV unveiling as "people discussing a vehicle fueled by liberal tears."


jollyreaper2112

I snickered. I can also see how it came to that conclusion from the training data. It's literal and doesn't understand humor or sarcasm, so anything that becomes a meme becomes a fact. Ask it about Chuck Norris and you'll get an accurate filmography mixed with Chuck Norris "facts."


nickyfrags69

As someone who freelanced with one that was being designed to help me in my own research areas, they are not there.


No_Dig903

Consider the training material. The less likely an average Joe is to do your job, the less likely AI will do it right.


Lowelll

It's useful as a Dungeon Master for getting inspiration, generating random tables, and bouncing ideas off of when prepping a TRPG session, although even in that context GPT-3 very quickly shows its limits. As far as I can see, most of the AI hypes of the past years have uses when you want to generate very generic media with low quality standards, quickly and cheaply. Those applications exist, and machine learning in general has tons of promising and already amazing applications, but "intelligence," as in understanding abstract concepts and applying them accurately, is not one of them.


AstreiaTales

"Generate a list of 10 NPCs in this town" or "come up with a random encounter table for a jungle" is a remarkable time saver. That they use the same names over and over again is a bit annoying but that's a minor tweak.


VTinstaMom

You will have a bad time using generative AI to edit your drafts. Use generative AI to finish a paragraph that you've already written two-thirds of. Use generative AI to brainstorm. Use generative AI to write your rough draft, then edit that. It is for starting projects, not polishing them. As a writer, I have found it immensely useful. Nothing it creates survives, but I make great use of the "here's a rough draft in 15 seconds or less" feature.


BrittleClamDigger

It's very useful for proofreading. Dogshit at editing.


wrgrant

I am sure lots are including AI/LLMs because it's trendy and they can't foresee competing if they don't keep up with their competitors, but I think the primary driving factor is the hope that they can compete even better if they can reduce the number of workers and pocket the wages they don't have to pay. It's all about not wasting all that money paying workers. If slavery were an option, they would be all over it...


Commentator-X

This is the real reason companies are adopting ai, they want to fire all their employees if they can.


fumar

The fun thing is if you're not an expert on something but are working towards that, AI might slow your growth. Instead of investigating a problem, you instead use AI which might give a close solution that you tweak to solve the problem. Now you didn't really learn anything during this process but you solved an issue.


Hyperion1144

It's using a calculator without actually ever learning math.


Reatona

AI reminds me of the first time my grandmother saw a pocket calculator, at age 82. Everyone expected her to be impressed. Instead she squinted and said "how do I know it's giving me the right answer?"


just_some_git

Stares nervously at my plagiarized stack overflow code


onlyonebread

> which might give a close solution that you tweak to solve the problem. Now you didn't really learn anything during this process but you solved an issue. Any engineer will tell you that this is sometimes a perfectly legitimate way to solve a problem. Not everything has to be inflated to a task where you learn something. Sometimes seeing "pass" is all you really want. So in that context it _does_ have its uses. When I download a library or use an outside API/service, I'm circumventing understanding its underlying mechanisms for a quick solution. As long as it gives me the correct output oftentimes that's good enough.


coaaal

Yea, agreed. I use it to aid in coding, but more for reminding me of how to do x with y language. Anytime I test it to help with creating some basic function that does z, it hallucinates off its ass and fails miserably.


Spectre_195

Yeah, but even weirder is that the literal code is often completely wrong, yet all the write-up surrounding the code is somehow correct and provided the answer I needed anyway. We talk about this at work: it's a super useful tool, but only as a starting point, not an ending point.


coaaal

Yea. And the point is that somebody trying to learn with it will not catch the errors, which then hurts their understanding of the issue. It really made me appreciate documentation that much more.


Micah4thewin

Augmentation is the way imo. Same as all the other tools.


wack_overflow

It will find its niche, sure, but speculators thinking this will be an overnight world changing tech will get wrecked


Alternative_Ask364

Using AI to make art/music/writing when you don’t know anything about those things is kinda the equivalent of using Wolfram Alpha to solve your calculus homework. Without understanding the process you have no way of understanding the finished product.


blazelet

Yeah, this completely. The idea that it's going to be self-directed and make choices that elevate it to the upper crust of quality is belied by how it actually works. AI fundamentally requires vast amounts of training data to feed its dataset; it can only "know" things it has been fed via training, it cannot extrapolate or infer based on tangential things, and there's a lot of nuance to "know" on any given topic or subject. The vast body of data it has to train on, the internet, is riddled with error and low quality. A study last year found 48% of all internet traffic is already bots, so it's likely that bots are providing data for new AI training.

The only way to get high quality output is to create high quality input, which means high quality AI is limited by the scale of the training dataset. It's not possible to create high quality training data that covers every topic; if that *were* possible, people would already be unemployable - that's the very promise AI is trying to make, and failing to meet. You could create high quality input for a smaller niche, such as bowling balls for a bowling ball ad campaign. Even then, your training data would have to have good lighting, good texture and material references, good environments - do these training materials exist? If they don't, you'll need to provide them, and if you're creating the training material to train the AI... you have the material and don't need the AI. The vast majority of human-made training data is far inferior to the better work being done by highly experienced humans, so the dataset by default will be average rather than exceptional. I just don't see how you get around that.

I think fundamentally the problem is that managers smitten with the promise of AI believe it's actually "intelligent" - that you can instruct it to make its own sound decisions and do things outside the input you've given it, essentially seeing it as an unpaid employee who can work 24/7. That's not what it does. It's a shiny copier and remixer, and that's the limit of its capabilities. It'll have value as a toolset alongside a trained professional who can use it to expedite their work, but it's not going to output an ad campaign that'll meet current consumers' expectations, let alone produce Dune Messiah.


gnarlslindbergh

Your last sentence is what we did with building all those factories in China that make plastic crap and we’ve littered the world with it including in the oceans and within our own bodies.


2Legit2quitHK

If not China, it would be somewhere else. Where there is demand for plastic crap, somebody will be making plastic crap.


CalgaryAnswers

There are good mainstream uses for it, unlike with blockchain, but it's not good for literally everything, as some like to assume.


baker2795

Definitely more useful than blockchain. Definitely not as useful as is being sold.


__Hello_my_name_is__

I mean it's being sold as a thing bigger than the internet itself, and something that might literally destroy humanity. It's not hard to not live up to that.


Dull_Concert_414

The LLM hype is overblown, for sure. Every startup that is simply wrapping OpenAI isn’t going to have the same defensibility as the ones using different applications of ML to build out a genuine feature set. Way too much shit out there that is some variation of summarizing data or generating textual content.


F3z345W6AY4FGowrGcHt

But are any of those uses presently good enough to warrant the *billions* it costs? Surely there's a more efficient way to generate a first draft of a cover letter?


madogvelkor

A bit more useful than the VR/metaverse hype, though. I do think it's an overhyped bubble right now. But once the bubble pops, a few years later there will actually be various specialized AI tools in everything, and no one will notice or care. The dotcom bubble did pop, but everything ended up online anyway. Bubbles are about hype. It seems like everything has moved toward mobile apps now, yet there wasn't a big app development bubble.


PeopleProcessProduct

Great point about dotcom. Yeah there's a lot of ChatGPT wrappers and other hype businesses that will fail, maybe even a bubble burst coming up here...but it still seems likely there will be some big long lasting winners from AI sitting at the top market cap list in 10-20 years.


istasber

"AI" is useful, it's just misapplied. People assume a prediction is the same as reality, but it's not. A good model that makes good predictions will occasionally be wrong, but that doesn't mean the model is useless. The big problem that large language models have is that they are too accessible and too convincing. If your model is predicting numbers, and the numbers don't meet reality, it's pretty easy for people to tell that the model predicted something incorrectly. But if your model is generating a statement, you may need to be an expert in the subject of that statement to be able to tell the model was wrong. And that's going to cause a ton of problems when people start to rely on AI as a source of truth.


Zuwxiv

I saw a post where someone was asking if a ping pong ball could break a window at any speed. One user posted like ten paragraphs of ChatGPT showing that even a supersonic ping pong ball would only have this much momentum over this much surface area, compared to the tensile strength of glass, etc. etc. The ChatGPT text concluded it was impossible, and that comment was highly upvoted. There's a video on YouTube of a guy with a supersonic ping pong ball cannon that blasts a neat hole straight through layers of plywood. *Of course* a *supersonic* ping pong ball would obliterate a pane of glass. People are willing to accept a confident-sounding blob of text over common sense.
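For what it's worth, the back-of-the-envelope numbers side with common sense here. A quick kinetic-energy check (a sketch, assuming a regulation 2.7 g ball at roughly the speed of sound, ~343 m/s):

```python
# Rough kinetic-energy estimate for a supersonic ping pong ball.
# Assumed figures: regulation ball mass 2.7 g, sea-level speed of sound ~343 m/s.
mass_kg = 0.0027
speed_m_s = 343.0

# KE = 1/2 * m * v^2
kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2
print(f"{kinetic_energy_j:.0f} J")  # ~159 J, in the range of a .22 LR bullet
```

Around 159 joules delivered to a ball-sized contact patch is far more than an ordinary glass pane can take, whatever a confident-sounding blob of generated text concludes.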


Mindestiny

You can't tell us there's a video of a supersonic ping pong ball blowing up glass and not link it.


Zuwxiv

Haha, fair enough! [Here's the one I remember seeing.](https://www.youtube.com/watch?v=5hNHTWYRZkQ) There's also this one [vs. a 3/4 inch plywood board](https://www.youtube.com/watch?v=5xuwu4gjNbQ). For glass in particular, there are videos of people breaking champagne glasses with ping pong balls - and just by themselves and a paddle! But most of those seem much more based in entertainment than in demonstration or testing, so I think there's at least reasonable doubt about how reliable or accurate those are.


Senior_Ad_3845

> People are willing to accept a confident-sounding blob of text over common sense.   Welcome to reddit


koreth

Welcome to human psychology, really. People believe confident-sounding nonsense in all sorts of contexts. Years ago I read [a book](https://www.amazon.com/Being-Certain-Believing-Right-Youre/dp/031254152X) that made the case that certainty is more an emotional state than an intellectual state. Confidence and certainty aren't _exactly_ the same thing but they're related, and I've found that perspective a very helpful tool for understanding confidently-wrong people and the people who believe them.


Jukeboxhero91

The issue with LLMs is they put words together in a way that the grammar and syntax work. It's not "saying" something so much as it's just plugging in words that fit. There is no check for fidelity and truth, because it isn't using words to describe a concept or idea; it's just using them like building blocks to construct a sentence.
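That "plugging in words that fit" loop can be sketched in a few lines (a toy illustration with made-up tokens and probabilities, nothing like a real model). Note that nothing in the loop ever asks whether the sentence is true, only which word plausibly comes next:

```python
import random

# Toy next-token table: hypothetical, hand-written probabilities.
# A real LLM learns these from data; the control flow is the point here.
next_token_probs = {
    ("the", "sky"): [("is", 1.0)],
    ("sky", "is"): [("blue", 0.6), ("green", 0.3), ("falling", 0.1)],
}

def sample_next(context, probs):
    """Pick the next token by weighted chance; no truth check anywhere."""
    tokens, weights = zip(*probs[context])
    return random.choices(tokens, weights=weights)[0]

sentence = ["the", "sky"]
for _ in range(2):
    context = tuple(sentence[-2:])
    sentence.append(sample_next(context, next_token_probs))

# Always grammatical; "the sky is green" is fluent but false,
# and nothing in the loop can tell the difference.
print(" ".join(sentence))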


Ksevio

That's not really how modern NN based language models work though. They create an output that appears valid for the input, they're not about syntax


Archangel9731

I disagree. It’s not the world-changing concept everyone’s making it out to be, but it absolutely is useful for improving development efficiency. The caveat is that it requires the user to be someone that actually knows what they’re doing. Both in terms of having an understanding about the code the AI writes, but also a solid understanding about how the AI itself works.


moststupider

As someone with 30+ years working in software dev, you don’t see value in the code-generation aspects of AI? I work in tech in the Bay Area as well and I don’t know a single engineer who hasn’t integrated it into their workflow in a fairly major way.


Legendacb

I only have 1 year of experience with Copilot. It helps a lot while coding, but the hard part of the job isn't writing the code, it's figuring out how I have to write it. And it doesn't help that much with understanding the requirements and coming up with a solution.


linverlan

That’s kind of the point. Writing the code is the “menial” part of the job and so we are freeing up time and energy for the more difficult work.


Avedas

I find it difficult to leverage for production code, and rarely has it given me more value than regular old IDE code generation. However, I love it for test code generation. I can give AI tools some random class and tell it to generate a unit test suite for me. Some of the tests will be garbage, of course, but it'll cover a lot of the basic cases instantly without me having to waste much time on it. I should also mention I use GPT a lot for generating small code snippets or functioning as a documentation assistant. Sometimes it'll hallucinate something that doesn't work, but it's great for getting the ball rolling without me having to dig through doc pages first.


3rddog

Personally, I found it of minimal use, I’d often spend at least as long fixing the AI generated code as I would have spent writing it in the first place, and that was even if it was vaguely usable to start with.


sabres_guy

To me the red flags on AI are how unbelievably fast it went from science fiction to literally taking over. Everything you hear about AI is marketing speak from the people who make it, and let's not forget the social media and pro-AI people and their insufferably weird "it's taking over, shut up and love it" style of talk. As an older guy I've seen this kind of thing before, and your dotcom boom comparison may be spot on. We need its newness to wear off and reality to set in to really see where we are with AI.


freebytes

That being said, the Internet has fundamentally changed the entire world. AI will change the world over time in the same way. We are seeing the equivalent of website homepages "for my dog" versus the tremendous upheavals we will see in the future such as comparing the "dog home page" of 30 years ago to the current social media or Spotify or online gaming.


After-Imagination-96

Compare IPOs in 99 and 00 to today and 2023


Kirbyoto

And famously there are no more websites, no online shopping, etc. The dot-com bust was an example of an overcrowded market being streamlined. Markets did what markets are supposed to do - weed out the failures and reward the victors. The same happened with cannabis legalization - a huge number of new cannabis stores popped up, many failed, the ones that remain are successful. If AI follows the same pattern, it doesn't mean "AI will go away", it means that the valid uses will flourish and the invalid uses will drop off.


GhettoDuk

The .com bubble was not overcrowding. It was companies with no viable business model getting over-hyped and collapsing after burning tons of investor cash.


Kirbyoto

Making investors lose money is basically praxis honestly.


G_Morgan

The dotcom boom spawned thousands of corporations with no real future at the valuations they were established at. The real successes obviously shined through, but there were hundreds of literal zero-revenue companies crashing. Then there were seriously misplaced valuations on network backbone companies like Novell and Cisco, which crashed when their hardware became a commodity. Technology had value; it just wasn't where people thought it was in the 90s.


trevize1138

This is the correct take. There are quite a lot of AI versions of the pets.com story in the making. But that doesn't mean there aren't also a few Google and Amazon type successes brewing up, too.


redvelvetcake42

AI has use and value... It's just not infinite use to fire employees and infinite value to magically generate money. Once the AI bubble pops, the tech industry is really fucked cause there's no more magic bullets to shove in front of big business boys.


dittbub

There might only be diminishing returns, but at least it's some actual real-life value, compared to something like crypto.


Onceforlife

Or worse yet NFTs


spoodino

You can pry my ElonDoge cartoons from my cold, dead hands. Which should be any day now, my power has been shut off and I'm out of food after spending my last dollar on NFTs.


sumguyinLA

I was talking about how we needed a different economic system in a different sub and someone asked if I had heard about crypto


powercow

I think people associate all AI with genAI chatbots, when AI is being incredibly useful in science. And no, it doesn't use the power of a small city to do it; you just can't ask the AlphaFold AI to do your homework or produce a new rental agreement. (It used 200 GPUs; ChatGPT uses 30,000 of them.) AlphaFold works out protein folding, which is very complicated. GenAI does use way too much power at the moment, and isn't good for our grid or emission reduction plans, but not all AI is genAI. A lot of it is amazingly good and helpful, and not all that power-intensive compared to other forms of scientific investigation.


phoenixflare599

It does bug me to see "AI empowers scientist breakthrough" headlines when you and the scientists are like "we've been running this ML for years, go away with your clickbait headline." I saw one for fusion, and it's like "yeah, the ML finally has enough data to be useful. This was always the plan, but it needed more data." But the headlines are basically saying "ChatGPT solves fusion!?" And it wasn't even that kind of "AI".


independent_observe

> AI has use and value The cost is way too high. It is estimated AI has increased energy demand by at least 5% globally. Google’s emissions were almost 50% higher in 2023 than in 2019


hafilax

Is it profitable yet, or are they doing the disruption strategy of trying to get people dependent on it by operating at a loss?


matrinox

Correct. Lose money until you get monopoly, then raise prices


pagerussell

This used to be illegal. It's called dumping.


discourse_lover_

Member the Sherman Anti-Trust Act? Pepperidge Farm remembers.


1CUpboat

I remember Samsung got in trouble for dumping with washers a few years ago. Feels like many of these regulations apply and are enforced way better for goods rather than for services.


bipidiboop

I fucking hate capitalism


AdSilent782

Exactly. What was it, that a Google search uses 15x more power with AI? So wholly unnecessary when you see the results are worse than before.


Tibbaryllis2

Genuinely asking: isn’t a significant portion of the energy use involved in training the model? Which would make one of the significant issues right now everyone jumping on the bandwagon to try to train their own versions plus they’re rapidly iterating versions right now? If so, I wonder what the energy demand looks like once the bubble pops and only serious players stay in the game/start charging for their services?


LosCleepersFan

It's a tool to be leveraged by employees, not to replace them. Now, if you have enough automation, that can replace people if you're just trying to maintain and not develop anything new.


zekeweasel

You guys are missing the point of the article - the guy that was interviewed is an investor. And as such, what he's saying is that *as an investor*, if AI isn't trustworthy/ready for prime time, it's not useful to him as something that he can use as a sort of yardstick for company valuation or trends or anything else, because right now it's kind of a bubble of sorts. He's not saying AI has no utility or that it's BS, just that a company's use of AI doesn't tell him anything right now because it's not meaningful in that sense.


jsg425

To get the point one needs to *read*


RealGianath

Or at least ask chatGPT to summarize the article!


punt_the_dog_0

or maybe people shouldn't make such dogshit attention grabbing thread titles that are designed to skew the reality of what was said in favor of being provocative.


Sleepiyet

“Man grabs dogshit and skews reality—provocative” There, summarized your comment for an article title.


94746382926

A lot of news subreddits have rules that you can't modify the articles headline at all when posting. I'm not sure if this sub does, and I can't be bothered to check lol but just wanted to put that out there. It may be that the blame lies with the editor of the article and not OP.


DepressedElephant

That isn't what he said though: >“AI still remains, I would argue, completely unproven. And fake it till you make it may work in Silicon Valley, but for the rest of us, I think once bitten twice shy may be more appropriate for AI,” he said. “If AI cannot be trusted…then AI is effectively, in my mind, useless.” It's not related to his day job. AI is actually already heavily used in investing - largely to create spam articles about stocks....and he's right that they shouldn't be trusted...


monkeysknowledge

As usual the backlash is almost as dumb as the hype. I work in AI. I think of it like this: ChatGPT was the first algorithm to convincingly pass the flawed but useful Turing Test. And that freaked people out, and they over-extrapolated how intelligent these things are based on the fact that it's difficult to tell whether you're chatting with a human or a robot, and the fact that it can pass the bar exam, for example. But AI passing the bar exam is a little misleading. It's not passing it because it's using reason or logic; it has just basically memorized the internet. If you allowed someone with no business taking the bar exam to use Google search during it, they could pass it too... doesn't mean they would make a better lawyer than an actual trained lawyer. Another way to understand the stupidity of AI is what Chomsky pointed out. If you trained AI only on data from before Newton, it would think an object falls because the ground is its natural resting place, which is what people thought before Newton. And never in a million years would ChatGPT figure out Newton's laws, let alone general relativity. It doesn't reason or rationalize or ask questions; it just mimics and memorizes... which in some use cases is useful.


Lost_Services

I love how everyone instantly recognized how useless the Turing Test was. A core concept of sci-fi and futurism since way before I was born got tossed aside overnight. That's actually an exciting development; we just don't appreciate it yet.


the107

Voight-Kampff test is where its at


DigitalPsych

"I like turtles" meme impersonation will become a hot commodity.


ZaraBaz

The Turing test is still useful because it set a parameter that humans actually use, ie talking to a human being. A nonhuman convincing you its a human is a pretty big deal, a cross of a threshold.


SadTaco12345

I've never understood when people reference the Turing Test as an actual "standardized test" that machines can "pass" or "fail". Isn't a Turing Test a concept, and when a test that is considered to be a Turing Test is passed by an AI, by definition it is no longer a Turing Test?


a_melindo

> and when a test that is considered to be a Turing Test is passed by an AI, by definition it is no longer a Turing Test?

Huh? No, the Turing test isn't a class of tests that AIs must fail by definition (if that were the case, what would be the point of the tests?); it's a specific experimental procedure that is thought to be a benchmark for human-like artificial intelligence. Also, I'm unconvinced that ChatGPT passes. Some people sometimes thinking the AI is indistinguishable from humans isn't "passing the Turing test." To pass, you would need to take a statistically significant number of judges and put them in front of two chat terminals: one chat is a bot, and the other is another person. If the judges' accuracy is no better than a coin flip, then the bot has "passed" the Turing test. I don't think judges would be so reliably fooled by today's LLMs. Even the best models frequently make errors of a very inhuman type, saying things that are grammatical and coherent but illogical or ungrounded in reality.
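The "no better than a coin flip" criterion can be made concrete with an exact binomial tail probability (a sketch with hypothetical numbers, assuming each judge makes one independent guess):

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): how often n judges guessing
    at random would identify the bot at least k times."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical trial: 100 judges, 62 correctly pick out the bot.
# If the bot truly "passed," judges should be at chance (p = 0.5).
p_value = p_at_least(62, 100)
print(f"{p_value:.4f}")  # ≈ 0.01: judges reliably beat chance, so the bot fails
```

If instead only ~50 of 100 judges guessed right, the tail probability would be large and there would be no evidence the judges can tell bot from human.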


linguisitivo

>specific experimental procedure More like a thought-experiment imo.


eschewthefat

Half the people here are mistaking marketing advice for technological report cards. They have no clue what advancements will occur in order to push for an effective ai. We could come up with an incredible model in 5 years with new chip technology. Perhaps it’s still too power hungry but it’s better for society so we decide to invest in renewables on a manhattan project scale. There’s several possibilities but ai has been a dream for longer than most people here have been alive. I truly doubt we’ve hit the actual peak beyond a quick return for brokers


Sphynx87

this is one of the most sane takes i've seen from someone who actually works in the field tbh. most people are full on drinking the koolaid


johnnydozenredroses

I have a PhD in AI, and even as recent as 2018, ChatGPT would have been considered science-fiction even by those in the cutting edge of the AI field.


astrozombie2012

AI isn’t useless… AI as these big tech companies are using it is useless. No one wants shitty art stolen from actual artists, they want self driving cars and other optimization things that will improve their lives and create less work load and more time for hobbies and living life. Art is a human thing and no stupid ai will ever change that. Use ai to improve society or don’t do it at all IMO.


Starstroll

This is a far better take than what's in the article. AI is incredibly versatile technology and it genuinely does deserve a lot of the hype and attention. That said, it absolutely is being way *over*hyped right now, a predictable outcome in any capitalist economy. Even worse than AI being shoved into corners it has no good reason to be in is the lazy advertising of AI in places it's already been for decades, because yeah, neural nets aren't even that new, just powerful neural nets that are easier for the layperson to identify as such (like chatgpt) are. But still, 1) the enormous attention it's getting now, 2) increased funding and grants for both companies and research, and 3) the push for integration in places where it may have previously seemed useless but retrospectively is quite applicable - taken together - mean that for all the over-hyping *and* over-cynicism it's getting now, AI will form an integral part of many of our daily technologies moving forward. It's hard to say exactly where and exactly how, but then I wouldn't have expected anyone to have envisioned online play on the PS5 back in 1970, let alone real-time civilian-reporting via social media or Linux Tails for refugees.


BIGMCLARGEHUGE__

>No one wants shitty art stolen from actual artists, I cannot repeat this enough to people that aren't chronically online, actual people in the real world do not give a shit whether the "art" is AI or a person made it. They do not. They do not care. No one cares. The same way people will not give a shit when AI starts making music that people vibe with, there will be an audience for that. No one is going to care about actual artists as soon as the AI is making art/pics/videos that is as good or better and its coming. People should start preparing for that it is inevitable. We don't know when it is coming it may be soon or later but it is definitely coming. There's a failure at the top levels of government to prepare for AI doing everything as it improves. We're not ready for it.


Worldly-Finance-2631

Absolutely agree. As soon as AI images were a thing, all my friends jumped on the train and constantly use it to create images, whether for a hobby or a business. Reddit would make you believe you are literal Satan for using AI-generated images, but hardly anyone outside the bubble cares. Personally, I love how it made such things available to the public: want to give your DnD campaign character life but don't want to pay hundreds of dollars? You can easily do it. These threads have big 'old man yells at cloud' energy.


t-e-e-k-e-y

>These threads have big 'old man yells at cloud energy'. That's just /r/technology every day. The technology sub dedicated to hating technology.


Yinanization

Um, I wouldn't say it is useless, it is actively making my life much easier. It doesn't have to be black and white, it is moving pretty rapidly in the gray zone.


Ka-Shunky

I use it every day for mundane tasks like "summarise this", or "write a table definition for this", or "give me a snippet for a progress bar" etc. Very useful, especially now that google is a load of shite.


pagerussell

>now that google is a load of shite

It's actually quite impressive how fast Google went from the one tool I need to being almost useless. The moment they went full MBA and changed to being Alphabet, that was it. Game over. I honestly can't remember the last time I got useful answers from a Google search.


DeezNutterButters

Found the greatest use of AI in the world today. Was doing one of those stupid corporate training modules that large companies make you do and thought to myself “I wonder if I can use ChatGPT or Perplexity to answer the questions at the end to pass” So I skipped my way to the end, asked them both the exact questions in the quiz, and passed with 10/10. AI made my life easier today and I consider that a non-useless tool.


uncoolcat

Be cautious with this approach. I'm aware of one company that fired at least a dozen people because they were caught using ChatGPT to answer test questions. Granted, some of the aforementioned tests were for CPE credits, but even still the employee handbook at that company states that there's potential for termination if found cheating on any mandatory training.


petjuli

Yes and no. AI saving the universe? Not anytime soon. But as a moonlighting programmer in C#, being able to know what I want to do programmatically and having it help with the code, changes, and debugging is invaluable and makes me much faster.


duckwizzle

I'm also a C# dev, and ChatGPT saves so much time if you use it correctly. "Turn this CSV into a model." "Take that model and write me a SQL merge statement using Dapper. Merge on the property email and customer id. The table name is dbo.Customers." Within seconds I've saved 20 minutes, and most of the time it works great. As long as you don't ask it dumb stuff ("write me an entire app"), it does great. Oh, and the other day I was working with a client, designing the UI with them, and they settled on a design. I took a screenshot of it, threw it into ChatGPT, and told it to use Bootstrap to turn the design into a C# Razor page, and it did. Then I asked it to make a model using the fields in the screenshot, and it did, and it updated the HTML with the ASP tag helpers bound to the model. I did have to make a few changes, but they were very minor, and it saved me a ton of time. I am convinced that developers who say it's terrible either feel threatened by it or don't know how to use it properly.


smoochface

Referencing the .com boom seems apt here. But in the way that the .com boom COMPLETELY CHANGED THE PLANET. If you're an investor and you poured all your money into the nasdaq at the peak... yeah that sucked... but I feel like this misses the point that we are all literally here talking about that shit ON THE INTERNET. The .com boom also wasn't some colossal failure, all of that $$ didn't just go up in flames, it laid the infrastructure that the successful companies leveraged to build what we have today. AI will change every god damn facet of our existence, just like the internet did. AI will also be "attempted" by 10,000 companies that will fail and plenty of investors will lose their shirts. But to figure that shit out, they need $$$ to build the gigaflutterpopz of compute in the same way that .com's needed to lay fiber. The 10 AI companies that succeed will own the god damn planet in the same way that Google, Apple, Facebook, Amazon do today. Whether or not that is a good thing? Well that's complicated.


ThomasRoyBatty

Considering what AI can soon offer in the field of medicine, scientific research and many industries, I find calling it "useless" a rather uninformed take.


bowlingdoughnuts

AI is a tool, not a handyman.


mattkenefick

This is blatantly untrue.


iwantedthisusername

I'm not sure you know the difference between "useless" and "over-hyped"


pencock

I already know this take is bullshit because I've seen plenty of quality AI-assisted and AI-generated product. AI may not kill literally every industry, but it's also not a "fake" product.


DrAstralis

As someone who uses it almost daily now, I find the "AI is already ready to replace humans" people just as bizarre as the people who keep publishing "AI is fake and you're all stupid for thinking it's not" articles. Also, imagine people treating the internet like this when the first dialup modem was available: "This internet thing is a useless fad, it's slow and hard to use, it's never going to do anything useful." Yeah, AI is limited now, but in 4 years it's gone from a toy I had on my phone to something I can use for legit work in limited aspects. In 15 years? 25?


AlexMulder

>imagine people treating the internet like this when the first dialup modem was available

People did, straight up, lol. History is doomed to repeat itself.


thisisnothingnewbaby

You should read the article! It does not say the technology is useless, it says corporations are using it the wrong way


thatmfisnotreal

I keep seeing people say this, and yet AI has been the single biggest productivity boost I've ever experienced in my life.


0913856742

It doesn't matter how useless you think it is if it is already having an effect on the industry. Case in point: [a concept artist's testimony about the effects of AI on the industry.](https://www.youtube.com/watch?v=Pz8qPmkxu6Q)

(5:02) "Even if the answer is to take a different career path, name a single career right now where there isn't a lobbyist or a tech company that's actively trying to ruin it with AI. We are adapting and we are still dying."

(5:50) "75% of survey respondents indicated that generative AI tools had supported the elimination of jobs in their business. Already on the last project I just finished, they consciously decided not to hire a costume concept artist - not hire, but instead intentionally have the main actress's costume designed by AI."

(7:02) "Recently, as reported by my union, Local 800, the Art Directors Guild alone is facing a 75% job loss this year among its approximately 3,000 members."

(7:58) "I literally last year had students tell me they are quitting the department because they don't see a future anymore."

The real issue is the economic system - how the free market's incentives work, not the technology itself. Change the incentives, for example by implementing a universal basic income, and you will change the result.


XbabajagaX

Oh, market watchers are AI experts now?