This has mirrored what I've seen in my company. People in the data science/ML part of the company are super excited about AI and are always giving presentations on it and evangelizing it. Most engineers in other areas, though, are generally underwhelmed every time they try using it. It's being heavily pushed by AI "experts" and senior leaders, but the enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises that the "experts" keep making. Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle. You can only fool people for so long.
> Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle.
According to FRED/Indeed[1], software job openings have been roughly flat for 2-3 years, and they've actually been slightly increasing again. What data source are you looking at?
Flat at 60% of pre-covid hiring while the number of graduates continue to increase and there's still a backlog of people who were laid off. That's not a particularly optimism inducing hiring market.
Do not with a straight face act like pre-COVID hiring levels were a Good Thing. They weren’t. They were a symptom of a broken economy that you personally happened to pretty directly benefit from.
Thing is, the companies doing these layoffs rarely actually end up losing money from overhiring. They’re still profitable. Just not profitable enough for the people on top.
That’s a bit perverse. In democracies, corporations ultimately exist to serve society, not shareholders.
The plutocracy is forgetting that a working and productive populace - with fair wages and representation - is their end of the deal for disproportionally benefitting from the fruits of labor from others; and directly prevents violence against the status quo. See: The top articles in the last 3 days.
The companies doing the layoffs are themselves stating AI as a reason; that’s the news people are responding to. The parent didn’t claim that it’s based on reality, but it informs public opinion.
Whether or not the CEOs' statements are true, they affect public opinion.
You have CEOs claiming that AI is driving layoffs alongside CEOs of Anthropic and OpenAI talking about the end of white collar work. All this is then amplified by tech journalists like Casey Newton and Kevin Roose. The biggest public proponents of AI keep telling people that it will take their jobs.
What comes after the end of jobs? Who knows. Sam Altman occasionlly making vague statements about curing cancer. There are vague hand-waving notions of a Star Trek utopia.
But to be honest it feels more like a Cyberpunk future, where the Altmans and Musks get to live cancer-free and the rest of us eek out an existence without jobs or any prospect for a better life. Or maybe it looks more like Star Trek, but we're all red shirts.
Anything Musk or Altman say is just about raising money. Nothing they say can be taken at face value. There’s a funny interview with Mark Anderseen, where he talks about how he never looks backwards and doesn’t have any sense of introspection and then gets into a rambling and completely wrong history lesson. That’s what these guys do.
The better question to ask is what happens after the end of OpenAI/Tesla/etc? AI may take your job away, but not because of robots replicating your labor, just good old-fashioned economic collapse.
There have been a lot of headlines the past couple years about companies stating they are doing layoffs or slowing hiring because of AI. I would bet the average adult pays way more attention to news headlines than FRED reports.
I also don't see why everyone would dismiss the statements of large company CEOs about why they are making hiring/firing decisions, regardless of what some statistics say.
>According to FRED/Indeed[1], software job openings have been roughly flat for 2-3 years, and they've actually been slightly increasing again.
None of this contradicts OP's claim, because at least anecdotally, juniors/interns are getting disproportionately squeezed by AI. Why hire an intern to write random scripts/tests for you, when claude code does the same thing? Therefore overall job posting could be flat or slightly rising, but that's only because everyone is rushing to hire senior/principals staff to wrangle all the AI agents, offsetting the junior losses.
Is that data useful at all? Indeed postings are a poor proxy for how many people actually get hired. One of the major problems we have is that employment statistics are largely just estimates, and don’t reflect reality on the ground. Factor in the Trump admin firing most of the BLS and other agencies for not giving him the numbers he wants, and there really is no reliable data.
I feel like the junior problem contributes more heavily than people might think. The people on top see juniors as replaceable since they view them as cheap menial labor, whereas most seniors at least acknowledge the human element as part of the benefit
Today's juniors are tomorrow's seniors, or more importantly today's seniors' yesterday.
They do the dirty, repetitive work, learn the systems inside out, take note of the flaws, and fix them if they are motivated and the system/process allows.
Thinking them as replaceable, worthless gears is allowing your organization rot from inside. I can't believe people can't see it.
Plenty of people see it - but, to a hiring team, a junior is an extremely risky investment. They demand a high cost relative to when they can start contributing actual value, may not work out, or may hop ship the moment they become competent. It is rational for a business to want to eliminate this risk. It's possible that everyone is acting rationally here, knowing it will lead to a result that is not favorable down the line - because the immediate benefit is too great to consider the latter.
In other words the gamble of hiring expensive juniors with shiny degrees is greater to them than the gamble of not having competent seniors a few years down the line. And that risk may be overblown - people are still hiring some juniors, it's not like it has stopped entirely - so future seniors will likely just be worth more than they are currently. To some, that may be worth the risk, especially if you believe AI will continue to get stronger.
I am not saying I agree with this decision making, more pointing out the thought process. We have had to have similar discussions where I am but are still hiring juniors, FYI. That's basically all we're hiring right now, actually, because the market for strong juniors is very good right now.
It's not an economic decision, it's a cultural one. Are you investing to build something useful and sustainable? Or are you exploiting for a profitable quarter?
I read someone compare the mindset to that of a drug-dealer. In any given neighborhood, a handful of people get very wealthy, at the expense of the stability and potential of everyone else. Our elite are drug-dealers - literally, in some cases. And conditions are deteriorating about how you'd expect.
> In other words the gamble of hiring expensive juniors with shiny degrees is greater to them than the gamble of not having competent seniors a few years down the line.
I mean, writing the code which makes mon^H^H^H^H provides value for minimum cost is the ultimate goal of a software company, but any competent CS grad or anyone with basic algorithms knowledge knows that greedy algorithms can't solve all problems. Sometimes the company needs to look ahead, try, fail and backtrack.
Nerdy analogies aside, self-sabotaging whole sector with greedy shortsightedness is a pretty monumental misstep. It's painful yet unbelievably hilarious at the same time. Pure dark comedy.
The problem is that it's systemic. The entire system rewards the short term thinking, so that even people with some awareness of what's happening tend to contribute to it all. People are fantastically good at finding reasons to work at places like OpenAI, Anthropic, Google, Meta, Palantir, X, etc. And once they're there, they similarly figure out how to justify the actions they're taking.
and what happens in half a generation or so when those seniors start retiring? the only way software production will meet demand is if the fewer seniors out there are propped up by way more competent ai than we have now. that also means the work will fundamentally change from being massively nerdy to moderately nerdy with the ability to work with ai. many of the people in the computer industry now just wont be attracted to that type of work.. what will they do? become physicists or mathematicians? and what type of person is tomorrows senior software developer?
edit: maybe todays computer nerds will become tomorrows backyard hackers, the only ones able to beat the ai.
If people need to be retained for many years, is the solution to give a bunch of stock that vests over many years? It would be interesting if such incentives (the need to hang onto talent that has been incubated for many years) could bring about a return to one-company careers.
not just fast and loose though… I bet the number of companies who are not drowning in “AI strategies” meetings is roughly equal to number of ICE Agents who are Democrats
This would have been a great revelation for decision-makers across the economy to have had about 20 years ago. Instead, they took every opportunity to turn the job market into the Hunger Games. Congrats to the people who survived the Cornucopia; the rest of us have been bleeding out for well over a decade.
It's not that juniors are replaceable, but that hiring them is a high variance move. Few, if any, know whether a candidate is just memorizing leetcode and is going to be a dud, costing you effort before they get a PIP, or you are hiring a very talented individual that will be contributing in 2 weeks
.
With seniors, you risk less, just because the track record makes the very worst candidates unlikely.
The use-cases for data science and other engineers are different. AI is not uniformly good at all kinds of development.
There is an issue with execs pushing it though. You have people at the top of the company with little to no idea how people work attempting to micromanage tool usage. It is as if you had a group of execs determining what IDEs people could use.
No-one is getting fired because of AI. The start of this year is the start of companies beginning to use AI. The reason layoffs are happening is because of the massive overhiring after Covid.
How long after COVID are we going to be able to keep using this excuse? This is starting to feel like the politician blaming his predecessor even though he's been in office for years. In the year 2033, Company X lays off another 10,000, just as it did each year since 2023, again blaming massive over-hiring during COVID, ten years ago.
> How long after COVID are we going to be able to keep using this excuse?
I am with you but if you look what happened after COVID it is a big line going waaaaay up. COVID was a significant event and there is no way around it, no? the OPs comment is invalid because we below the pre-COVID (by miles) but COVID should be taken into account (everyone seems to use it to further some agenda by looking at just one particular aspect of what happened post-COVID)
> It is as if you had a group of execs determining what IDEs people could use.
its worse than that; its more like determining what ide you use and also mandating how much time you spend in it, and then chewing you out at review time because you used jira and confluence too much instead of writing md files in the blessed ide of their choice
Using claude and friends takes all the fun out of the job, so I'm not surprised engineers are not enthusiastic. It's cool for 1 month then you realize we went from solving problems and implementing algos and optimizing slow code and fixing security issues and other fun stuff, to writing prompts all day long.
Not really managers, I would put the new role more in the senior engineer / architect category. Those still have to deal with deeply technical things like design, architecture, problem decomposition, research, domain expertise, code review, collaborating with technical peers -- all of which (people) managers don't typically do.
If you ever wanted to climb the senior technical ladder, this is now the quickest way to experience it. Except instead of other people you get to work with agents which, while a very different experience, requires largely the same skills.
So yes, your job is not what it was before, but with career growth it typically was not anyway.
I have a similar experience. I seldom use it to test to see its current state, and it generally (85% of the time) gives wrong answers. Then I discuss this with a couple of friends:
Me: I tried $AI recently, I asked $question, it hallucinated.
Them: But it sucks at that.
Me: Then what's good at? It's useful if it helps me out of a ditch.
Them: It depends on the domain...
These guys are not evangelists or anything, but colleagues who want to reduce their workloads. If it can't help with what I need, then how can it help me at all?
At the end of the day, I don't plan to use this at daily capacity, but with all the resources poured into this, it's still underwhelming.
A friend of mine has copilot integrated with his storage appliance that all the business docs are hosted on for his firm. He says it's amazing.
My company uses Sharepoint, and can digest all of the documents I have access to on that, one drive, teams, outlooks, etc. across my tenant. Most of the time, it's pretty useless.
There must be some reason for these two disparate experiences. It's the same product offering. I couldn't tell you.
Reminds me of a bounty I received recently. Someone essentially exposed a bedrock agent that had access to the companies internal documents to the internet unauthenticated. They actually had the reports and notes for other bug bounties that had been reported to them as well.
Tell claude what you do and ask it where it can be the most helpful. It is true that the tool has to be learned, and won't help everywhere. If you are doing web dev just to make a tool, it is purely magical. I've found it to be mostly useless in making geed helm charts.
I generally use them for researching things which I was unable to find anywhere else. For example, for Gemini I have two extreme examples:
I asked for a concept in Tango music, with a long prompt explaining what I'm looking for. It brought me back a single, Spanish YouTube Video explaining it perfectly alongside its slightly wrong summary, but the video was spot on, and I got what I needed.
Then I asked for something else about a musical instrument, again with a very detailed prompt, and it gave me a very confident answer suggesting that mine is broken and needs to be serviced. After an e-mail to the maker of the said instrument, giving the same model number (and providing a serial) and asking the same question, I got a reply saying that it's supposed to that and it's perfectly fine, it turned out that Gemini hallucinated pretty wildly.
For programming I don't use AI at all. I have a habit of reading library references and writing code directly by RTFM'ing the official docs of what I'm working with. It provides more depth, and I do nail the correct usage in less time.
The opposite happened to me. I asked Gemini about a type of Vietnamese dance called "nhảy sạp" and it returned a good sounding summary along with a video it claimed to explain the dance and how it worked. The video was from the Knowledge Academy and titled, "What is SAP?"
Funny, I was supposed to be the expert in my company, but I was run over by the demo folks, while I was uselessly preaching about evaluation, safeguards, guardrails, observability.
For mine it’s worse because we have new leadership who believes in it to a far larger extent than it can deliver. Now a massive amount of our workforce is building up proofs of concepts and spitting out tons of effectively useless output to look good because of how strongly they’ve signaled it’s good for careers here to fully embrace it. It’s a massive mess and there’s nobody to clean it up, and the voices advocating for rigor or good engineering practices are being sidelined.
It’s full out mania. As someone raised in and who escaped a cult, I am having to use every tool in my very large toolbox to stay sane while I wait for this to pass and die down or make my move towards a place that still cares whether their product works.
If the majority of engineers decide to rot their brains and abandon best practices, the industry will eventually implode. Stay true to your beliefs and use the bare minimum of AI to keep your job.
We’re in what I would call the “dark ages” of tech. There will be a new renaissance led by those who used this as an opportunity to build skills and tools that are genuinely useful and ingenious.
If you keep a long-term horizon this is the perfect opportunity to work on a solo project in stealth mode. Or build professional connections with others who see things the way you do.
When people talk about one’s salary being an imperative to them understanding something, they are talking about exactly you. “This’ll all wash over and we’ll be back to the good old days that I’m used to” has never happened. Ever.
At my company everyone’s salary and career ladder are determined by exactly how much they dive into AI and show enthusiasm for it, regardless of whether they’re using it for something useful or they’re just competing for how much money they can burn
Well actually it did happen. Greco-Roman intellectual tradition was lost when Rome collapsed and institutions of knowledge with it. Islamic scholars preserved much of this knowledge during the dark ages but in the western world Christian religious dogma reigned supreme.
During the renaissance western thinkers pieced together lost information and we got the scientific revolution.
Kind of wild that you completely ignored the example I gave of exactly this happening in my original comment.
And speaking of people whose salary dictates their understanding of something, let’s talk about Sam Altman and the rest of SV currently spinning a fairytale about AI which just so happens to justify astronomical valuations for their companies.
AI isn't going away, but leadership expectations to (say) increase "efficiency" by 50% in the next 6 months through "AI" will. Eventually. After lots of fudging of numbers and general reluctance to admit that the Emperor's clothes are looking awfully translucent.
If LOC and tokenmaxxing is the future, nobody will have a job.
I use AI all day every day, I’m not a luddite, I’m someone who has seen people take the same shitty shortcuts to working systems they are now. They’re wasting tons of money and smarter competitors who can actually think clearly about the benefits and costs are gonna eat their lunch.
Early stages of any major disruptive technology will have hype due to get-rich-quick folks. Dot-com boom & bust of 2000 is similar. But the underlying technology (internet) defined our lives forever.
I don't know why people are comparing the Day-1 of one technology with the Day-1000 of another. Yes, AI is useless in many fields - NOW. But you can't imagine doing any work without in a couple years.
Like the kids used to ask - 'How did they build Google without Google?'
Now their kids will ask - 'How did they build chatGPT without chatGPT'?
ChatGPT has been around for 4 years at this point. Not very long, but I’ve heard of the ‘imagine what it’ll do in one year’ spiel quite a few times by now.
2 things - it’s not day 1 for AI, and it’s also not dot-com (which dropped the nasdaq 80% btw). It’s the entire American economy right now. When it can’t deliver anything approaching its hype, just like all the data centers that can’t deliver on power, the profit margins that can’t deliver, and the promises of massive 500% revenue increases this fiscal year… sorry, I was raised in a cult and know what the fuck I’m seeing, sadly among a lot of otherwise intelligent people here.
I expect I’ll be using LLMs now and in the future, but the public is far more right about the companies and the people running them than the tech “insiders” here.
You replace half of a team with AI. Salary cost go immediately down, but team output can keep up for some time. You don't see the technical debt, the security issues and the prompt injection which will result in wrong invoices being sent. In six months suddenly there will be a big problem, but this quarter a lot of shareholders are happy about the cost-cutting. You may even be promoted by the time shit hits the fan, and it won't even be your problem anymore.
On the other hand there probably also is a general correction in the market after the covid hiring spree.
The reality is most of them are so divorced from reality that they think they are infallible and AI will pick up the slack because they want it to be true.
And specifically, their expectations as to what will positively impact the stock price. Shareholder value this quarter is more important than keeping the company afloat next quarter.
AI could be a huge net benefit, and justify large layoffs.
AI could be a huge short-term benefit, justify layoffs now, so long as you (the exec doing the laying off) don't have to worry about the long term
AI could have middling net benefit, but be a great excuse to justify layoffs now. In this scenario, the people laid off and those that remain bear the cost (one, losing their jobs; those that remain, burning out with the extra workload)
etc etc, many scenarios to consider...
It can be both if for the majority of layoffs, AI is just a scapegoat to act as cover for cuts made for financial reasons or offshoring and not the actual cause.
From what I've seen many efforts to replace roles such as customer service with AI are being rolled back or downscaled due to intolerably high error rates and general incapability. While these segments won't come out unscathed I don't think the actual impact will end up being as severe as feared.
You're apparently assuming that AI related layoffs are rational, based on those making the decisions having good information about what their own organizations are achieving with AI.
I think this is far from the truth. In many companies AI has become a religion, not a new technology to be evaluated and judged. Employees are told to use AI, and report how much they are using, and all understand the consequences of giving the wrong answer. The CEO hears the tales of rampant AI use and productivity that he is demanding to hear, then pats himself on the back and initiates another layoff. Meanwhile in the trenches little if anything has actually changed.
can happen at the same time when businesses speculatively fire workers to replace with AI. The lack of results might bite them in the ass and the bubble might pop. Or not, but they are going long on their AI position
I totally understand where you are coming from and my personal take is LLMs are to "stuff" as a drill driver is to a screw driver. They are a tool, just a tool. ... bear with ...
I over floored several rooms in my house (UK, '20s build) with plywood before laying insulation, heating mats and laminate floor boards for the final finish. I don't have a staple gun so I screwed the boards down at roughly 600mm c/c across the floorboards and 300mm along them.
What the blazes has that got to do with LLMs?
Well, I used a nearly inappropriate method for a job and blasted through it nearly as fast as the best method! If I had used a manual screwdriver I would have been at it nearly forever and ended up with a very limp wrist. I do own an old school ratchet screwdriver and that would have speeded things up but still been slow. I did use yellow passivated screws with sharp threads and a notch to initiate biting into the wood - rather more expensive than a staple or a nail.
So I burned through my tokens (screws instead of nails/staples) faster than if I had used a pneumatic nail/staple gun.
Anyway. LLMs are tools. They can be good tools in the right hands or rip your fingers off in the wrong hands.
Running with this analogy, the two sides of the AI argument are the people who think they can fire their plumber and electrician now that they have a drill driver, and the people who know it doesn't work that way...
Quite. My larger drill driver will wrench your wrist unless you know how to set the speed/mode/etc correctly and know how to brace yourself correctly.
At the moment, I think that a LLM needs skilled hands too. Have a casual chat - that's fine but for work ... be aware.
I recently dumped a wikimedia (our knowledge base is a wiki) formatted table into a LLM (on prem) and asked it to sort the list on the first column. It lost a few rows for some reason. No problem - I know how my tools work but it was a bit odd!
Your statement is a bit contradictory. That is, the article about "the growing disconnect between AI insiders and everyone else" pretty clearly states that "everyone else" is scared about job losses and the extreme inequality they see advanced AI causing. This is in line with your second to last sentence.
But the first part of your comment is basically saying "AI insiders think the tech is super awesome and powerful, while other engineers think it doesn't stand up to the hype." Well, if the AI is indeed not as good a tech as its boosters are saying, well, this would be great news for everyone scared about job losses and widening inequality if AI turned out to be a nothing burger.
No and it has been said already elsewhere in this thread: decision makers are not entirely rational, they might fire entire departments even if the AI revolution isn’t here quite yet
That's likely because it takes an entirely different approach to make it work. Augmenting your existing flow with "sophisticated auto complete" isn't as interesting and isn't actually using the tools how they were designed to be used.
I'm not going to pass judgement either way; we'll see how it all shakes out.
I just know for me, personally, I love computers and making them do what I want and in the AI era I am somehow using them even more and doing even more.
Smart guy phoning it in now - realized a few weeks ago that he “notices” something interesting to share, but is really paraphrasing a recently released paper that found it - without giving paper credit.
Wasn't Karpathy the guy who used to work for tesla and that tried to convince everyone that you only need cameras for self-driving and that by 2025 there wouldn't be anymore cars without self-driving capabilities to sell?
Underwhelmed is the absolute correct word to use here.
Absolutely everyone raves about this but other than a few basic computer related tasks I’ve not seen compelling use cases that justify the billions being lit on fire trying to pursue it.
My cynical take is the crypto bro’s needed something to do with their useless GPU’s after the crash and found the perfect answer in LLM’s.
It’s primarily about confidence and motivations. People with high confidence at what they do are supremely unmotivated to use something like AI to solve problems they don’t have.
People with low confidence will be super excited for AI because it solves problems they weren’t even thinking about.
Executives that don’t write code are super excited about AI because hopefully it means they can continue to high low confidence people, which are plentiful and cost less.
I am sitting on the sidelines watching in disbelief. I don’t use AI and don’t plan to. I used to write JavaScript for a living and still get JavaScript job alerts from a lot of job boards. The compensation for JavaScript work is starting to shoot through the roof as employers are moving away from garbage like React and Angular. The recent jobs are becoming fewer and are more reliant upon people with tons of experience that can actually program. Clearly AI is not replacing positions for higher talent with greater than 8-12 years experience.
"I refuse to pick up the magic hammer that nails things in by me just thinking about it while holding it in my hand; nosiree, give me that old fashioned hammer so I can sit here and nail some nails into a 2x4 while the guy using the other tool is building whole slop neighborhoods. Ha, that guy is so dumb and I'm too cool because I won't ever use that hammer."
I don't get it. Proudly saying you don't plan to use better tools is not some 'cool' look or the brag you think it is. You're just making yourself less valuable and being ignorant on purpose.
This website is literally unrecognizable from 10 years ago. I don't even know where to go now. /r/accelerate seems to be the only place with people who aren't blinded by some kind of emotional bias, plain stubbornness, or straight up stupidity.
Totally right! The folks who were very recently telling us we were all going to be trading NFTs in the metaverse are the clear eyed optimists not motivated by anything but rational consideration for the truth.
It seems like you get personally offended by people using their critical reasoning abilities.
I know a folk who did a PhD in the area, and work at one of those frontier labs as a researcher, and privately he is as sceptical as the most "stubborn" HN denizen you mention.
Unbounded enthusiasm for AI without any reservations is something that can only be born out of minds utterly deprived of imagination and creativity.
When you use the term "luddite" in the way you do, you reveal that you aren't aware of who the Luddites actually were. Luddites weren't anti-technology; many of them were experts at using advanced machinery. What they opposed was the poor quality output of automated factories and the use of machinery to circumvent apprenticeships and decent wages.
As for your promise of a great leap at some vague point in the future, that's such a widely-mocked AI industry trope at this point that it's a little embarrassing you went there.
The only thing that will be embarrassing is how badly your comments, and those like yours will age.
I don't know what happened to this place, but it went from actual young people sharing information on the newest things in tech, tech philosophy, interesting stuff; to now old men yelling at the clouds about the new tech.
As a senior dev who has been using these tools to their fullest effectiveness in production environments, until AI can reduce the entropy of a codebase while still adding capability I will continue to be underwhelmed.
Even SOTA models when used in agents in simple NLP tasks such as text classification still fail more times than acceptable when evaluated against a realistic evaluation dataset with sufficient example variety and with some adversarial prompts included.
Improving such uses cases is mostly an artisanal endeavor, sometimes a few-shot prompt improves things, sometimes it improves things at the expense of kind of overfitting it, sometimes structured reasoning works, sometimes it doesn't, or sometimes it works and then the latency and token explodes, etc etc....
And yet a lot of teams don't see this problem because they don't care much about evaluations, and will only find this issues in production a few months after deployment.
"Yes, it sucks now, but believe me it won't be for long" spiel has been hyped for several years now.
Oh, don't get me wrong, these tools are amazing. But just yesterday a very small refactoring resulted in 480 fully duplicated lines in a 5000-line codebase (on top of extremely bad DB access patterns) despite all the best shamanic rituals this world has to offer [1].
So yeah, senior engineers especially use these tools daily, and keep being completely honest about their issues and shortcomings. Unlike the hype and scam artists.
[1] Oh, sorry. I meant to say skills, context engineering and management, memory, prompt engineering.
" But just yesterday a very small refactoring resulted in 480 fully duplicated lines in a 5000-line codebase (on top of extremely bad DB access patterns) despite all the best shamanic rituals this world has to offer."
And even staying within the comfort of AI enthusiasm: Google wasn't exactly leading in this race. If you have this much confidence in what those presenters and engineers at Google told you, you now have some opportunities to make a lot of money.
Anyone here who is currently 'underwhelmed'; please get through all 5 levels here and then say the same thing.
This is just the beginning. I seriously can't believe this place turned into neo-boomerism ideology on tech. I honestly don't get it, just makes me think everyone here talking about being seniors and architecture and blah blah; don't actually know shit, and aren't actually good at what they do.
That is the completed instructions for the fifth level, I leave it as an exercise to the reader to actually read more and find the rest of the steps on their own.
I spent some time chatting with Google engineer who put this together, Ayo Adedeji, at UCLA's SAIRS conference.
You asked about Google and what impressed me so much, going through this exercise, while not exactly helpful for me and my work directly (I'm doing similar things but completely in the Azure ecosystem), it is definitely a great display of how agents are more than just an 'LLM' that everyone here seems to think is equivalent to AI.
It's seriously the opposite feeling of imposter syndrome at this point, I'm in my 30's, a senior data engineer myself at a F200 company; I can't believe so many of my peers are so behind and ignorant of what is going on, confident enough to makes publicly lasting comments about how 'unreliable', 'bad', 'slop'; 'AI will never this or that'.
"AI-insiders" are trying to market their tools to you. See Anthropic's continuous lithany of "all programmers will be replaced in 6 months" while they struggle to make their TUI API wrapper consume less than 2-4 GB of RAM (they brought it down from 68 GB[1]), or have a decent uptime.
> When did Hacker news start becoming a luddite, bad takes everywhere I look, feels like everyone is '50 year old burnt out guy' that has no idea what is going on vibe?
Much to the opposite, I think healthy skepticism is a sign of maturity. The overeager embracing of hype cycles is extremely cringe.
> I just got back from a SAIRS conference at UCLA and talked directly with some of the presenters and engineers at Google.
Cringe, as I was saying.
Conferences are just mutual fart smelling, swagger, and expensing trips on company momey. I am not against it, but treating your participation in some conference as a sign of the future is very silly.
Every conference I participated always overhyped every current bullshit.
There is emotional bias and stubbornness in nearly all of your responses in this thread, the very same traits you lambasted HN broadly for in another comment. Rather than calling people, "stupid and wrong", why don't you make your case?
If you don't want to be bothered to argue your points, and this place truly chaps your ass to the degree it does, why even waste your time commenting at all when, according to you, there's a more fun place with bigger brains that-a-way, as far as you're concerned? points
I mean, it takes more energy and effort to be angry and annoyed than to just move on and leave us luddites in the dust.
It is pretty emotional seeing a place with people you respected and learned from for so long, where you could rely on for the place to find the newest and most interesting things happening in tech, where people in the know discussed the technical aspects; to now neo-luddites everywhere bashing shit they don't understand, ON A FUCKING TECH FORUM; like THE tech forum.
I feel like I'm living in some kind of bizzarro world now when I read anything AI related on HN. It's insane.
This place actually hates all technology after the invention of Lisp. And there's the common online incentive to dunk on things that also exists here. Hence the infamous Dropbox comment and others.
But it's also been anti-Javascript, anti-cloud, anti-social-media, anti-crypto, anti-React, and so on.
I would therefore not in a million years expect it to be pro-LLM, and this is so obvious to me that I'm a bit suspicious of your motives for acting confused about it, as if it was ever any different.
> But it's also been anti-Javascript, anti-cloud, anti-social-media, anti-crypto, anti-React, and so on.
It was never any of these things, and you're misremembering if you think it was. There's never been a mono-opinion held by some all-encompassing hivemind.
It's literally unbearable now. I don't know how the place that once used to be exciting and deep in the know; is now old-man-yells-at-clouds ignorant of what is happening. It's actually really sad. /g/ and /r/accelerate seem like the last bastions of actual intelligent people discussing these things.
Shocking that people who are in data science/ML are excited about data science/ML, and people in jobs not interested in that area are not interested in it.
It's like a programmer being surprised that a worker in $random_job wants to keep doing their job, and not learn how to be a programmer instead.
There's this weird unspoken assumption in a lot of these HN posts that any layoffs or lack of hiring is due to companies shirking on providing the cushy jobs they owe software engineers. Actually, they hire engineers to get stuff done. If it's true that AI is just a big 'ol scam and it doesn't even work, then I guess we'll see the companies that insist on nothing but the finest artisanily hand-typed organic code rocket to the top of the charts on app downloads, sales, revenue, and market cap.
This is basically how most engineers talk to their managers, politely implying - "can you see how this decision has a short term payoff but a long term consequence?"
Before LLMs I only worked at one place that "only hired seniors and above" and now its the most commonplace thing in the world.
Nobody owes me anything, I already have the skills I need, where will the juniors come from that these companies are going to need in a few years? We don't need extremist stances in either camp, we need balance.
> Nobody owes me anything, I already have the skills I need, where will the juniors come from that these companies are going to need in a few years? We don't need extremist stances in either camp, we need balance.
Seems a bit like asking where the bread will come from, if no-one is forced to bake it.
> If it's true that AI is just a big 'ol scam and it doesn't even work, then I guess we'll see the companies that insist on nothing but the finest artisanily hand-typed organic code rocket to the top of the charts on app downloads, sales, revenue, and market cap.
AI works fine to get a vibe coded BS version of the app. No doubt there. But eventually, especially once scale hits your app, it will devolve into an unholy mess of low performance and (extremely) high cost if you do not have a bunch of senior talent able and willing to clean up after the AI mess.
Unfortunately, our capitalist economy only rewards the metrics you mentioned... but by the time the house of cards collapses, either from financial issues stemming from the above or because the tech debt explodes, it's too late to turn the ship around.
And I've even heard rumors of software engineers that don't even write apps or write code that runs on the internet at all. They say some of them don't even use javascript or python! The horror.
I get it, but as a "AI expert and senior leader" myself in my 1,000 people organization (in relative terms), the disconnect I have is:
A lot of what non-believers say matches "enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises". They would then say they need 2 weeks to work on a specific project, the good old way, maybe with some light AI use along the way.
But then I'm like "hmm actually let me try this real quick" and I prompt Claude for 3 minutes, and 30 minutes later it has one-shotted the whole "two weeks project". It then gets reviewed and merged by the "non-believers". This happens repeatedly.
So overall, I think the lack of enthusiasm is largely a skill issue. Not having the skill is fine, but not being willing to learn the skill is the real issue.
I see things changing, as "non-believers" eventually start to realize that they need to evolve or be toast. But it's slower than I imagined.
I am a strong believer and selected as power user because of AI usage metrics, but I also see perverse incentives -- a colleague was desperately searching for me on the Claude token usage leaderboard (I was part of a different group he did not have access to) -- it was clear he was actively trying to climb that leaderboard.
Meanehile our average PR loc balooned to ~2000loc -- generated with Claude, reviewed with copilot but colleagues also review it with Claude because it gives valid nitpicks that bump up your github stats, while missing glaring functional/architectural issues, overenginerring issues.
No way this doesn't blow up down the road with the massive bloat we're creating while getting high on the "good progress" we're making.
Yes, your 3 minutes prompt got merged.
So was my friends(ex-programmer now manager) non-ai generated PR that a technical TL got stuckstuck on for 2 weeks.
Different perspective? Survivor bias? High authority?
Blame your engineering culture not AI if metrics such as Github stats, number of nitpick reviews and token usage is what is used to judge one's performance.
In a sane engineering culture, actual customer-visible impact is what is measured, and AI is just a tool to improve that metric, but to improve it massively.
> But then I'm like "hmm actually let me try this real quick" and I prompt Claude for 3 minutes, and 30 minutes later it has one-shotted the whole "two weeks project". It then gets reviewed and merged by the "non-believers". This happens repeatedly.
this is a nice anecdote but i think the real issue is the forcing and kpi-nization of llms top-down for nearly everything
there are still code-quality issues, prompting issues for long-running tasks, some things are just faster and more deterministic with normal code generators or just find-and-replace etc
people are annoyed at the force-feeding of llms/ai into everything even when its not needed
somethings can be one-shotted and some things cant, and that is fine and perfectly normal but execs don't like that because its not the new hotness
> somethings can be one-shotted and some things cant
True but my point is that people vastly underestimate what is one-shottable.
In my experience, 80% of the times an average "non-believer" SW engineer with 7 years experience says something is not one-shottable, I, with my 15 years of experience, think it is fact one-shottable. And 20% of the time, I do verify that by one-shotting it on my free time.
I believe that this has happened in some cases but am very skeptical that it is widespread and generalizeable at this point. My own experience is that software engineers thinking they can easily solve a problem in a domain they know nothing about overrate their ability to do so ~99% of the time.
Unsure of this really tracks tho. How are you evaluating for the bias that they’re not merging it because you’re “their leader of 1000 people org” and not because you’re actually an engineer deep in the trenches that knows the second or third order effects of slop?
This is a genuine question btw, I see plenty of instances of this in my own org.
1. I am also on the receiving end of this. My boss often codes and vibecodes, and no one feels like they have to merge their stuff. We only merge it if it meets the high quality standard we have. And there is no drama for blocking a PR in our culture.
2. I am fairly deep in the trenches myself and I know when my PRs are high quality and when they are not. And that does not correlate with use of AI in my experience.
Well "non-believers" don't see any gain from being faster, right? That'll just set expectations of "do a lot more for same". Fear of being "toast" will get you the loyalty you'd expect from fear.
the best way I found to deal with non-believers is to have claude run code reviews on their own work. I’ll point it to an older commit and get like 3-page markdown file :) works really, really well.
on one-shotting 3 minute prompt in 30 minutes though, software is a living organism and early gains can (and often result) in later pains. I do not use this type of argument as it relates to AI as the follow-up as the organism spreads its wings to production seldom makes its way to HN (if this 30 minute one-shot results in a huge security breach I doubt you would be back here with a follow-up, you will quietly handle it…)
You can get it to generate a 3-page markdown file for any random code, or its own code it just generated. If requested it will produce a seemingly plausible looking review with recommendations and possible issues.
How impressed someone get from that will depend on the recipient.
I've been on this ride about three or four times over decades. Every new major wave of technology takes a surprisingly long time to be adopted, despite advantages that seem obvious to the evangelists.
I had the exact same experience with, for example, rolling out fully virtualized infrastructure (VMware ESXi) when that was a new concept.
The resistance was just incredible!
"That's not secure!" was the most common push-back, despite all evidence being that VM-level isolation combined with VLANs was much better isolation than huge consolidated servers running dozens of apps.
"It's slower!" was another common complaint, pointing at the 20% overheads that were the norm at the time (before CPU hardware offload features such as nested page tables). Sure, sure, in benchmarks, but in practice putting a small VM on a big host meant that it inherited the fast network and fibre adapters and hence could burst far above the performance you'd get from a low end "pizza box" with a pair of mechanical drives in a RAID10.
I see the same kind of naive, uninformed push-back against AI. And that's from people that are at least aware of it. I regularly talk to developers that have never even heard of tools like Codex, Gemini CLI, or whatever! This just hasn't percolated through the wider industry to the level that it has in Silicon Valley.
Speaking of security, the scenarios are oddly similar. Sure, prompt injection is a thing, but modern LLMs are vastly "more secure" in a certain sense than traditional solutions.
Consider Data Loss Prevention (DLP) policy engines. Most use nothing more than simple regular expression patterns looking for things like credit card numbers, social security numbers, etc... Similarly, there are policy engines that look for swearwords, internal project code names being sent to third-parties, etc...
All of those are trivially bypassed even by accident! Simply screenshot a spreadsheet and attach the PNG. Swear at the customer in a language other than English. Put spaces in between the characters in each s w e a r word. Whatever.
None of those tricks work against a modern AI. Even if you very carefully phrase a hurtful statement while avoiding the banned word list, the AI will know that's hurtful and flag it. Even if you use an obscure language. Even if you embed it into a meme picture. It doesn't matter, it'll flag it!
This is a true step change in capability.
It'll take a while for people to be dragged into the future, kicking and screaming the whole way there.
You're not forced to use only an LLM for data loss prevention! You can combine it with regex. You can also feed the output of the regex matches to the LLM as extra "context".
Similarly, I was just flipping through the SQL Server 2025 docs on vector indexes. One of their demos was a "hybrid" search that combined exact text match with semantic vector embedding proximity match.
I think people are really underestimating how poorly today's tweens think of AI. "That looks like chatgpt" is an insult. Kids avoid things because they heard somewhere that AI might have been involved and have a sense that means it is bad or immoral or illegal or cheating in some nebulous way, and it's reinforced by their teachers telling them that using AI for homework is cheating.
I think this next generation is going to come up fundamentally believing that AI is generally a bad thing, and it's going to surprise older people.
> I think people are really underestimating how poorly today's tweens think of AI.
I think you might be really underestimating how poorly today's adults think of AI. Whenever I see a blog post that starts with an obvious AI hero image, when it has the "It's not X, it's Y" framing, when it has anything that smells like AI, I immediately discount what that person is saying as I assume they are unable to think for themselves.
> Whenever I see a blog post that starts with an obvious AI hero image, when it has the "It's not X, it's Y" framing, when it has anything that smells like AI
yes, n=1 (ok n=2 i guess) but noticing that is an immediate back button press for me but its getting harder and harder to avoid as search results become inundated with this stuff
Far as AI gen images they still make me nauseous due to uncanny-valley stuff. Still see a lot of non-standard number of fingers; so much content elicits a weird double-take and gut dropping feel.
The kids are smarter than most people give them credit for. They see their future being destroyed in real time, and AI is only accelerating it and largely being celebrated/promoted/used by the same people currently destroying their future. To them, there are few benefits beyond being able to cheat on their homework, and an enormous amount of downsides.
I think it's only a matter of time before we see some more serious, organized opposition to AI (and perhaps even the internet and other technologies) by these young people.
For some kids, they see their parents get themselves in a mountain of college debt, work for 50 years and struggle to afford necessities, and decide maybe trying to be a streamer/tiktokker is worth a gamble and could set them up for life instead.
Makes sense. I think it’s hard to argue against someone that uses the platform and others as an example of entrepreneurial pursuits. Not “all social media is bad” when to use different lens types.
You might be surprised by how many of them are
aware of the harms of social media, while acknowledging that it’s impossible not to engage with it. It’s not their fault we built the toxic slot machine world for them that we have. And besides, I’m pretty sure my boomer parents spend about as much time scrolling slop on Facebook as kids do on TikTok.
My partner was working at an event and a co-worker had prepared a poster using AI - a teenage kid at the event pointed out how the poster "has AI smudges".
You know how your parents are weirdly shitty at recognizing obvious photoshops? Kids are constantly surprised that we adults can't recognize obvious AI images.
In the 80s, 90s and 00s that's what they thought about coding.
Then when the salaries got good every pretended to have always been a nerd and really into everything nerd. With the result that they kicked all the nerds out.
If you consider what assemblers and compilers do programming, sure.
But men didn't kick them out, technology did. Von Numan famously forbid the Eniac from ever being used for assembly when you had a perfectly cheap secretary pool to do the assembly by hand.
Low creativity repetitive work requiring great attention to detail is what the early female programmers did and what was automated first.
If we ever get deterministic AI the same will happen up the chain. I'm not holding my breath for the current generation of models, or the upcoming ones I've seen in papers.
That's underselling their role. One of those ladies doing the assembling for Von Numan was Grace Hopper, who then used that expertise to develop the first compilers.
I have noticed similar sentiments among some teenagers. It's not a universal sentiment but those who hate AIs really hate them with a passion.
In the meanwhile there is a rising tide of feel good AI content targeted at old people on Facebook. My mother has been sharing with me many "funny videos" that are very obviously AI generated. She evidently does not care, and according what I hear from others she is far from the only old person who gets sucked into "slop." I hesitate to use this word but it captures the feeling too well for me to pass it up.
I don't have data but I sense there is an inverse correlation between age and disgust towards AI generated content.
I can't recall a piece of technology in which the age distribution of the people embracing it was similar to what we're seeing with AI. In the past, this stuff has almost always been picked up by the young first and foremost, but the embrace of AI seems mostly to be coming from elder millennials through boomers (I'll admit this is anecdotal, so it's possible this is an observation of my own bubble).
When I get a message from a co-worker that seems to have been written by an LLM, I am incredibly turned off and instantly think less of the person. It can be easy to spot: key words bolded, acknowledging that I'm right, longer and with a different tone than their typical messages, with neat bullet points.
It feels a little disrespectful. It feels a little pointless (why am I bothering talking to you if I can get the same result from the AI). I have no idea whether you've given the problem any actual thought, or if you're just copy-pasting an answer. I have no idea if you actually believe what you're telling me (or if you've even read it or understand it).
pr comments from a human that is generated by ai has got me feeling the same... like why this person even here? its totally disrespectful; i want a person to interact with not a machine with a meatsuit.
[X] Tweets and instagram comments presented as "what society is thinking"
[X] Ties Luigi Mangione and the California warehouse fire to Gen Z discontent (about AI?).
[X] Statistics being used to support the title with little to no regards to continuity: "those respondents who said that AI makes them “nervous” grew from 50% to 52% during the same period" => percentage was 52% in 2023, 50% in 2024 and 52% in 2025, seems mostly flat to me, with the real jump being in 2022-2023 with 39%.
I didn't say it was devoid of substance, the poll part is actually interesting (and worth discussing!) it's just that it actually appears *after* the sloppy tweets and "someone pretended to shoot at Sam Altman's house" screenshot as if that was somehow relevant.
Good catch on the 52→50→52 "growth." The actual Stanford report has more interesting data than TechCrunch pulled out - the gap between industry practitioners and academic researchers on safety concerns is arguably the more striking finding, but that doesn't make as good a headline as "public vs elites."
I was talking recently to someone who teaches AI-adjacent courses at a US university (not in a computer science department) and they said that enrollment in their class is lower than expected, which they think is likely due to the severity of the AI backlash among students on campus.
AI applications that would help normal people in a significant way are pretty lacking, so I'm not surprised. So much conversation about AI products is cycles of "this tech will change everything" without material backup outside of coding agents.
How much of the workforce is organising and other information dissemination or transformation?
I'm more on the skeptical side than the evangelist, but I can see how large parts of such things could theoretically be shifted away from humans. Planning someone's agenda, preparing relevant documents, arranging and coordinating things, translations (speech or text), narration, grammar checking.... AI is a whole lot of hot air when considering the "second 80%" of the work involved in any of these tasks, but that's still a lot of jobs that may make little sense to start studying these years, until you have some idea how the field will develop or if there's a giant surplus of, say, French-native Spanish language experts. At least for those for whom a given study is not a real passion and they might as well choose something else
> Planning someone's agenda, preparing relevant documents, arranging and coordinating things, translations (speech or text), narration, grammar checking
the issue is, these things "lie" subtly and not so subtly (they make up issues, rename agendas, forget questions and change meanings all the time) and for me that is a deal-breaker for a business tool that i need to rely on
If it's fundamentals of ML, I'm surprised to hear that.
If it's "how to use ChatGPT for creative writing" then I'm not surprised. Why would someone take a class from a teacher who has had only just as much experience with these tools as their students have?
I actually feel the opposite. I don't think people from outside CS will have that much interest into the very basics of AI because there is usually a huge gap between "this is how back propagation works" to any AI model that is remotely useful. And if you are interested in the fundamentals themselves you would probably be majoring CS anyway.
A course on how to use existing AI tools will be pointless, but if there is anything I know about college students is they love taking easy courses for easy credits.
Agree… OP said “not CS” so doesn’t seem surprising. If we’re going by anecdotes, AI classes in the CS dept have risen in popularity in the past few years.
Students don't enroll in a class for various reasons, but most likely because it's useless (or at least people perceive it as useless). At top universities, even notoriously challenging courses have a decent class size.
The biggest visible AI impact, for me, is vibe coding. For that, I am convinced that the hype will collapse and will throw back the most enthusiast companies by years.
On the downside we have untrustworthy, doom or glory praising CEOs, companies slashing jobs, AI companies going into military business, hacks, spam, psychosis, general anxiety and uncertainty.
Even if you don't believe the hype and know that AI is just statistics, there is nothing to be positive about. I can't blame anyone to dismiss it. Maybe it's even the best thing that can happen, big tech won't take a sane route without civic supervision and calibration.
a person can have full faith in the potential value of ai science and simultaneously have zero faith in the current crop of business stewards of that science.
no one is questioning the underlying model mathematics, they are questioning deceptive & reckless stewards.
I think most people oustide the area do not care and do not know about who's on top, and the negative perception is much more related to how the tech will enable users to misuse it (replacing phone lines/support, AI art, things losing quality, etc) than about the companies themselves.
Yes I believe we're quickly approaching crypto territory, where distributed ledgers certainly have their valuable use cases, but the overwhelming _mindshare_ is active scamming and/or monkey jpegs.
There needs to be a concerted focus on real value for end users and less "yeah the terminator will take your job and raise your kids in your absence"
I think there is a lot of truth to what you say, particularly when it comes to caring rather than parroting; however as part of my personal and civil life I interact with a lot of non-tech people in non-tech capacities, and a surprising number of them raise unprompted complaints about people like Sam Altman and Elon Musk. Musk I understand everyone knowing about; between Tesla, SpaceX, the Thai boys football team, a very public inclination to raise his hand, and a position in the US government he is meaningfully famous. However how Sam Altman has managed to get his name out there in the wrong way very quickly to a bunch of Brits I don't know.
AI continues to be a stupidly vague term, and the example I keep going back to is present in this article
Meaningful advances in medical diagnosis are not coming from chatbot companies. Some are coming from machine learning methods. Perhaps measuring public sentiment about such a vagary is not a very productive way to quantify anything
That said, I continue to also be frustrated with people using the abstract concept of a new technology as a substitute for the institutions that use that technology to exert power in the world and what they do with that power, which is - as many in the comments already point out - is what the vast majority of people are actually mad about, and right to be
> Right now, as I'm writing this comment, AI = LLMs and image generation. That's it. It's as simple as that
I think agentic harnesses add a lot to LLMs, even if many are just simple loops. They are a separate thing from LLMs, are they not?
I get the feeling that even if we stopped shipping new models today, new far more useful products would be getting shipped for years, just with harness improvements. Or, am I way off base here?
I think it's not that difficult to see why a technology that will likely trigger widespread unemployment during a cost of living crisis, an arms race with China, along with all the alignment concerns, might not be hugely popular with the public.
Maybe I'd be a bit more optimistic if someone could explain a realistic economic scenario for how we're going to transition into our utopian abundant future without a depression or a revolution.
Pretty simple: The centaur of big-tech/government will pay people not to eat them. (i.e. UBI)
The incentives are, how you say, aligned.
The deeper issue I see is the psychological crisis for a species who believes it doesn't deserve to live if it isn't performing economically valuable activity, entering a world where it is unprofitable for it to be employed. (If I were the AI, I'd come up with some kind of fake jobs to keep the humans sane.)
Agreed, this article seems to be dancing around the point: WHY are the Gen Z hating AI? We have a political ruling class that is all too willing to throw everyone under the bus if they aren't living up to some expectation, and the political class is being driven by an economic ruling class that largely seems to have the same opinion.
Gen Z would likely have a very different opinion if their basic living necessities were available to them.
> a realistic economic scenario for how we're going to transition into our utopian abundant future
One aspect almost certainly has to be data centers being run as utilities. That forces transparency, resists monopolization and gives public commissions a say in e.g. expansion.
It is just obnoxious the gap between thought leaders and everyone else.
I was at a panel last week. The most pro-AI person was an account executive from a big fintech company.
EVERYONE else - a data scientist that works in AI, regulatory compliance, cybersec, and marketing, took the position of "hey this is great and will change things, but let's pump the brakes... a lot."
Random people cure cancer for their dog, every business can vibe code an app to make their operations more efficient, anyone can launch a business with 10% of the effort it used to take.
The AI companies are only capturing like 5% of the value produced with this tech right now.
It is worth pointing out that we got here despite all of the “alignment” research and safetyism surrounding the models. As it turns out, the models don’t wake up and start destroying things. We knew this all along, but every time a new article came along and anthropomorphized and exaggerated another experiment it fed the clickbait machine.
The fundamental alignment issue is aligning the companies themselves with society, not the models with the companies. Widespread unemployment is not aligned with society, but it is aligned with Anthropic and OpenAI if it makes them rich.
Therefore the only “harms” the companies will take seriously are those which also harm the company. For example reputational harms from enabling scams aren’t allowed.
Perhaps all of this isn’t fair, since companies actively subverted safety research for profitability. But then I would go back to my earlier point of over-indexing on unintended behaviors and under-indexing on intended ones.
I don't know how many times I've seen some Google AI summary or ChatGPT with references that, when I checked, did not say what what the AI summary said. If a high school student falsified references in a paper like this, they would get a bad or failing grade. This is bad, not acceptable, the teacher would say.
But we have been sold to use these constantly falsified AI summaries as the go-to source of "truth" by all levels of society. We're trading truth for an illusion of short-term gains. This will not have good consequences.
Always has been since the ZIRP era. The ‘make something people want’ phrase was coined by a famous Silicon Valley investor. I heard he runs a popular forum.
Well we can easily see that the "abundance" people are wrong(for example everyone can't have a penthouse apartment overlooking Central Park, no matter how capable the robots become).
An alternative possibility that inequality is about to explode between those who profit from AI/robotic labor and those displaced by it.
Agreed. As a kid it felt there was so much energy to make things better, to fight the system. So depressing growing up and seeing so many peers and idols becoming the same inward-looking grey old farts they used to mock.
There is certainly some logic behind the old joke about young people with no heart and old people with no brain. It's natural to become a bit more conservative as you age. Though I would clarify that I think it is natural to become more of a normal conservative; the current conservative party in the US is ... not.
I'm not seeing that. Trump support in 2024 was pretty strong across the board. The born-in-1960s edged out the other decades, but it was not by a wide margin (and I consider GenX more of a 1970s phenomenon than 1960s anyway).
If you want to pick a generation to complain about, look how hard the younger folks swung in favor of Trump in 2020 and then even more in 2024.
In case you're wondering who they mean by "AI experts", I checked the Pew poll:
> Note: “AI experts” refer to individuals whose work or research relates to AI. The AI experts surveyed are those who were authors or presenters at an AI-related conference in 2023 or 2024 and live in the U.S. Expert views are only representative of those who responded.
My wife has a very serious health issue, that has caused more suffering then words could describe. o1-preview was the first ai that actually proved useful. From there on, each improvement on ai caused an incremental improvement in her situation. Even recently we were able to pinpoint exactly what was causing her flare, and solve the situation the same day, just by prompting a claude opus conversation where i’ve shared all her health notes. But if i weren’t a data freak and haven’t been collecting data about her issues (what she does/takes and how she feels) for so long i dont think we would had been able to get this far. So i think ai appeals to people with problems that can be solved by finding patterns in data. People that say ai makes mistakes don’t understand that the power is in finding patterns, not in finding THE right answer. You need to prompt from that prespective
Been saying this for a bit but the things I’ve seen associated with AI seem to be the things that it’s pretty mid at. Coding, automated actions etc. I wholeheartedly believe adoption and perception would be better if the things it was amazing at were pushed more.
Take log review for example. Whether it’s admin or security LLMs are incredible at reading awfully formatted logs and even using those to pull meaning from other logs as well. Like turn an hour long log review into a 10 minute log review type thing.
They’ve gotta be feigning it right? I just don’t understand how you could be so out of touch with what happens when wealth becomes this concentrated. This isn’t the first go around at this.
I don't think the disconnect is very surprising to the "insiders".
Your Dario's and Sam's know exactly what they are doing. They know it's going to cause a lot of job displacement, even if the technology isn't perfect. They are trying to get the C-suite elite hyped up about it, and the hyperscalers are along for the ride as well. There's so much money to be made.
They could not care less about what joe schmoe on the street thinks about it.
This AI rollout has been fundamentally rushed and fucked from the very beginning and I think the people who are responsible for doing it this way have done more irreparable damage to society than any single group of humans in the entire history of the species, and I mean it.
It’s always only ever about how the new model is faster, better, smarter. Or how the tech will be bringing ruin to the job market and someone should probably do something about that some time soon. Zero efforts to create any sort of educational content - how it even works, how to vet its output, how to have an eye for confabulation, how to use it as thinking enhancement rather than replacement, to keep in mind that it’s trained to please and will literally generate anything to cause users to click the thumbs up button. Nope, it’s just “ModelGPClaude can make mistakes! Better be careful!”
And then everyone’s surprised when an utterly improvident handling of 4o kicks off the biggest concentrated wave of AI psychosis seen yet. Because, surprise! When you give people a model that’s trained to anthropomorphize itself, people who have no idea about any of this tech and have no access to education about any of it might believe it’s more than it is! Boy, who’d’ve thunk; isn’t the world complex?!
This was a symptom of this exact same disease. I have far less worry about the tech and far more worry about how the disconnected venture capital caste is inflicting it upon us.
> This AI rollout has been fundamentally rushed and fucked from the very beginning
fake it till you make it has been modus operandi for tech for almost as long as i've been alive... i feel like this is the apotheosis of this kind of thinking...
> Nope, it’s just “ModelGPClaude can make mistakes! Better be careful!”
"use at your own risk" and "no guarantees warranted or expressed" is basically in every single eula from tech as well... its not a new trend sadly...
The lack of federal permitting standards for AI data centers is really going to bite the industry in the ass. We also probably need something akin to the WARN Act for AI-related layoffs. (Possibly with multi-year benefits for large companies.)
Giant leaps in innovation almost always have a reaction like this.
It's new, people fear it. Sometimes justified, usually not.
People greatly feared the car because of the number of horse-related jobs it would displace.
President Benjamin Harrison and First Lady Caroline Harrison feared electricity so much they refused to operate light switches to avoid being shocked. They had staff turn lights on/off for them.
Looking back at these we might laugh.
We're largely in the same boat now.
It's possible AI will destroy us all, but judging from history, the irrational reactions to something new isn't exactly unprecedented.
Many innovations are also on the refuse pile of history. Indoor gas lighting[1] is one. People were quite justifiably skeptical of electricity, when its relatively short-lived predecessor frequently killed people in explosions, carbon monoxide poisoning, etc.
> when its relatively short-lived predecessor frequently killed people
If only it were this obvious when the polluted air isn't your home but the entire planet, killing not your grandma but taking a few healthy years of life from everyone simultaneously. Maybe people would feel like we need to reverse priorities rather than go full steam ahead on newly created energy demand and see about cleaning it up later
Every invention is touted as the next electricity, or the next internet (crypto scams anyone?)
Meanwhile not every invention is. Electricity and internet are electricity and internet, and very few inventions come even close to that. Meanwhile LLMs have had arguably a net negative effect on the world at large.
My own anecdotal experience is yes, there is a real visceral hatred of AI among Gen-Z. You have to look at it through a lens where they already feel like there's been a massive amount of intergenerational theft against them - particularly with the housing market putting owning a home out of reach, along with the evaporation of the concept of a stable career. Now they are going through education learning skills that they are incessantly hearing will have no purpose and there will not be jobs for them.
It's hard not to see that they have a point. If AI is so great and going to save so much money - how about starting by paying some of that forward? Suddenly when you ask the billionaires or AI tech elite to share any of the wealth they are so confident they will generate, everyone backs away fast and starts to behave like it is all a speculative venture. So which one is it?
In 2022 the world was open arms, welcoming AI advancements.
However, since 2022, OpenAI and all of its original founding researchers, had their dramatic fallout, and began screaming in public saying crazy people things like "the end is coming."
Why did they insist on force launching ChatGPT? Google at the time refused to launch their own version (it was their own research that gave birth to LLMs) based chat because they knew all of the negative outcomes and unreliability of it all was just a poor product experience.
Instead of launch quietly like DALL-E and keep it fun and experiemental, nope, they threw it up online and moved full-steam ahead.
"THE END IS COMING" Sam Altman said. "AI WILL TAKE YOUR JOBS WITHIN 5 YEARS" Dario said. "AGI IS ALMOST HERE" Elon Musk said.
The disconnect is because these specific men, making those specific bold crazy person claims, with zealous cult following employees (including many of us here in this forum), kept marching ahead. Not only that, no one asked the rest of the world if they even wanted this technology EVERYWHERE.
This technology could have been so cool if it were given the breathing room to find usecases for it. Natural Language programming has been tried for a half a century, and it finally arrives.
Yet, it's so tainted by all the crazy person speak, and doomsday messaging, it's also thrown out there in such a haphazard way that have burned so many bridges, this technology is truely toxic. The fact that Gen-A and Gen-Z now have to waste brain power speculating if something is AI generated, is such a waste, but here we are. Welcome to the shit storm that was entirely made by those men.
A silicon savior to finally free capital from the dependence on labor with all its pesky demands like sick leave or a living wage.
You can see this in the literal deification going on in VC circles. AGI is the capitalist version of the Second Coming, God coming down to earth to redeem them by finally solving the contradictions in their world view.
Unfortunately for them and fortunately for the rest of us, it's not all they hope it to be.
Regardless, I think we are going to see an acceleration of AI research.
I just wish my wife is more serious about camping and learning survival skills. I think Shit is going to hit the fan in the next 5-10 years but she thinks that’s crazy. Oh well maybe I am crazy.
Thinking about AI-induced(or perceived)-layoffs that triggers another depression which then triggers riots in the city, or something like a future war triggering oil going up crazily which in turn triggering the shortage of fertilizer and every other oil products, which further triggers China to put a stop on exporting some key chemical products, which then triggers more shortages and then what not, I think it’s a perfect sane possibility to live in the wilderness for at least a couple of weeks.
You might have better luck in suburbia, growing vegetables in your yard, trading with neighbors, and taking turns patrolling at night than trying to rough it in the wild.
Haven't we learned anything from The Walking Dead?
I have seen this shift myself. A year ago everyone was super excited by AI. Now, if you exit the tech ecosystem, most people have become decidedly “meh” about the tech.
“Is that some nonsense ChatGPT told you?” Has turned into an almost cynical mocking in response to someone commenting about an issue.
The hype seems to have run its course. I’m a fan and use it constantly, but it’s also clear there’s serious storm clouds and headwinds on the horizon.
The tone deafness of the tech community is so unbearable. Either too on the spectrum, too ambitious (the world is fine cause I’m getting mine), or too isolated from non-tech people, to realise most people despise what they’re creating.
There’s also a lack of willingness to ‘bring along’ the public. It’s just “make the god thing; ask for permission later”.
I work with LLMs extensively and daily and they are very useful. BUT dear god, absolutely nothing about them is intelligent.
If you work at the edge of context you know what I mean. Even within context, if the system was truly intelligent, the way that Euclid was intelligent, why do I need /superpowers and 50 cycles to get a certain implementation right?
Why is the AI not one-shotting obscure but simple business logic cases with optimal code? Whoops pattern never seen before! There is no thought to it, zero. The LLM is just shotgunning token prediction and context management until something sticks. The amount of complexity you get out of language is certainly fascinating and surprising at times but it's not intelligence - maybe part of it?
Sell it as skills or whatever, but all you do every day is fancy ways of context management to guardrail the token predictor algorithm into predicting the tokens that you want.
I think it's pretty clear that the problems with AI are:
1. Overhyped. Try writing a blog post that doesn't sound like it. Everyone is sick of reading it now.
2. Affecting the wrong people. It used to be the rich got richer and the poor got poorer. But now a lot of the middle class will get poorer
3. Severely damages the work hard way out. Competition will become brutal if there's almost no barrier to entry. This will drive down profit, affect hiring and will become a conveyor belt of people trying to win the business lottery. This will make moats even more essential.
4. The obvious theft of creative works which destroys dreams and livelihoods.
No wonder the younger generation are against it. Those of us in the middle are still just hoping at least we can get through somehow. At least we have hope.
Automation can free humanity from toil, but automation in the hand of billionaires that does the work of white collar, educated people in a period of economic and cultural turmoil, with no plan to employ them all than hoping UBI descends from heaven unto the world, is the recipe for societal disaster on a massive scale.
People are anti AI for obvious and valid reasons, but I think we should focus on where the profit goes and not on hating the technology itself.
Of course, if people are fired and only capital owners / AI experts get to earn anything then this is wrong and a revolution is obviously needed and unavoidable.
But for me, the best outcome would be if it was AI that did all the jobs so people could focus on doing what they want, not that we'd go back to pre-AI era..
Initially however we need to balance between full wealth redistribution and keeping the incentive to develop AI further.
Of course by AI I mean really useful AI, the real part, not the marketing part.
> The United States reported the lowest trust in its own government to regulate AI responsibly of any country surveyed, at 31%.
It seems US citizens are really against the current administration, just using the fact that AI investment is intrinsecally connected to it to voice their opposition.
> Country-level expectations follow similar patterns to the earlier sentiment trends.
Nigeria, Japan, Mexico, the United Arab Emirates, South Korea, and India all expected AI to create more jobs than it eliminates, with shares above 60%. The United States and Canada sat at the opposite end, where 67% and 68% of respondents expected AI to eliminate jobs and disrupt industries.
Globally, the disconnect is not growing. It's really just an U.S. problem (spilling to neighbouring Canada too).
So, no luddites in sight, again. It's just a public perception over a polemic topic being leveraged for ideological reasons sinking AI on US only.
I think that identifies an issue that is going to cause a real problem for the US in the future. The society is deeply politicised and polarised to the extent that essentially inanimate objects are regarded as having deep political and social significance. When there is political change, it is going to sweep back in the other direction.
It also seems like people on all sides within the AI debate have been fanning those flames thinking is will work in the short-term...and it won't. Big tech played that game in many countries in the early 2010s and it didn't end well.
It must be noted that the U.S. does allow inanimate object makers to fund politicians and such practices are widespread.
If all is well, then it's all good: no need to blame anyone, campaings get funded, etc. If one major crisis occours though, the country self-immolates by design.
Corporate contributions to Federal politicians and candidates are illegal in the US.
The New York Times is allowed to spend money like anyone else praising or slagging politicians, but that’s the First Amendment, not funding candidates.
> Corporate contributions to Federal politicians and candidates are illegal in the US.
And that's why the whole system is divided into two parties that both, each, funnel all their support to the presidential campaign (and then to taking over seats to guarantee more lobbying).
This whole thing would fall apart without lobbying.
One of the most hilarious AI-vangelical posts I've seen recently is from Steve Yegg through Simon Willison [0]....
> The TL;DR is that Google engineering appears to have the same AI adoption footprint as John Deere, the tractor company. Most of the industry has the same internal adoption curve: 20% agentic power users, 20% outright refusers, 60% still using Cursor or equivalent chat tool. It turns out Google has this curve too... [0]
Ummmm... Steve. You think Google might be able to figure out a super huge awesome new thing from 1 out of 5 of their employees. Or, given this is a consistent curve across the industry (even at Google)... Maybe AI is only about a fifth as cool and helpful as you and the enthusiasts think it is?
What the tech elite fail to understand is that we are at historic levels of wealth and income inequality. Access to healthcare is determined by one’s employment which makes what I’m about to explain a matter of life and death.
It doesn’t matter if you think it’s all going to work out and AI will bring an unprecedented era of abundance. That is not the current state.
Now what do you think happens when we dramatically expand productivity with AI? Well, we’re already seeing unprecedented layoffs in tech. And it’s easy to draw the conclusion that unless something structural changes all of the productivity gains from AI will go to investors not workers. Leaving said workers without access to healthcare or housing.
And of course let’s not forget that the tech elite in question supported Trump in the last election - someone who has done everything in his power to reduce healthcare access among the low income / unemployed population. This isn’t fucking rocket science guys.
Paraphrasing the classic, it's not AI that people are unhappy with, it's their life around AI. The world generally appears to have become a harsher and more dangerous place - even though it hasn't. But people and especially tabloid press like finding scapegoats and participating in mass hysteria. The anti-AI hysteria is going to go away soon while AI isn't. It's just another tool, like cars or factories. Granted, it brings some danger, but at the same time it brings overwhelmingly more good.
The minor benefits of vibecoding unusable prototypes or lazy cretins "writing" blogs with AI can't quite compare to the benefits of cars and factories, don't you think?
This reads like such a cope. The only people who are hysterical about AI are the people pushing it, pushing the investments, pushing the AGI risk, pushing the marketing and promising to push workers out of their jobs. Listen to Sam or Dario for 10 minutes and tell me they’re not hysterical themselves. Sam compares himself with Oppenheimer, making direct nuclear weapon analogies, and warns of the dangers of what he is producing, yet the people who are concerned about this are hysterical?
You are in a massive bubble my colleague, and I hope you have held some small doubts in your mind so when it pops you will have something to hold onto.
If "AI" was just free local and open models running on consumer hardware, fewer people would have an issue with it. Which highlights that the issue is with the hyper scalers, the rhetoric, the corporations, the marketing, etc etc.
We are ever so close to nearing the point where 90% of our AI usage can go through providers of open models, who all compete with each other to drive down prices and prevent rug pulls, leaving Dario and Sam holding empty bags.
Fewer, sure, but maybe less than you suggest. Plenty of harms are just as easy under a regime of open models only. Job losses, spamming, scraping the internet, data centers, scams, hacking etc are all possible with open weight models now.
Nah, the issue is more who controls access to these tools. People (rightfully) don't like billionaires or the elite ruling class very much. Without all the hype and investments it wouldn't be seen as such a big deal - just a neat technology.
Free open models are still capable of flooding art communities with slop images, which is worth sympathy, and is not included in your "Which highlights that the issue is with the hyper scalers, the rhetoric, the corporations, the marketing, etc etc".
Without the hoards of grifters who latched onto the AI bubble there would be less slop, and the community would find a way to deal with the bad slop, and would be far more accepting of the good slop.
reply