Hacker News

I don't think most people in tech are quite aware of the level of visceral AI hatred amongst non-techies. I've personally witnessed the worst Thanksgiving dinnertable fight I've ever seen (after someone revealed that their recipe was AI-generated, a couple people literally spat out the food they were enjoying and threw their plates in the trash), and a divorce (a very solid marriage between two people who were once both staunchly anti-AI unraveled within weeks after one of them changed their tune and adopted AI at work).



Spitting your food out because the AI generated the recipe is so clearly irrational that I chuckled a bit on reading that

People talk about AI getting things wrong all the time, why is it "so clearly irrational" to be doubtful of a recipe that might include ingredients that can make you sick?

Because I hope that someone whose hands were required to assemble the recipe didn't blindly add ingredients like "bleach" if the AI happened to hallucinate them.

A naive hope perhaps, but this ignores the risk of LLMs just creating a bad recipe based on the blind combination of various recipes in their training data.

As the parent comment said, the people seemed to be enjoying the food, so the LLM didn't create an unpalatable combination, and I can't think of any combination of edible, unharmful ingredients that would combine into something harmful (when consumed in reasonable amounts).

This is exactly what makes it dangerous. Food can taste ok but actually cause you to get sick. Not all bacteria is going to taste off. I'm assuming you're not a chef because if you were then you'd know how absurd your statement is.

For a super simple example, if you don't properly handle or cook raw meat then you risk getting sick even though the food might not immediately taste bad. Maybe that's obvious to you but might not be to the person preparing the food. Another example: Rhubarb pie is supposed to be made with the leaves and not the stalk because the stalk is poisonous and can cause illness. Just kidding, it's actually the other way around but if you were just reading a ChatGPT recipe that made that mistake maybe you wouldn't have caught it.


If meat was involved, the cooking time may have been unsafe if other precautions weren't taken by the cook (like checking the internal temperature).

Your personal hope aside, why is it irrational for them?

Because the implication is that an AI-generated recipe carries more risk than a random human-generated recipe from wherever. People who would trust a 'bleach recipe' from AI would also trust it from a TikTok video or whatever.

Edit: is it irrational to think this way when someone prepares your food?


But that's just a made-up implication to make the other one look better. It's not the only alternative, so it doesn't explain the irrationality.

let's take a second to think about the threat vectors here. The two obvious ones I can think of are: "AI hallucinates and tells you to put non-food into the food" and "AI hallucinates and gives you unsafe prep instructions" (e.g. "heat the chicken to an internal temperature of 110 degrees"). For both of those, it's not clear why "random recipe from an internet blog" is safer than something the AI generates. At some level if someone is preparing your food you need to trust that they know how to prepare food, no matter where they're getting their instructions from.

People who do not understand or even use AI are not in a position to even begin "thinking about threat vectors". That isn't how they've come to their worldview, at all.

Yeah, it's ideological, like a religion as someone else mentioned, that's then supported ex post facto.

Take more than a second! For starters, this isn't the only alternative source of recipes!

> not clear why "random recipe from an internet blog" is safer

So maybe those folks would've reacted similarly to a literal random source.

But also it is pretty clear, because it's way easier to make up completely random stuff with no guardrails and without anyone even noticing; hallucination is a built-in feature of the tool.


Yeah, but I would trust a human writing a blog not to suggest heating chicken to 110°F, because the human writing the blog understands that they are taking responsibility for that recipe... The LLM doesn't have a clue about responsibility, except to regurgitate feel-good snippets about it.

Wild takes in this thread. The copy- and blog-writing industry is just random Fiverr gigs or hires from countries with cheap labour, pumping up SEO rankings.

Everyone grew up understanding "never trust random internet content 100%", yet now we're saying that AI has to be 100% reliable.


Okay, captain pedantic. Clearly I'm assuming a known food blogger with a reputation at stake, employed by Bon Appétit / Food Network / etc., in this scenario. Not some random SEO spam.

>because the human writing the blog understands

Bold assumption


Because it assumes the person actually making the food has no common sense?

We had a billion-dollar AI company install a vending machine that gave stuff away for free, so maybe AI users don't have common sense.

This is an experiment they ran and were prepared to lose money on. It seems perfectly reasonable for an AI company to test their products in adversarial conditions to have a better understanding of its flaws and limitations.

Fantastic story I hadn't heard, April Fools' Day included.

https://www.pcgamer.com/software/ai/anthropic-tasked-an-ai-w...


If they're asking an LLM for a recipe, they don't.

My wife does it all the time, and it's actually decent.

That's just pure nonsense. My partner is a very competent cook who invents new recipes and experiments all the time. I don't see why she can't use LLM output as inspiration, combined with her own expertise, sense of taste, and preferences, to come up with an excellent dish.

That's quite an assertion.

Someone once tried to feed me dinner from a recipe they found on the internet. I punched their lights out and then called the cops.

People get things wrong all the time as well, so I wouldn't trust them either.

People get things wrong in a different, more observable/predictable way. Sure, we are easily tricked dummies and we can't know if a human is right or wrong, but our human-trust heuristics are highly developed. Our AI-trust heuristics don't exist.

I mean I had people serve me expired food and chicken that was half raw. The latter I could observe, the former I couldn't so easily. Both were things that could have made me sick.

For sure. I'm not defending human perfection, I'm defending human caution (Disclaimer: The format of the preceding sentence was chosen without AI assistance).

Dunno about you, but I like the increased viscosity in my sauces when I use glue:

https://www.bbc.com/news/articles/cd11gzejgz4o


I could see being concerned about food safety; I wouldn't trust an AI recipe to tell me how long/what temperature to cook chicken, and I might not trust someone who uses AI to generate recipes to know either.

An appropriate response might be asking "Hey, I don't trust AI... what's the recipe?"

The described action seems performative and emotional, as if they were ideologically opposed to AI. Like spitting out food because it was prepared by a caste you found unclean.


Hi! I love to cook! I also use AI to brainstorm recipes sometimes! Wanna try asking Claude, ChatGPT, Gemini, or even Grok what temperature chicken needs to be cooked to? I just asked Claude: 165°F (74°C) internal temperature.

Where does this come from?


if you ask that question alone, AI is most likely to get it right, but the usual pitfalls of AI apply; they sometimes randomly get things wrong, people are more likely to miss wrong information when it's surrounded with correct information, and LLMs are specifically good at making text that seems correct on the surface. and in my experience, people often use AI specifically because they don't have a lot of knowledge in an area. if you do already know plenty about cooking, I'm sure using AI is probably fine, I just see it as a red flag.

cooking is also a form of art, with a strong social aspect. using AI for it has a similar ick factor to using generative AI for pictures. I'm not saying I immediately distrust anyone using it, but I do think it's a sign that maybe the person cares a bit less about what they're doing.


Arguably, that's wrong - not because it's unsafe, but because it's not the best temperature for any part of the chicken I know of. I'm a big J. Kenji López-Alt and Serious Eats fan, and 165 is too hot for good chicken breast and too cool for good dark meat: https://www.seriouseats.com/chicken-thigh-temperature-techni...

That's such pointless evidence.

Let's see what gemini says in response to a more realistic prompt: https://gemini.google.com/share/f0bcbe46c337

Well, look at that. 1.5 lbs of chicken breast in the oven @425 for 10 minutes, and a minute or two of broiling should do the trick.

Unlike all human-written recipes I found, it doesn't give the temperature to cook it to.


I can't tell if you're criticizing the parent or are innocently asking how Claude knows the temperature for chicken.

To be clear in the case of the former: Harm data points have approximately one trillion times the weight of no-harm data points, as a rule of thumb.


Even if it can give the right answer when asked, will it necessarily account for that in a recipe it generates? A beginning cook may not know enough to ask.

A cook not paying attention or messing up an accurate recipe is overwhelmingly more likely.

If someone is to the point of worrying about AI recipe risk for chicken, they should have already rejected any food made by amateur or professional cooks due to excessive risk.


Yea, I suppose that is fair regarding cook timings.

but was it done with GPT-5.4 xhigh with an adversarial loop?

First thanksgiving dinner?

I interpret it as an expression of disgust. Similar to how people will stop reading and throw away a good book when they learn the author is a morally reprehensible person.

Like, I wouldn't spit the food out.

But I would be disgusted. Someone told me they planned their vacation with an llm and I couldn't help but express disdain for this friend of mine.

Why are we outsourcing creativity and research and interest in discovery to an llm?


Probably because the person wasn't interested in planning their vacation and wanted just to enjoy the end result?

Let's not assume different people find the same parts of the process enjoyable.


Would you have disdain for someone who used a human travel agent to plan out an itinerary?

AI planned a European honeymoon for my wife and me, and it was fantastic, one of our best vacations. I hate internet travel research. We told it our interests and gave it feedback.

I also discovered that the best way to go through an art museum is to walk through with AI, taking pictures of each piece of art. It will tell you the historical context of its creation and give a one-page summary of the most fascinating facts. It's like having a team of 100 art history professors in your pocket.


> Why are we outsourcing creativity and research and interest in discovery to an llm?

This is also weird. I hate planning vacations, but I like going on them.


Really don't get this take. I really hate vacation planning and would outsource this part in a heartbeat. My partner does this for me currently and she seems to enjoy it quite a bit, but if she didn't, the LLM-generated plans I've tried out of curiosity were equally good.

Really? I can think of a few reasons I wouldn't trust AI-generated recipes.

The very fact that your takeaway from that story was "look at how dumb my enemies are" is why this is a conflict worth worrying about.

Are you right? Yeah, basically. Are you going to laugh at your stupid neighbors until they burn your house down in rage? Maybe? You don't treat fear with malice.


lol. If you're against AI recipes, you have bigger problems.

I mostly agree that it's an overreaction. However, "irrational" is a really bad choice of word. Every non-technical person understands that sometimes AI says wrong things - like, random, crazy wrong things, not just a little off. It's just a general rule kept in the back of the mind. Food is easily in that realm of "be careful". Did the AI produce a recipe that would be harmful to you and the cook didn't notice? Almost certainly not. So, sure, they were being over-cautious. But "irrational"? No, no, no. It's definitely rational.

Look at what you're writing.

"Doing X is so clearly irrational that I chuckled a bit."

Please don't perpetuate the image of the elitist techie. That is what was just firebombed.


There is almost nothing seriously dangerous about food, particularly everyday food. There are a handful of niche things that are seriously dangerous, like fugu or poisonous mushrooms that require special preparation.

I think this says more about how neurotic and paranoid people are.


Well, Sam Altman and Jensen Huang are going around bragging about how many people they're going to push out of employment. Might have something to do with it.

> going around bragging about how many people they're going to push out of employment.

When have they bragged about this?


This.

Sam's got a $3 billion net worth.

Jensen's got 165 billion to his name.

They are giddy about taking jobs away, and both are engaged in "tax reduction strategies" and suck up to Donald Trump.

You wonder why people are pissed?


> They are giddy about taking jobs away

This is just your interpretation. My interpretation is that they talk about computers being able to perform, more efficiently and quickly, some intellectual tasks that are now handled by humans. They're excited about technological progress and the new opportunities it provides, not "giddy" about unemployment and economic uncertainty.

> suck up to Donald Trump

When you're the head of a multi-billion-dollar company that employs thousands of people and answers to a board that expects the company to keep growing and making money, it's strategically dumb and irresponsible NOT to suck up to the most childish and vindictive person with real power to screw over you and everyone you employ, all of whom expect you to do everything in your power to prevent that.

If you don't do that and Trump fucks your company over as a result, you're just bad at your job as a CEO.


Where is everyone getting their information from in this thread? It’s like everyone is talking past each other.

I think AI is net-good. But every frontier lab founder has said, paraphrasing, "Things might go horribly wrong and a lot of people might lose jobs, or maybe we'll have an even better economy and people will prosper. We can't operate based on the former, because if our adversaries out-invent us, we're screwed." Likewise, all AI-adjacent companies talk about long-term savings, increasing productivity, needing fewer people to do the same jobs, etc. Obviously fully employed people, especially the ones with things to lose, don't want that.

Also, this is not a uniquely American thing. People in China are going through the same stuff.


Ok, but it still seems to me that the messaging you're describing is pretty far from the original commenter's claim of them being "giddy about taking jobs away".

Honestly, not that much different? They’re passionately selling the product with marketing that it will reduce your headcount. “Giddy about taking jobs away” is pretty apt.

But again, I still believe AI is net good. Just horribly advertised to the general populace. There’s also no appetite in preparation for potential “worst case scenario”, so it makes everything sound bad to a commoner.


I operate in at least one social circle that is heavily not-technical (local politics) and I do not see this at all.

My experience is somewhat in the middle -- I see educated non-technical people who are strongly against AI because they see it as polluting, "wasting water", and harmful to society. Although many use it anyway.

I could totally believe uneducated or less well-adjusted people reacting in the above way, though.


Non-technical indeed. The wasting water or pollution argument is getting really tiring.

I would be careful about this one. While the overall impact (in the global/national aggregate sense) may not be massive, the impact on individual communities near these new hyperscale datacenters is far greater than most people on this site might think.

Look at the grok datacenter in Memphis for one example. The "move fast and break things" mentality in this arena isn't about code anymore, it's being applied to communities.


Let’s say we grant the grok example.

A) How many other datacenters with similar problems can you name?

B) How does this industry compare to every other one on earth? Look at the disproportionate hate it gets compared to industries that are substantially worse.


I'm going to flip this around, I know about the grok data center because I am under the impression it is unusual in terms of approvals and pollution.

How many other datacenters with similar problems can you name? If it is indeed not unique, I would appreciate you pointing out some other examples of the same behavior from non-AI related datacenters.


The hatred is particularly intense on Reddit. I lost a couple of accounts there to suspension, just for speaking in a civil way about the positive aspects of AI.

What do you see?

People in politics aren't that dissimilar to tech bros (especially AI ones) in terms of world view.

People in "local politics" are random neighbors, almost none of whom are "in politics" in the colloquial sense.

Fair enough, but I still think it at least somewhat applies to people who are willing to get involved in any kind of political process beyond the very basics or perhaps some special interest groups.

They are nothing remotely like "tech bros", is my point.

My wife runs a food blog and sometimes uses AI to come up with recipes she tests on us first. One of the best dishes she’s ever made (and one of the best I’ve ever eaten) was pork with an apricot sauce. The pork was fine, but the sauce was absolutely incredible! I’d put it on any kind of meat. Funny thing is, I don’t even like apricots, but the sauce was amazing. My wife does have one advantage, which is that she knows when the AI has hallucinated something crazy and makes appropriate adjustments. I guess it's like anything. AI can be a big help to those who already have a threshold level of background knowledge in a field but can cause big problems for those who don't.

You can’t write something like this and not share the recipe.

From a recent NBC News poll, “the only topics that were less popular than AI were the Democratic Party and Iran”: https://www.nbcnews.com/politics/politics-news/poll-majority...

There is very strong anti-AI sentiment among "techies" too. It's just not absolute or generalized (AI is a huge umbrella term).

You might call me a "techie" and I both use AI and have very strong anti-AI sentiment. I don't think this is a contradiction, because I believe while the technology itself is not bad, the way that people use it definitely is.

People trust AI outputs in ways they should not. They don't understand its sycophantic design and succumb to AI psychosis. They deploy it in antisocial ways, for war, or spam, or scams. They use it to justify layoffs. They use it as a justification to gobble up public funds. They use it to power their winner-take-all late-stage capitalism economy. It goes on and on.


> I both use AI and have very strong anti-AI sentiment.

Me, too. The AI hype machine involves some really bad ideas, the amount of money being poured into "AI" right now distorts everything, public understanding of how these tools work is low, and a lot of contemporary uses both by corporations and governments are irresponsible, dangerous, and likely to produce or reproduce harmful biases and reduce the accountability of humans for crucial decisions and outcomes.

At the same time, it's useful for me at work, and I'm curious about it. I sometimes enjoy using it. It lets me do things I didn't have time for before. It eliminates some procrastination problems for me. I think its use in computing is also likely to be increasingly mandatory for the near-to-moderate term, so it's probably good for me to get used to using it and thinking about it and looking for new useful things it can do for me.

And my own experiences in using AI are part of what drive my anti-AI sentiment as well! I see it do completely insane and utterly stupid things pretty much every day, both in my personal life and in my professional life. I have a visceral awareness of its unreliability because I use it frequently.

I should hope that as hackers we can muster some understanding and respect both for LLM users and for people with hard "anti-AI" stances. Even if you're "pro-AI" to the core (whatever that means), it's worth understanding the most serious and well-considered arguments of critics of LLMs and the contemporary "AI" race. You might even find, as someone who uses and enjoys using LLMs, that you agree with many of them.


I agree completely. The way it's marketed and used is a big part of my distaste, the other part is big tech / AI companies and their actions and ethics. It's why I'm a huge supporter of open source and locally run models, and I am moving most of my workflow to things that I can run on my own machine, or at least on a GPU that I can rent from a plethora of providers.

Politics really is a substitute for religion in America

In secular America at least. Most people in the US are religious, many of them fervently so.

And quite a few of them like to mix their religion with politics.


> And quite a few of them like to mix their religion with politics.

The two things they told us not to talk about at the dinner table in order to have a better experience.

Maybe it was solid advice after all.


Frankly I think a lot of these people are politics first. How else do you explain the dissonance between Jesus’s teachings and their political opinions?

Their politics are perfectly in line with their Christian-themed cult.

Yes but when they’re not, they choose politics. See: Catholics right now.

this is true, but thankfully, religion is declining in America. although if people are replacing it with politics, maybe we need another revival

Religious people can be anti-AI too.

Indeed, but the rage I've seen during political fights at family gatherings (and another politics-induced divorce) pales in comparison to the rage I saw in these two anecdotes. The worst political debates I've seen involved raised voices and some name calling, not spitting food and smashing plates. The only other political divorce I've seen slowly simmered over a few years after Trump was first elected, not in a literal matter of weeks.

America has no lack of religion.

I must live in the upside down. If there are any ardent anti-AI people I come across they're techies. Whereas non-techies are either oblivious or completely and comically locked-in as caricatured in that South Park episode.

The remarkable part of your anecdote is the behavior. Seems to me some humans nowadays are less tolerant of any difference in opinion, AI is just the current reason to pick a fight.

Wonder why that is, and if we'll grow out of it peacefully.


It'll quiet down once we make having opinions we don't like illegal, and/or grounds for being committed to an asylum, the way it was in the old, tolerant days.

Nowadays? It's always been the case, the only thing that changed is the subject.

I think it's gotten way, way worse over the past 20 or so years. I recall having friends spanning several political parties, countries and religions hanging out with barely a sense of tension in the room.

This was obviously a fictional thanksgiving dinner. Nobody is this geezed up about AI assistance.

Nobody in your circle of friends/acquaintances perhaps.

You're okay with sitting at the rear seat of a car while it drives you around the city though.

Can't speak for anyone else, but absolutely no. I don't have any interest in self driving cars.

I would absolutely stop eating a meal if I learned AI was involved in creating it. I suppose I wouldn't literally spit it out but I wouldn't take another bite.

Really? It's just a better way to search for recipes, in my experience.

If they used AI to search for it that's different. I meant if they used AI to generate the recipe.

Why? What if you found out a human was involved in creating it?

First, I would find it disrespectful. But second, I would be concerned that the LLM would tell the human to do something dangerous (like undercooking chicken) and since the human is apparently so desperate and clueless that they're using an LLM they wouldn't know it was a problem.

It's quite prevalent in tech too-- however, folks tend to be quiet because the "use AI for everything or else" hammer is being used across the industry.

Not just non-techies. Plenty of techies share that same visceral hatred. Some of them even use these tools themselves, because it’s a complicated issue with nuances.

Yep, all of us with a clue are keeping our traps shut at work, or even boosting it or slapping it onto projects that don't need it, because this is clearly one of those things where attempting to offer counsel and advice that's contrary to the way the MBA winds are blowing can only hurt your career.

You literally get praised for slopping out as much code as fast as you can, so why not? Makes your boss happy and gives you job security cuz eventually that shit will be completely unmanageable with or without LLMs. Gotta hit those KPIs!

From my own perspective, the "visceral hatred" isn't so much at AI (which I use almost exclusively to generate funny pictures of myself and coworkers) but at the executives that view it as a way to enshittify society.

turning myself (an overweight bearded guy) into an animated hula dancer and turning my coworker into the Terminator and sinking into molten steel don't seem to inspire the same hatred. unless you don't like hula dancers.


Anecdotal. I can't stand generative AI. I wouldn't mind if a friend used stable diffusion to make a pic of their D&D character. I would be very mad if Wizards of the Coast used AI instead of artists in their next source book.

> I don't think most people in tech are quite aware of the level of visceral AI hatred amongst non-techies.

I work in a non-tech industry and I see this all the time from people, but it's not just limited to AI. SV itself evokes hatred in a lot of people on both sides of the spectrum.

I can't repeat the worst things I've heard, but Altman and his ilk should be terrified of the mob violence they're instigating.


Nothing has made me lose hope in the masses more than seeing how much bile is being spewed over a net positive innovation. People will hate AI first after others tell them to, then try to come up with illogical (and often hypocritical) reasoning afterwards to try and justify their prejudice.

Ironically I have noticed it's techies and white collar workers who fear and/or loathe AI the most. Why? Cause they're the most likely whose jobs have been threatened by it or have already been superseded by it.

My blue-collar work buddies don't feel as strongly or as existentially threatened by it. To them, it's just this buzzwordy crap that has ruined entertainment or made the quality of services even worse. It's more of an annoyance than something to fear or loathe outright.

Maybe if the bubble pops and the economy tanks and it affects their bottom line they might hate it as much as the aforementioned people.


I think techies and white collar workers are more likely to see what is coming with AI.

Example: in the very best case, every call center worker will be replaced with a chatbot. Service quality will be worse. Any situation out of the norm will be way more frustrating. It will be buggy. It will get into loops. And there will be no human contact to break the cycle.

I think that's the dynamic that worries people the most - their bank, their landlord, maybe even their 911 service all replaced with something that is even less responsive and even less accountable.


If they divorced within a few weeks, there is zero chance the marriage was solid before the AI disagreement. They were distancing themselves emotionally long before.

Portland?

That is really funny.

The only thing we hear is that our jobs are going to be gone, but we still only get healthcare if we work.

Most SV people live in a bubble inside of a bubble. They don’t understand how their words come across to a significant portion of the population. If they did they would shut the fuck up.

Not sure why you were downvoted so heavily. SV is a bubble if I've seen one.

Silicon Valley just means a concentration of influential and powerful actors including venture capitalists, executives, entrepreneurs who decide the fate of the technology industry. Basically the elites of the XYZ sector.

Same can be applied for DC (politics/military), NY (finance), LA (entertainment).

SV has been around since the '60s through the '90s but didn't get much attention until the beginning of this millennium, due to the huge value creation and control it had over the US economy. It just happens to be relevant in recent times. Hundreds of years ago it was the railroad and oil conglomerates; thousands of years ago, kings and feudal lords. There is always a powerful, influential class pulling the strings; it's a feature of human society.


Surely there must have been underlying tensions in that marriage.

(I don't feel at all confident in that statement; I am requesting reassurance.)


They are pretty good friends of mine and I never sensed any tension. It really was a marriage-ending bolt out of the blue, like discovering an affair or severe financial infidelity.

As an outsider you wouldn't know though.

I don't really want to say "thank you." That story, more to the point that I can't find a priori cause to doubt it, makes me glad I'm about to go enjoy a gorgeous spring afternoon full of birdsong and sunshine. But I appreciate your taking the time to follow up.

I mean the simplest way to look at this is that he's just wrong about the couple being happy.

I was married for a decade. Little of that was happy. (We both made the mistake of marrying each other, then compounded it by both being afraid to be first to admit to having noticed.)

Everyone noticed - and of course I've seen it from the other side, too, many times. You can't hide when people are together who don't want to be. That always shows.


Sorry, no. I was married 18 years and then divorced. Some people weren’t surprised, many were. Ditto for other couples I’ve seen divorce. You can never know what goes on behind closed doors in someone else’s marriage.

It gets easier with time.

Try not to develop AI psychosis before it does. I have had the regrettable privilege of seeing that close at hand quite recently, and it looks like something that can get to be extremely difficult to come back from. Like digging a hole so deep, you end up pulling it in after you.


This is like saying that of course people could tell Ted Bundy was a psychopath, it always shows.

One might insightfully argue the whole point of the psychopath is precisely that it doesn't show. I recommend Cleckley, whose definition is seminal in The Mask of Sanity, [1] originally 1941 but prefer his 1988 fifth edition especially for its rather disconsolate preface. But even a cursory review of either will trivially show the comparison does not hold.

[1] https://gwern.net/doc/psychology/personality/psychopathy/194... - despite the filename, this is the 1988 edition. I like my paper edition (I made my paper edition) but the PDF will serve well enough for your reference here.


One might equally insightfully consider that psychopaths get married.

Begin your reading on page 346, at the heading "Pathologic egocentricity and incapacity for love." After that, review Section Two for its many examples of psychopathic (mis)behavior in the marital context.

Bro. You cannot “always tell”. Get over yourself and whatever you are citing to support that ridiculous claim.

That was never my claim, but I can see how much it means to you.

I've found that most non-tech people are indifferent or, at worst, utterly bored by any mention of AI.

The tech people are the ones that have the strongest opinions one way or the other.


That is my experience, as well.

TBH, people in AI may also resent AI, because they are the first to be impacted by it. They just don't say so openly because, frankly, no one wants to lose their job.

I think you're just in a strange bubble of people because those are absolutely comical responses to learning of AI. I do know some people who are for or anti AI to a stronger extent, but most of those I know simply don't give a shit, they'll use AI if it's there, such as for their job or to ask an LLM questions, but otherwise not think about it.

wow people really are getting psychosis from AI (discourse)

Crypto doesn't get that much hatred, since you don't need to participate in the space, even in non-techie circles. It doesn't affect them and can be safely ignored in its own bubble.

Mentioning "AI" in non-techie circles is a bad idea. It tells you that many here are in a massive bubble, unaware of the visceral hate against AI among people whom it directly affects and who cannot opt out.

Given that AI takes more than it gives back (jobs, energy, water, houses) of course you will get anti-AI activists.


Except when you’re the victim of ransomware that extorts you to pay some bitcoin. But it seems that fewer people have encountered that than having AI forced upon them.

Purely an American phenomenon; most of the world is pro-AI.

> a couple people literally spat out the food they were enjoying and threw their plates in the trash

That was an unnecessarily extreme reaction, as if the AI had 3D-printed the ingredients.


> after someone revealed that their recipe was AI-generated, a couple people literally spat out the food they were enjoying and threw their plates in the trash

Not entirely unwarranted given the track record of LLMs as a chef though:

https://www.theguardian.com/world/2023/aug/10/pak-n-save-sav...

https://www.bbc.com/news/articles/cd11gzejgz4o

Of course it was two years ago and it's unlikely to happen again, but that's the drawback of the “move fast and break things” attitude: sometimes you've broken public perception and it's hard to fix afterwards.



