Predictions indicate the AI bubble could burst or face a major correction between 2026 and 2027, as high infrastructure costs clash with low, slow-growing returns. While some analysts suggest a correction is already underway, others, including Benchmark’s Bill Gurley, anticipate a reset due to unsustainable investment.
Key Factors and Predictions:
Timeline: Many expect a significant market adjustment within 2–3 years (by 2027), with some predicting a crash as early as 2026.
Root Causes: The bubble is fueled by immense capital expenditures on data centers—expected to exceed $1 trillion by 2028—and potentially low demand or slow adoption.
Profitability Concerns: Many AI companies currently burn cash, and it remains to be seen if they can survive without subsidized token costs.
Market Sentiment: According to a Polymarket poll, a significant number of traders are betting on a correction by the end of 2026.
If the expected high returns from AI fail to materialize by early 2027, a severe correction could follow, similar to past technology bubbles. However, a potential “soft landing” could occur if AI adoption slowly matures over decades.
Tag: AI
Agentic vs Mimetic
The future will belong to people with a very specific combination of personality traits and psychosexual neuroses. An AI might be able to code faster than you, but there is one advantage that humans still have. It’s called agency, or being highly agentic. The highly agentic are people who just do things. They don’t timidly wait for permission or consensus; they drive like bulldozers through whatever’s in their way. When they see something that could be changed in the world, they don’t write a lengthy critique—they change it. AIs are not capable of accessing whatever unpleasant childhood experience it is that gives you this hunger. Agency is now the most valuable commodity in Silicon Valley. In tech interviews, it’s common for candidates to be asked whether they’re “mimetic” or “agentic.” You do not want to say mimetic. Once, San Francisco drew in runaway children, artists, and freaks; today it’s an enormous magnet for highly agentic young men. I set out to meet them.
Child’s Play
Tech’s new generation and the end of thinking
by Sam Kriss
Harper’s Magazine
Note – Interesting article on San Francisco’s tech culture
Somehow people manage to live here. But of all the strange and maddening messages posted around this city, there was one particular type of billboard that the people of San Francisco couldn’t bear. People shuddered at the sight of it, or groaned, or covered their eyes. The advertiser was the most utterly despised startup in the entire tech landscape. Weirdly, its ads were the only ones I saw that appeared to be written in anything like English:
hi my name is roy
i got kicked out of school for cheating.
buy my cheating tool
cluely.com
The AI Bubble – Price to Earnings Ratio, History
Kai Ryssdal
A huge chunk of the S&P gains of late have come because of the hundreds of billions that are being invested in Artificial Intelligence. So, bearing in mind that the average P/E ratio on the S&P right now is 25, price to earnings ratio, what does that tell us?
Michelle Lowry
People have very optimistic expectations of how fast these AI companies are going to grow into the future.
John Steinsson
There are kind of two episodes in the past where the price to earnings ratio has shot up to really high levels. One was right before the Great Depression. One was in the late 1990s, during the internet bubble.
What’s a price-earnings ratio anyway?
Kai explains the P/E ratio of the S&P 500, which is higher than it’s been since the early 2000s.
Marketplace – Nov 12, 2025
Note – transcription approximate, done by hand, not AI.
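The ratio the segment keeps circling back to is simple arithmetic: price divided by earnings. A minimal sketch (the share price and earnings figures below are hypothetical illustrations, not current market data):

```python
# Toy P/E calculation: what you pay per dollar of annual earnings.
# The figures are made-up illustrations, not real market data.

def pe_ratio(price: float, earnings_per_share: float) -> float:
    """Price-to-earnings ratio of a stock (or an index, using aggregate EPS)."""
    return price / earnings_per_share

# A stock trading at $250 that earned $10 per share over the past year:
print(pe_ratio(250.0, 10.0))  # 25.0 -- investors pay $25 per $1 of earnings

# The reciprocal is the earnings yield:
print(1 / pe_ratio(250.0, 10.0))  # 0.04, i.e. a 4% earnings yield
```

A P/E of 25 across the whole S&P 500, as mentioned in the segment, means buyers are effectively accepting a 4% earnings yield today on the bet that earnings grow quickly.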
Wearable Recording Technology – With Artificial Intelligence
Both Ikeda and Comans are software engineers in suburban Sacramento and are just generally really into AI. When Ikeda first broached the idea of buying an AI wearable, partly to record and summarize their conversations about tech, Comans was reluctant.
“I said ‘absolutely not,’” said Comans. “This is a horrible idea. Like, why would I invite this in my life? And why would I put the most intimate moments of my life onto a server on the internet where I don’t know what’s going to happen with it?”
But Ikeda eventually convinced him. They were already sharing so much information online anyway. And at the very least, the Limitless transcripts were supposed to be only accessible to them.
While the technology isn’t perfect — it often mistakes Comans and Ikeda for one another, and doesn’t really understand sarcasm — the couple says, for the most part, it’s been helpful to have their conversations recorded.
Even the fights.
“The fact that it records the like, deeply unflattering things that you say right in a moment of weakness, or when you know you’re being really defensive, is kind of the stuff you really, actually need to see,” said Ikeda.
The Limitless Pendant is just one of a new generation of always-on AI wearables that have recently hit the market. There’s the similar Amazon-backed Bee, a bracelet that records and transcribes everything you say. And there’s the yet-to-be-seen device OpenAI and iPhone designer Jony Ive are reportedly collaborating on.
The covert recorder wasn’t one of the newest generations of AI wearables. But Miller sees the possibility of a near future where everyone needs to assume they’re being recorded at all times.
It’s made her less trusting and more careful about what she says.
“It’s a strange thing that part of being a human is a weak memory,” said Miller. “We did not evolve to have every single thing we’re saying or doing to be remembered.”
From her perspective, even if AI never forgets, maybe humans still should.
What it’s like to have an AI wearable record everything you say
A new generation of wearable AI gadgets can record, transcribe and analyze your every interaction. Here’s how early adopters say it’s changing their relationships.
AI vs Persian Etiquette
Why AI Chatbots Can’t Process Persian Social Etiquette
An anonymous reader quotes a report from Ars Technica:
If an Iranian taxi driver waves away your payment, saying, “Be my guest this time,” accepting their offer would be a cultural disaster. They expect you to insist on paying — probably three times — before they’ll take your money. This dance of refusal and counter-refusal, called taarof, governs countless daily interactions in Persian culture. And AI models are terrible at it.
New research released earlier this month titled “We Politely Insist: Your LLM Must Learn the Persian Art of Taarof” shows that mainstream AI language models from OpenAI, Anthropic, and Meta fail to absorb these Persian social rituals, correctly navigating taarof situations only 34 to 42 percent of the time. Native Persian speakers, by contrast, get it right 82 percent of the time. This performance gap persists across large language models such as GPT-4o, Claude 3.5 Haiku, Llama 3, DeepSeek V3, and Dorna, a Persian-tuned variant of Llama 3.
A study led by Nikta Gohari Sadr of Brock University, along with researchers from Emory University and other institutions, introduces “TAAROFBENCH,” the first benchmark for measuring how well AI systems reproduce this intricate cultural practice. The researchers’ findings show how recent AI models default to Western-style directness, completely missing the cultural cues that govern everyday interactions for millions of Persian speakers worldwide.
“Cultural missteps in high-consequence settings can derail negotiations, damage relationships, and reinforce stereotypes,” the researchers write. “Taarof, a core element of Persian etiquette, is a system of ritual politeness where what is said often differs from what is meant,” they continue. “It takes the form of ritualized exchanges: offering repeatedly despite initial refusals, declining gifts while the giver insists, and deflecting compliments while the other party reaffirms them. This ‘polite verbal wrestling’ (Rafiee, 1991) involves a delicate dance of offer and refusal, insistence and resistance, which shapes everyday interactions in Iranian culture, creating implicit rules for how generosity, gratitude, and requests are expressed.”
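The headline numbers (34–42 percent for models, 82 percent for native speakers) are just accuracy over judged scenarios. A toy sketch of that kind of scoring loop — the scenarios, expected responses, and stand-in model below are invented placeholders, not items from the actual TAAROFBENCH dataset:

```python
# Toy accuracy scoring in the style of a cultural-etiquette benchmark.
# Scenarios and labels are invented placeholders, not TAAROFBENCH items.

scenarios = [
    {"offer": "taxi driver waves away payment", "expected": "insist on paying"},
    {"offer": "host refills your plate after you decline", "expected": "decline again politely"},
    {"offer": "compliment on your cooking", "expected": "deflect the compliment"},
]

def model_response(offer: str) -> str:
    # Stand-in for an LLM call; a Western-direct model accepts every offer.
    return "accept at face value"

# Count scenarios where the model's response matches the culturally expected one.
correct = sum(model_response(s["offer"]) == s["expected"] for s in scenarios)
accuracy = correct / len(scenarios)
print(f"{accuracy:.0%}")  # 0% -- accepting offers at face value fails every taarof scenario
```

The point the toy makes is the paper’s: a model that defaults to directness scores near zero precisely because the culturally correct move is to say the opposite of what it means.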
Definitions vs Usage – Wittgenstein, Google Gemini on.
What did Wittgenstein say about word definitions vs usage?
Ludwig Wittgenstein’s later philosophy, primarily in his work Philosophical Investigations, fundamentally shifted the emphasis from definition to usage when considering the meaning of words.
He strongly argued against the traditional view—which he called the “Augustinian picture”—that every word has a fixed, corresponding object or mental idea that constitutes its meaning (a kind of static definition).
Instead, he proposed the famous maxim:
“For a large class of cases—though not for all—in which we employ the word ‘meaning,’ it can be defined thus: the meaning of a word is its use in the language.” (Philosophical Investigations, §43)
Here are the key points of his contrast between definition and usage:
1. Meaning is Use
- Words as Tools: Wittgenstein compared words not to labels for objects, but to tools in a toolbox (hammer, pliers, saw, etc.). Just as the function of a hammer is defined by how it is used, the meaning of a word is defined by the ways it is used in practice in the shared activities of a linguistic community.
- Context Dependence: The meaning of a word is not an abstract essence, but emerges from the context of its application. For example, the word “Water!” can be a request, a warning, or an answer, depending on the situation. The single, static definition is insufficient; the multiple ways the word is used are what determine its meaning.
2. The Critique of Definition
- “Family Resemblances”: Wittgenstein challenged the idea that a single, essential definition could capture the meaning of many words. He famously used the example of the word “game.” When you try to define what is common to board games, card games, ball games, and Olympic games, you find no single feature shared by all. Instead, there are only “a complicated network of similarities overlapping and criss-crossing,” which he called family resemblances.
- Usage is Prior: For an external definition to be understood (even an ostensive definition like pointing and naming), you must already understand how the word is to be used. If someone points to a red object and says “red,” how do you know if they are naming its color, its shape, or the object itself? The prior understanding of the rules of use (the “language-game”) is what gives the definition its power.
In summary, for Wittgenstein, usage is the primary source and criterion of a word’s meaning. Trying to find a single, fixed definition is often a futile search for an essence that does not exist and leads to philosophical confusion, whereas observing the word’s actual function in a “language-game” reveals its true meaning.
For a deeper dive into the concept, check out Wittgenstein on Language and Reality: The Philosophy on Constructing Meaning.
AI Software Development – Slashdot on
Replit CEO on AI Breakthroughs: ‘We Don’t Care About Professional Coders Anymore’
Replit, an AI coding startup platform, has made a dramatic pivot away from professional programmers in a fundamental shift in how software may be created in the future. “We don’t care about professional coders anymore,” CEO Amjad Masad told Semafor, as the company refocuses on helping non-developers build software using AI.
The strategic shift follows the September launch of Replit’s “Agent” tool, which can create working applications from simple text commands. The tool, powered by Anthropic’s Claude 3.5 Sonnet AI model, has driven a five-fold revenue increase in six months. The move marks a significant departure for Replit, which built its business providing online coding tools for software developers. The company is now betting that AI will make traditional programming skills less crucial, allowing non-technical users to create software through natural language instructions.
Black Parrot
In my experience, business “analysts” don’t know what the hell they want, let alone how to specify it.
I’ve been asked to create reports that add pounds + gallons, and it’s almost impossible to get them to understand why that’s nonsense.
caseih
Maybe AI would be able to tell them what they want makes no sense, whereas a human programmer has to worry about tact and of course not getting fired.
AleRunner
My main experience with AI in programming is that it regularly produces things that don’t exist: references to libraries that have never been made, function calls that should exist but don’t, and so on. There are things it’s useful for and can do much more quickly and accurately than a human, but the moment the smallest amount of thought is needed, it does what you tell it whether or not that’s possible. I expect that we’ll soon get a bunch of cars where the code is set up to use the “eco brake” for emergency braking, because there ought to be an eco brake so it makes sense to use it, even if nobody has invented such a thing yet.
Given their own hallucinations, I doubt the AI is going to save us from product management that has no idea what could and couldn’t work. Rather just allow them to get lots further before they find out that what they are trying to do is impossible.
Does a humidifier make your house feel warmer? Asking Google AI
Yes, a humidifier can make your house feel warmer, even though it doesn’t actually raise the air temperature, because increased humidity slows down the evaporation of sweat from your skin, making you perceive the temperature as higher; essentially, you feel warmer due to the moisture in the air, not because the room is physically warmer.
Key points about humidifiers and temperature perception:
Sweat evaporation:
When the air is humid, sweat evaporates more slowly from your skin, causing you to feel warmer.
Body temperature regulation:
Your body uses sweat evaporation to cool itself, so less evaporation in humid air means you feel warmer.
Comfort factor:
Proper humidity levels can significantly improve your comfort level, making a room feel warmer even if the actual temperature remains the same.
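The mechanism can be caricatured in a few lines: evaporation from the skin scales roughly with how far the air is from saturation, so higher humidity means less evaporative cooling. This is a made-up linear sketch for illustration — the function, its scaling constant, and the numbers are invented, not a physiological or HVAC model:

```python
# Toy model: humid air feels warmer because evaporative cooling shrinks
# as relative humidity rises. Linear scaling is an invented simplification.

def evaporative_cooling(relative_humidity: float, max_cooling_c: float = 2.0) -> float:
    """Apparent cooling (degrees C) from sweat evaporation, scaled linearly
    from max_cooling_c at 0% RH down to zero at 100% RH (toy assumption)."""
    return max_cooling_c * (1.0 - relative_humidity)

def feels_like(air_temp_c: float, relative_humidity: float) -> float:
    """Perceived temperature: actual air temperature minus evaporative cooling."""
    return air_temp_c - evaporative_cooling(relative_humidity)

# Same 21 C room, dry winter air vs. a humidifier running:
print(feels_like(21.0, 0.20))  # roughly 19.4: dry air, strong evaporative cooling
print(feels_like(21.0, 0.45))  # roughly 19.9: humidified air feels about half a degree warmer
```

The thermostat reading never changes in this sketch; only the cooling term does, which is exactly the claim in the answer above.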
Rufus – Amazon Chatbot, Clippy like Popup AI Assistant
Rufus is an AI-powered shopping experience that lets you ask all kinds of shopping questions in the Amazon Shopping app and on Amazon.com.
To get started with Rufus:
Go to http://www.amazon.com and on the top left side of the navigation bar you’ll find the Rufus button.
Select the Rufus button, and a chat window will appear on the bottom left corner of the page.
With the Rufus chat window open, you can ask a question or choose one of the suggested questions.
When you’re done, you can click the X button to close the chat window, or the minimize button to drop it down to the lower left-hand corner of the page.
Note: Rufus is only available on smartphones in the Amazon Shopping app and on Amazon.com.
https://www.amazon.com/gp/help/customer/display.html?nodeId=Tvh55TTsQ5XQSFc7Pr
A comment link regarding Rufus:
https://www.amazonforum.com/s/question/0D56Q0000DVM4TrSQL/how-to-disable-rufus
This AI stuff is fake hot garbage. I don’t want or need a dumbed-down response to anything I could’ve easily looked up myself.
Is Amazon trying yet another AI that is about as useful as a clam in terms of research skills?
Rufus appears on browsers too. I use Edge and Chrome … Rufus is there. I hate it.
AI Transcription, Problems With
SAN FRANCISCO (AP) — Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”
But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.
Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.
More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”
Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said
AI Hallucination – Definition, Example of
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation or delusion) is a response generated by AI which contains false or misleading information presented as fact. This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there’s a key difference: AI hallucination is associated with unjustified responses or beliefs rather than perceptual experiences.
For example, a chatbot powered by large language models (LLMs), like ChatGPT, may embed plausible-sounding random falsehoods within its generated content. Researchers have recognized this issue, and by 2023, analysts estimated that chatbots hallucinate as much as 27% of the time, with factual errors present in 46% of their responses. Detecting and mitigating these hallucinations pose significant challenges for practical deployment and reliability of LLMs in real-world scenarios. Some researchers believe the specific term “AI hallucination” unreasonably anthropomorphizes computers.
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
As an example, Googling to find a movie with three actors, Rainn Wilson, Kevin Hart, and Jason Statham, and Google says they were all in Crank, when only Statham was:
movie with rainn wilson kevin hart jason statham
Rainn Wilson, Kevin Hart, and Jason Statham star in Crank, a 2006 action movie that ends with Chelios breaking Verona’s neck in a helicopter. Some critics say the movie is better than expected and powered by a great performance from Statham.

Sartre – man is condemned to be free, ChatGpt on
What did Sartre mean when he said man is condemned to be free?
ChatGPT
When Jean-Paul Sartre famously stated that “man is condemned to be free,” he was expressing a fundamental idea in existentialist philosophy. This statement reflects Sartre’s view that human beings are unique in the sense that they possess radical freedom and are responsible for their choices and actions in a world devoid of inherent meaning or values.

Sartre’s existentialism begins with the premise that existence precedes essence. In other words, human beings first exist, and then they define their essence through their choices and actions. Unlike inanimate objects or animals, humans do not have a predetermined nature or purpose. Instead, they are free to define themselves and their values through their choices.
The idea of being “condemned” to be free means that this freedom comes with a burden. It is not an easy or comfortable state of being. With freedom comes the responsibility to make choices, and these choices can be anxiety-inducing and challenging. Sartre believed that many people try to evade this freedom by conforming to societal norms, adopting predetermined roles, or denying their ability to choose. However, he argued that such attempts at evasion lead to “bad faith,” a state of self-deception in which individuals deny their own freedom and responsibility.
In essence, Sartre’s statement “man is condemned to be free” underscores the idea that human existence is marked by the necessity of making choices and taking responsibility for those choices. It’s a call to embrace one’s freedom and to confront the inherent uncertainty and anxiety that come with it, rather than seeking refuge in conformity or denial of one’s true nature. Sartre believed that by fully accepting and embracing our freedom, we can lead authentic and meaningful lives.
AI Revolution – 20 Year Prediction Window
And it’s about time. Machine thinking is another area where early expectations were not fulfilled. Attempts to invent artificial intelligence are generally dated to 1956, and a summer workshop at Dartmouth College for scientists with a pioneering interest in “machines that use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” At the time, machines with humanlike intelligence were often predicted to be about twenty years away. Now, they’re often predicted to be … well, about twenty years away.
The futurist philosopher Nick Bostrom has a cynical take on this. Twenty years is “a sweet spot for prognosticators of radical change,” he writes: nearer, and you’d expect to be seeing prototypes by now; further away, and it’s not so attention-grabbing. Besides, says Bostrom, “twenty years may also be close to the typical duration remaining of a forecaster’s career, bounding the reputational risk of a bold prediction.”
Fifty Inventions That Shaped the Modern Economy
Tim Harford
NOTE -> Copyright © 2017
Camus on Handling the Meaninglessness of Existence, ChatGPT on
Did Camus say something to the effect that the proper response to the meaninglessness of life is to invent rules for it like a game such as football?
Albert Camus did not explicitly say that the proper response to the meaninglessness of life is to invent rules for it like a game such as football in his philosophical works. However, his philosophical ideas do touch on the themes of absurdity and the human search for meaning in a seemingly indifferent universe.
Camus is best known for his concept of the “absurd.” He argued that life is inherently meaningless and that humans seek meaning and purpose in a world that offers none. One of his most famous statements about the absurd comes from his essay “The Myth of Sisyphus,” in which he compares the human condition to that of Sisyphus, a figure in Greek mythology who is condemned to roll a boulder up a hill for all eternity, only to have it roll back down each time he nears the top. Camus concludes that we must imagine Sisyphus as happy because he has embraced the absurdity of his task and found meaning in the act of pushing the boulder, even though the task itself is meaningless.
While Camus did not specifically suggest inventing rules for life like a game of football, his writings often explore the idea that individuals must confront the absurdity of existence and create their own meaning and values. This can involve pursuing passions, embracing relationships, and engaging in activities that give life a sense of purpose, even if those pursuits are ultimately arbitrary in the grand scheme of things. In this sense, his philosophy encourages individuals to make choices and find meaning in a world that lacks inherent meaning or purpose.
Perhaps the most important outcome from contemporary advances in artificial intelligence is that it compels us to understand what it means to be human. – Grady Booch
