AI Hub /aihub
AI Company Announces Ads Will Be Embedded in Chatbots /aihub/articles/ai-company-announces-ads-will-be-embedded-in-chatbots/ Wed, 04 Feb 2026 13:40:15 +0000 /aihub/?p=1270
A 3D graphic of a blue digital bag of money marked by a dollar sign, hovering over a circuit board

OpenAI has officially announced that advertisements are coming to ChatGPT, marking a significant shift in the business model of the world’s most prominent generative AI platform. While the company frames this move as a means to expand access for free users, the decision raises critical questions for higher education about data privacy, research integrity, and the digital divide.

OpenAI’s new “Go” tier, a subscription priced at $8 per month, as well as the standard free version, will soon feature sponsored content, the company announced. Higher-priced tiers like Plus, Team, and Enterprise will remain ad-free. For universities like UMA, this creates a tiered experience: students, faculty and staff who can afford premium subscriptions will have access to more robust tools, while those on free or low-cost tiers may encounter commercial influence. The ads will appear at the bottom of responses when OpenAI identifies a “relevant sponsored product” based on the conversation.

OpenAI maintains an “Answer Independence” principle, asserting that ads will never influence the core response. However, critics like Cory Doctorow point to the “enshittification” of search engines, where sponsored content eventually crowds out organic results, as a warning. 

A report covering the announcement cited the company’s finances as a factor in the decision, as it considers a public stock offering.

Leon Furze, a prominent voice on AI in education, characterizes OpenAI’s pivot toward advertising as the “freemium model of enshittification that every tech since social media has followed as doctrine.” He warns that ChatGPT holds a “significant advantage” over social media predecessors, because users do not just have their data inferred from digital behaviors like swipes and likes; they “simply tell it” their most personal medical, financial, and academic details, which can then be used to target them with “unprecedented precision.”

In a research context, the concern is whether a student seeking, for example, reliable sources on climate change might eventually see sponsored white papers or corporate-backed studies prioritized over peer-reviewed academic journals.

As OpenAI moves toward a multi-billion dollar advertising revenue model, faculty might consider how “sponsored conversations” affect student information literacy. The shift emphasizes the need for students to distinguish between synthesized AI knowledge and commercial persuasion. Google, and by extension Gemini and NotebookLM, the tools provided to signed-on UMS community members as part of the Google Workspace for Education agreement, is not currently pursuing a sponsored chatbot revenue model.

Read more from Leon Furze in his post:

The Tool to Help AI Think Twice /aihub/articles/the-tool-to-help-ai-think-twice/ Tue, 27 Jan 2026 23:23:38 +0000 /aihub/?p=1252
A robotic hand reaches out with a light bulb grasped between its fingers.

Text from generative AI systems can sometimes read as overconfident, glossing over errors or outright fabrications. Researchers are trying to tackle that. Using “metacognitive” frameworks, developers are working on ways to give large language models tools for monitoring their internal reasoning processes before the text starts to flow.

Researchers at the University of Oxford and the University of Sussex developed a mathematical framework called a “metacognitive state vector” to monitor an AI system’s internal states and to puzzle out whether the system could identify its own errors before generating a response. The group found that AI models can successfully use internal signals to “self-correct,” reducing hallucination rates and improving accuracy by distinguishing between confident knowledge and guesswork.

The research introduces the vector as a mathematical tool that evaluates AI performance across key dimensions: confidence in the answer, identifying contradictory data, prioritizing problems, and a “mood check,” the model’s internal temperature or sense of urgency about the stakes of the problem. This tool allows the system to switch from “System 1” thinking, which is fast and intuitive, to “System 2” processes, slow and deliberative, when it detects a high-stakes or confusing problem.
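In rough outline, that gating behavior can be sketched in a few lines of Python. The field names, thresholds, and mode labels below are illustrative assumptions for explanation only, not the researchers’ actual formulation.

```python
from dataclasses import dataclass

@dataclass
class MetacognitiveState:
    """Toy stand-in for a 'metacognitive state vector' (illustrative fields only)."""
    confidence: float     # how sure the model is about its draft answer, 0-1
    contradiction: float  # how much the available information conflicts, 0-1
    priority: float       # how central this sub-problem is to the overall task, 0-1
    urgency: float        # the "mood check": perceived stakes of the problem, 0-1

def choose_mode(state: MetacognitiveState,
                confidence_floor: float = 0.7,
                risk_ceiling: float = 0.5) -> str:
    """Stay on fast, intuitive 'System 1' generation unless the internal signals
    indicate low confidence or high risk, in which case switch to slower,
    deliberative 'System 2' processing."""
    risky = max(state.contradiction, state.urgency) > risk_ceiling
    unsure = state.confidence < confidence_floor
    return "system_2_deliberate" if (risky or unsure) else "system_1_fast"

# A confident, low-stakes query stays on the fast path...
print(choose_mode(MetacognitiveState(0.9, 0.1, 0.3, 0.2)))  # -> system_1_fast
# ...while a shaky, high-stakes one triggers slower, self-checking reasoning.
print(choose_mode(MetacognitiveState(0.4, 0.6, 0.8, 0.9)))  # -> system_2_deliberate
```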

Researchers said this technology could transform how people interact with generative AI tools. Metacognitive AI could explain why it is uncertain about a specific historical fact or flag a potential contradiction in a scientific hypothesis it is helping a student explore. This transparency could have big implications for AI literacy, shifting the focus from the output to evaluation of the so-called reasoning process itself.

In the field, this kind of self-evaluation loop could help users surface a tool’s limitations. Imagine an AI tool that could eventually declare, “I am only 40% confident in this medical diagnosis,” or “these two sources I found contradict each other,” and explain why. For humans, it takes rigorous brain work and critical thinking to earn confidence in results. Overconfidence is what happens when you skip the effort; we want robots and students to be able to work smarter and harder.

Read the full story on

Why AI Detectors Struggle to Identify Artificial Text /aihub/articles/why-ai-detectors-struggle-to-identify-artificial-text/ Mon, 05 Jan 2026 16:11:13 +0000 /aihub/?p=1211
A magnifying glass with focus on paper

A recent piece from the science news site Live Science highlights a persistent technical challenge in the world of generative AI: software designed to detect AI-written text is failing to keep pace with the technology it’s meant to police.

The “Training Data” Problem 

According to the report, the core issue lies in how detection tools are built. Most current detectors are “learning-based,” meaning they are trained on vast datasets of known human writing and known AI writing. They function by identifying statistical patterns, specifically looking for text that closely resembles the AI data they were trained on.

However, this reliance on training data creates a significant gap. As researcher Ambuj Tewari points out, when new, more advanced AI models are released, they generate text with different patterns and higher complexity than previous generations. Because this new output differs substantially from the detector’s older “training corpus,” the software frequently fails to flag it, resulting in a drop in accuracy.
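A minimal, hypothetical sketch shows why that gap matters. The toy detector below is trained only on a handful of older human and AI samples, so text whose statistical patterns fall outside that training corpus gets an unreliable score; the tiny corpora and scikit-learn pipeline are assumptions for illustration, not how any commercial detector is built.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative "training corpus": known human writing vs. output from an older model.
human_texts = [
    "The meeting ran long, and honestly I lost the thread halfway through.",
    "She'd fixed the fence twice that spring; the goats kept finding new gaps.",
]
older_ai_texts = [
    "In conclusion, it is important to note that there are many factors to consider.",
    "Overall, this topic is significant because it impacts various aspects of society.",
]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(human_texts + older_ai_texts, ["human", "human", "ai", "ai"])

# A newer model writes with patterns unlike anything in the training corpus,
# so the detector's probability estimates here are essentially guesses.
newer_ai_text = ["Frost edged the windows while the kettle ticked toward a boil, unhurried."]
print(detector.predict_proba(newer_ai_text))
```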

A Game of Catch-Up 

The result is a perpetual game of “cat and mouse” where detection tools are functionally obsolete the moment a new large language model (LLM) enters the market. Until detectors can identify AI writing based on universal characteristics rather than historical comparisons, identifying synthetic text will likely remain unreliable.

False Positives

Beyond missing AI text, these tools face growing scrutiny for flagging authentic human writing as artificial. Research suggests that non-native English speakers are disproportionately at risk, as their writing often employs simpler, more predictable sentence structures that algorithms mistake for machine logic. A piece from December 2025 on the site Proofademic outlines the risks of relying on those tools.

Read the full story in Live Science on AI detection technology:

Economists Weigh the Risks of a Potential AI Bubble /aihub/articles/economists-weigh-the-risks-of-a-potential-ai-bubble/ Fri, 02 Jan 2026 15:21:39 +0000 /aihub/?p=1198
A soap bubble floats across a green landscape.

Is the artificial intelligence sector a “gold rush” destined for a golden age, or are investors sprinting toward an economic cliff? As tech giants pour hundreds of billions into data centers and GPUs, comparisons to the Dot-com crash of the early 2000s have moved from whisper networks to front-page news.

In a bearish forecast released in late October 2025, Forrester Research predicted a pullback in corporate AI spending. The firm’s “2026 Technology & Security Predictions” report projects that enterprises will defer 25 percent of their planned AI spending to 2027 as the gap between vendor promises and actual value widens. According to the study, fewer than a third of decision-makers can currently tie their AI investments to tangible financial growth, leading CEOs to prioritize immediate return on investment over experimental deployments.

David Cahn of Sequoia Capital has notably pointed to the “billion-dollar hole” between the revenue the industry needs to generate to pay off its infrastructure debts and the actual earnings of AI companies.

However, not all market watchers predict a catastrophic burst.

Analysts at Wedbush Securities have remained bullish, characterizing the current moment not as hype, but as the start of the “4th Industrial Revolution.” Their research suggests that even if stock valuations correct, the underlying utility of AI represents a foundational shift in productivity, similar to the adoption of the PC.

Furthermore, economic historian Carlota Perez, author of Technological Revolutions and Financial Capital, suggests that bubbles are a natural, if painful, part of deploying new “general purpose technologies.” In her view, even if the bubble bursts, the installation phase leaves behind vital infrastructure, much like the fiber optic cables laid during the Dot-com boom, that eventually fuels a more economically sustainable age of deployment.

For educators and students observing the market, expect volatility, but don’t discount the utility that remains after the hype cycle cools.
US Judge Approves a $1.5 Billion Copyright Settlement with Anthropic /aihub/articles/us-judge-approves-a-1-5-billion-copyright-settlement-with-anthropic/ Mon, 06 Oct 2025 17:46:25 +0000 /aihub/?p=1100

A judge’s gavel and a golden balance scale sit on a wooden desk in front of a computer screen displaying abstract colorful figures, symbolizing law and technology.
Image is licensed as Creative Commons, BY-NC 4.0.

A U.S. federal judge on Sept. 25 gave preliminary approval to a landmark $1.5 billion copyright settlement between the AI firm Anthropic and a group of authors who sued the company. The agreement marks the first major resolution in litigation over generative AI training and copyright.

The case hinges on authors’ claims that Anthropic improperly used millions of their copyrighted works, many unlicensed, to train its large language model, Claude. 

Judge William Alsup, who earlier had pushed back on parts of the deal, said he now considers the settlement “fair,” though final approval depends on satisfying procedural steps, including notice to authors and potential objections. The named plaintiffs include Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson.

The agreement directs Anthropic to pay authors and publishers about $3,000 for each of the books covered by the agreement, which includes about 465,000 works, according to an attorney representing the authors. It does not apply to future books.

The plaintiffs say the settlement signals a legal basis for accountability: that AI developers can’t sidestep laws or creators’ rights.

Leaders in the publishing industry also back the pact. The Association of American Publishers called the settlement a “major step in the right direction in holding AI developers accountable for reckless and unabashed infringement.”

Anthropic frames the settlement as an opportunity to shift toward developing “safe AI systems.” 
The Anthropic case had been set to go to trial in December, with potential damages tallying up to hundreds of billions of dollars. Intellectual property rights advocates say the settlement serves as a warning to rivals like OpenAI, Microsoft and Meta.

AI’s Power Surge: Report Finds Data Centers Could Double Global Electricity Demand /aihub/articles/ais-power-surge-report-finds-data-centers-could-double-global-electricity-demand/ Thu, 25 Sep 2025 23:42:34 +0000 /aihub/?p=998
High voltage electricity towers against a vibrant sunset sky.
Photo via Adobe Stock by ABCDstock.

A new report warns that the same technology driving innovation in fields from medicine to climate modeling could more than double global electricity demand from data centers by 2030. The findings raise urgent questions about whether AI’s promise can be squared with environmental limits.

The report draws together global and regional modelling and datasets, as well as interviews with governments, regulators, technology companies, the energy industry and experts in international development.

The authors found that while AI can help optimize grids and speed clean-energy innovation, tech companies’ ballooning appetite for power risks straining infrastructure and impeding climate progress unless new breakthroughs in clean energy and efficiency meet the new capacity.


Companies Update Numbers on AI’s Environmental Impact /aihub/articles/companies-update-numbers-on-ais-environmental-impact/ Wed, 24 Sep 2025 14:21:35 +0000 /aihub/?p=655


A water drop with impact ripples formed from lines, triangles and particle style design.
Image via Adobe Stock, by creator: pickup.

Ethan Mollick, Associate Professor at The Wharton School and author of Co-Intelligence: Living and Working with AI [Portfolio | April 2024], rounded up some reporting in September on the use of energy and water in operating large language models like ChatGPT.

Multiple sources of data have surfaced on environmental impacts per generative AI prompt. One puts the figure at .00024 kilowatt hours of power and .26 milliliters of water per prompt; another reports .0003 kilowatt hours and .38 milliliters of water.

“That is the same energy as one Google search in 2008 and the equivalent of 6 drops of water,” Mollick said. He added that Google reported a drop in energy use per prompt over the last year by a factor of 33. 

He cautioned that while those numbers match some independent direct measures, they are smaller than what French artificial intelligence startup Mistral AI SAS reported for its older model: 50 milliliters of water and 1.14 grams of carbon emitted per average query.

A piece in The Verge raised questions about Google’s accounting.

The site spoke to Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside, who said Google left out key data in its study, leaving the story of Gemini’s environmental impact with significant gaps. 

One of those gaps is that Google omitted indirect water use in its estimates, focusing on water that data centers use in cooling systems to keep servers from overheating. 

A more accurate accounting of environmental damage would include impacts on local water resources and carbon emissions that factor in the current mix of clean and dirty energy of the local power grid.
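As a rough sketch of what that fuller accounting involves, the lines below multiply one of the per-prompt energy figures reported above by an assumed grid carbon intensity. The 400 grams of CO2 per kilowatt-hour is a placeholder, not a number from any of the reports cited here, and real grids vary widely by region and time of day.

```python
# One set of per-prompt figures reported above.
energy_kwh_per_prompt = 0.0003   # kilowatt-hours of electricity
water_ml_per_prompt = 0.38       # milliliters of direct cooling water

# Assumed carbon intensity of the local grid (placeholder value).
grid_g_co2_per_kwh = 400

emissions_g = energy_kwh_per_prompt * grid_g_co2_per_kwh
print(f"~{emissions_g:.2f} g CO2 per prompt on a 400 g/kWh grid")

# Mistral's figure for its older model, for comparison with the direct-water number above.
mistral_water_ml = 50
print(f"Mistral's reported water use is roughly {mistral_water_ml / water_ml_per_prompt:.0f}x the per-prompt figure above")
```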

This would be an apt time to mention that Jon Ippolito, Professor of New Media at the University of Maine, developed a still-evolving interactive tool for estimating and comparing environmental impacts of AI.

Check it out here:


Educators Compare AI to Plastics /aihub/articles/educators-compare-ai-to-plastics/ Wed, 24 Sep 2025 04:27:12 +0000 /aihub/?p=559
A photo of colorful plastics mixed with rocks on a shore.
Photo by Wolfram Burner, under an Attribution-NonCommercial 2.0 license.

AI and education researchers Jasper Roe, Leon Furze, and Mike Perkins have published a paper that urges educators to use carefully crafted metaphors to sharpen AI literacy. Furze proposed one metaphor that frames generative AI as similar to plastics: malleable, scalable, low cost at the point of use, but sticky and potentially harmful in digital environments.

Like bits of plastic, fragments of synthetic content are durable and can cycle into content that people read and cite, or into the source material that models train on. 

The paper examines other metaphors educators can use for AI literacy, including AI as a funhouse mirror, a map, an echo chamber, and a black box. The authors provide detailed sample learning activities for each of those metaphors.




Researchers Uncover Reasons for Hallucinations from Generative AI /aihub/articles/post-1/ Tue, 23 Sep 2025 23:42:48 +0000 /aihub/?p=471


A mural depicting two abstracted faces in colorful shapes, gazing in the same direction toward the left of the frame.
Image by creator rawpixel.com, under a CC0 1.0 Universal license.

In mid-September, researcher Leon Chlon and his colleagues, Ahmed Karim and Maggie Chlon, released a paper arguing that large language models (LLMs) sometimes invent facts because of flaws in the way they compress information. The problem comes partly from the order in which models read prompts. Key details that appear late in a prompt can slip down the list of priorities or be missed altogether, leading the system to fill in gaps with guesswork, the team wrote.

The study introduces a new way to pinpoint when a model doesn’t have enough information to answer and is therefore more likely to hallucinate. On his Substack, Chlon described a simple way to probe for this: reshuffling the same prompt in different orders. By averaging the model’s answers, researchers can detect when confidence is real and when it’s just a side effect of prompt wording or order.
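In sketch form, the reshuffling idea looks something like the Python below: present the same supporting facts in several different orders, collect an answer for each ordering, and treat agreement across orderings as a proxy for genuine confidence. The ask_model function is a hypothetical placeholder for whatever LLM call is available, and the aggregation step is a simplified stand-in for the paper’s actual method.

```python
import itertools
from collections import Counter

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a large language model."""
    raise NotImplementedError("wire this up to a model of your choice")

def permutation_confidence(facts: list[str], question: str, max_orders: int = 6):
    """Ask the same question with the supporting facts given in different orders,
    then report the most common answer and how often it appeared."""
    answers = []
    for order in itertools.islice(itertools.permutations(facts), max_orders):
        prompt = " ".join(order) + " " + question
        answers.append(ask_model(prompt))
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / len(answers)  # low agreement suggests order-dependent guessing
```

A low agreement rate flags answers whose apparent confidence is really an artifact of where key details sat in the prompt.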

Using an example in medical diagnosis, Chlon demonstrated how an LLM misfires if symptoms appear late in a prompt, a phenomenon he called the Extraction Pathway Problem.
Google Staff Architect Mohammad Ghodratigohar analyzed Chlon’s paper in a YouTube video, demonstrating how to introduce technical, code-level solutions to predict and reduce hallucinations in a model.

