AI News
Latest news and trends from the world of artificial intelligence
Apple's STARFlow-V proves that generative video does not strictly require a diffusion architecture
With STARFlow-V, Apple has introduced a video generation model that diverges technically from competitors like Sora, Veo, and Runway. Designed for greater stability, particularly with longer clips, STARFlow-V relies on "Normalizing Flows" rather than the diffusion models that currently dominate the field.
Deepmind CEO Demis Hassabis predicts three major AI trends for 2026
Demis Hassabis, CEO of Google Deepmind, expects the next year to bring major progress in multimodal models, interactive video worlds, and more reliable AI agents.
LeCun calls Silicon Valley " hypnotized" by genai and pivots to "non-generative" world models
Yann LeCun, Meta's outgoing AI scientist, is launching a new startup built around "world models" - systems designed to understand physical reality rather than just generate text. LeCun argues that Silicon Valley is currently "hypnotized" by generative AI, and he intends to build his project with a heavy reliance on European talent.
OpenAI insists its shopping suggestions shouldn't be seen as advertising
Paid ChatGPT users recently reported seeing a prompt asking them to connect their account with Target for shopping, which some interpreted as an advertisement. But an OpenAI product manager denied that any live ad tests were running and suggested that the prompt was either not real or not an ad. OpenAI's chief researcher acknowledged that the prompt could "feel like" advertising and said the feature has been disabled while the company works on improving accuracy and adding user controls to limit similar suggestions.
Maybe we shouldn't start RSI on purpose.
Please, just please, don't start RSI on purpose. For years, AI x-risk people have warned us that a huge danger comes with AI capable of RSI, and even the mere existence of it poses a threat. We were afraid we would accidentally miss the point of no return, and now, so many people (not even in major AI companies, but in smaller labs too) are trying to bring that point closer *purposefully*. Programs sometimes don't work as we expect them to, even when ***we*** are the ones designing them. How would making the hallucination machine do this job produce something so powerful with *working* guardrails?
AI agents in GitHub and GitLab workflows create new enterprise security risks
Aikido Security warns that plugging AI agents into GitHub and GitLab workflows opens up a serious vulnerability in enterprise environments. Attackers can slip hidden instructions into issues, pull requests, or commits, leading to potential secret leaks or workflow alterations.
NYT sues AI search engine Perplexity for alleged content misuse
The New York Times has sued Perplexity, alleging the startup built its AI product on widespread copyright infringement by scraping and summarizing millions of Times articles, which the publisher says replaces the need for users to visit its website. The lawsuit claims Perplexity bypassed technical barriers meant to block automated access, using hidden tactics even after the Times tried to stop it through robots.txt and IP blocking; the complaint also cites examples where Perplexity’s summaries were so comprehensive that they eliminated user visits to the original site. In addition to copyright issues, the Times argues that Perplexity’s AI damages its brand by generating false information and attributing it to the newspaper, including fabricated quotes and recommendations for unsafe products.
Google outlines MIRAS and Titans, a possible path toward continuously learning AI
Google is formally introducing its Titans architecture a year after the original paper, alongside the MIRAS framework. Both efforts focus on models that can learn continuously and maintain a functional long-term memory that updates during real-world use.
Google gathers triple OpenAI's AI data through its search monopoly
Cloudflare data shows Google accesses 3–5 times more web content than AI rivals like OpenAI, Anthropic, and Microsoft by linking its search and AI crawlers. Website owners cannot block Google’s AI data collection without also disappearing from search results, forcing them to choose between visibility and control over their content. Cloudflare’s CEO argues this practice entrenches Google’s dominance and leaves publishers unable to negotiate fair terms for AI training unless Google separates its crawlers.
OpenAI ordered to turn over 20 million ChatGPT chats to the New York Times
A federal judge ordered OpenAI to give the New York Times 20 million anonymized ChatGPT logs for a copyright lawsuit, dismissing OpenAI’s privacy objections. The logs aim to clarify if OpenAI used Times content to train its AI and whether evidence was manipulated. The case is part of a broader legal battle over AI companies using copyrighted material without permission.
Archive
About Sources
AI news are automatically downloaded from various sources and translated using AI. Updates occur twice a day.