AI News
Latest news and trends from the world of artificial intelligence
AI Policy Tuesday: The Case for Regulating AI Companies, Not AI Models
Wim Howson Creutzberg will discuss the case for treating business entities, rather than individual models or use cases, as the primary unit for preemptive risk regulation of advanced AI systems.
Google expands AI Mode with visual search and new features
Google is extending its AI Mode with a visual search feature that lets users search for images using natural language, refine the results, and shop directly. Both text and image inputs are accepted, and each result links back to its original source. The feature leverages Gemini 2.5's multimodal abilities and a "visual search fan-out" method, which identifies the main and secondary elements in an image and runs multiple background searches concurrently to deliver detailed answers. Shopping integration is a core focus: users can describe what they want without relying on traditional filters, get relevant product suggestions, and narrow their search further. The feature is powered by Google's Shopping Graph and is initially available in English in the US.
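The fan-out idea described above can be sketched as a small concurrency pattern: identify the elements in an image, issue one search per element in parallel, and merge the results. Everything here, the element names, the `identify_elements` and `search` stand-ins, and the merging step, is an illustrative assumption, not Google's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def identify_elements(image_description):
    """Stand-in for a multimodal model that finds the main subject
    and secondary elements in an image (hypothetical output)."""
    return {"main": "green velvet sofa",
            "secondary": ["brass floor lamp", "wool area rug"]}

def search(query):
    """Stand-in for one background search; returns mock results."""
    return [f"result for '{query}'"]

def visual_search_fanout(image_description):
    elements = identify_elements(image_description)
    queries = [elements["main"]] + elements["secondary"]
    # Fan out: run one search per identified element concurrently.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(search, queries))
    # Fan in: merge per-element results into a single answer.
    return dict(zip(queries, results))

print(visual_search_fanout("photo of a living room"))
```

The point of the pattern is that the secondary searches run in the background at the same time, so a detailed multi-element answer costs roughly one search's latency.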
OpenAI unveils Sora 2 video model with realistic physics, high-quality audio, and a new social app
OpenAI's new Sora 2 model pushes AI video closer to the mainstream, adding more realistic physics, better control, and, for the first time, high-quality audio. The launch also includes a Sora iOS app built for sharing AI-generated videos with friends.
The Dragon Hatchling: The Missing Link between the Transformer and Models of the Brain
The relationship between computing systems and the brain has served as motivation for pioneering theoreticians since John von Neumann and Alan Turing. Uniform, scale-free biological networks, such as the brain, have powerful properties, including generalizing over time, which is the main barrier for Machine Learning on the path to Universal Reasoning Models. We introduce 'Dragon Hatchling' (BDH), a new Large Language Model architecture based on a scale-free, biologically inspired network of locally interacting neuron particles. BDH combines strong theoretical foundations and inherent interpretability without sacrificing Transformer-like performance. BDH is a practical, performant, state-of-the-art attention-based state space sequence learning architecture. In addition to being a graph model, BDH admits a GPU-friendly formulation. It exhibits Transformer-like scaling laws: empirically, BDH rivals GPT-2 performance on language and translation tasks at the same number of parameters (10M to 1B) and on the same training data. BDH can be represented as a brain model. The working memory of BDH during inference relies entirely on synaptic plasticity with Hebbian learning using spiking neurons. We confirm empirically that specific, individual synapses strengthen their connections whenever BDH hears or reasons about a specific concept while processing language inputs. The neuron interaction network of BDH is a graph of high modularity with a heavy-tailed degree distribution. The BDH model is biologically plausible, explaining one possible mechanism human neurons could use to achieve speech. BDH is designed for interpretability. Activation vectors of BDH are sparse and positive. We demonstrate monosemanticity in BDH on language tasks. Interpretability of state, which goes beyond interpretability of neurons and model parameters, is an inherent feature of the BDH architecture.
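The abstract's claim that working memory lives in the synapses can be illustrated with the classic Hebbian rule: a synapse strengthens when its pre- and post-synaptic neurons fire together. The network size, learning rate, and the particular sparse activation pattern below are illustrative assumptions, not the actual BDH update rule.

```python
import numpy as np

n_neurons = 8
synapses = np.zeros((n_neurons, n_neurons))  # working-memory state
eta = 0.5  # learning rate (illustrative value)

def hebbian_step(pre, post, W, eta):
    """Strengthen W[i, j] wherever pre[i] and post[j] are co-active
    ("neurons that fire together wire together")."""
    return W + eta * np.outer(pre, post)

# Sparse, positive activations (as the abstract describes) standing in
# for a recurring "concept" that activates neurons 1 and 4.
concept = np.zeros(n_neurons)
concept[[1, 4]] = 1.0

for _ in range(3):  # the concept appears three times in the input
    synapses = hebbian_step(concept, concept, synapses, eta)

# Only the synapses linking the concept's neurons have strengthened.
print(synapses[1, 4])  # 1.5
```

This mirrors the paper's empirical observation in miniature: after processing a concept repeatedly, the specific synapses connecting that concept's neurons carry a strengthened weight, while unrelated synapses stay at zero.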
FlashMLA
Recent updates on FlashMLA, including the release of sparse attention kernels and performance improvements.
FlashMLA: Efficient Multi-head Latent Attention Kernels

* **2025.09.29 Release of Sparse Attention Kernels**: Released the token-level sparse attention kernels introduced alongside DeepSeek-V3.2, achieving up to 640 TFlops during prefilling and 410 TFlops during decoding. Also published a deep-dive blog on the new FP8 sparse decoding kernel.
* **2025.08.01 Kernels for MHA on SM100**: Merged NVIDIA's PR adding MHA forward/backward kernels for SM100.
* **2025.04.22 Deep-Dive Blog**: Shared the technical details behind the new FlashMLA kernel.
* **2025.04.22 Performance Update**: New release of FlashMLA delivering a 5% to 15% performance improvement, achieving up to 660 TFlops on NVIDIA H800 SXM5 GPUs. Fully compatible with previous versions; simply upgrade for an immediate performance boost! 🚀
Adobe’s video editing app Premiere arrives on iPhones
Adobe's Premiere app is available on mobile devices, offering a variety of editing features, including some that are AI-powered.
DeepSeek slashes API prices by up to 75 percent with its latest V3.2 model
DeepSeek has launched DeepSeek-V3.2-Exp, a language model featuring a new attention architecture called DeepSeek Sparse Attention, which enables more efficient handling of longer texts and builds on the previous V3.1-Terminus model. Benchmark results show that V3.2-Exp delivers performance similar to V3.1-Terminus at a significantly lower cost. In response, DeepSeek is cutting API prices for V3.2-Exp by 50 to 75 percent and making the model immediately available via app, web, and API.
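The efficiency gain from sparse attention comes from each query attending to only a small subset of the context instead of every token. The sketch below uses a generic top-k selection over attention scores as a stand-in; DeepSeek's actual token-selection mechanism is not public in this detail, so the selection rule, sizes, and values here are illustrative assumptions.

```python
import numpy as np

def sparse_attention(Q, K, V, k):
    """Token-level sparse attention sketch: each query keeps only its
    k highest-scoring keys and masks out the rest before softmax."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # (n_q, n_kv)
    # Threshold at each query's k-th largest score.
    kth = np.partition(scores, -k, axis=-1)[:, -k:].min(axis=-1, keepdims=True)
    masked = np.where(scores >= kth, scores, -np.inf)  # drop the rest
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    attended = (weights > 0).sum(axis=-1)              # tokens used per query
    return weights @ V, attended

rng = np.random.default_rng(0)
n, d = 16, 4
Q, K, V = rng.normal(size=(3, n, d))
out, attended = sparse_attention(Q, K, V, k=4)
print(out.shape)  # (16, 4): full output, but each query read only 4 tokens
```

Because the per-query work scales with k rather than with the full context length, this is exactly the kind of change that makes long inputs cheaper to serve, which is what enables the price cut.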
Archive
About Sources
AI news is automatically collected from various sources and translated using AI. Updates occur twice a day.