Tool: DNS Lookup
Status: active
Event type: other
Topic: generative ai
Organization: Anthropic
Country: United States
Articles: 26
Unique sources: 5
Importance / Momentum: 2.69 / 0
Period: 22.03.2026 19:16 — 05.04.2026 20:19
Created: 06.04.2026 06:28:46
Articles in cluster: 26
Title | Source | Published | Score
S Tool: DNS Lookup simon_willison 22.03.2026 19:16 1
Embedding sim.: 1
Entity overlap: 1
Title sim.: 1
Time proximity: 1
NLP type: other
NLP organization: Cloudflare
NLP topic: developer tools
NLP country:


Simon Willison's Weblog, 22nd March 2026

Tool: DNS Lookup

TIL that Cloudflare's 1.1.1.1 DNS service (and 1.1.1.2 and 1.1.1.3, which block malware and malware + adult content respectively) has a CORS-enabled JSON API, so I had Claude Code build me a UI for running DNS queries against all three of those resolvers.

Posted 22nd March 2026 at 7:16 pm
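The JSON interface the post describes is Cloudflare's DNS-over-HTTPS JSON API: a GET against the resolver's /dns-query path with an `Accept: application/dns-json` header. A minimal Python sketch of querying it; the helper names are mine, and the sample payload is illustrative rather than a real captured response:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# The three resolvers named in the post: 1.1.1.2 blocks malware,
# 1.1.1.3 blocks malware plus adult content.
RESOLVERS = {
    "standard": "https://1.1.1.1/dns-query",
    "block-malware": "https://1.1.1.2/dns-query",
    "family": "https://1.1.1.3/dns-query",
}

def build_query_url(resolver: str, name: str, rtype: str = "A") -> str:
    """Build the GET URL for a JSON-format DNS query."""
    return f"{RESOLVERS[resolver]}?{urlencode({'name': name, 'type': rtype})}"

def parse_answers(payload: dict) -> list:
    """Extract (name, TTL, data) triples from a dns-json response body."""
    return [(a["name"], a["TTL"], a["data"]) for a in payload.get("Answer", [])]

def lookup(resolver: str, name: str, rtype: str = "A") -> list:
    """Perform a live query (requires network access)."""
    req = Request(build_query_url(resolver, name, rtype),
                  headers={"Accept": "application/dns-json"})
    with urlopen(req, timeout=10) as resp:
        return parse_answers(json.load(resp))

# Offline demo on an illustrative payload shaped like a dns-json response:
sample = {"Status": 0, "Answer": [
    {"name": "example.com", "type": 1, "TTL": 3600, "data": "93.184.216.34"}]}
print(parse_answers(sample))  # [('example.com', 3600, '93.184.216.34')]
```

Because the API sets permissive CORS headers, the same query can be issued with fetch() from any browser page, which is what makes a purely client-side UI like the one in the post possible.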
Quoting Georgi Gerganov simon_willison 30.03.2026 21:31 0.764
Embedding sim.: 0.7965
Entity overlap: 0.3333
Title sim.: 0.5806
Time proximity: 1
NLP type: other
NLP organization:
NLP topic: generative ai
NLP country:


Note that the main issues that people currently unknowingly face with local models mostly revolve around the harness and some intricacies around model chat templates and prompt construction. Sometimes there are even pure inference bugs. From typing the task in the client to the actual result, there is a long chain of components that atm are not only fragile - are also developed by different parties. So it's difficult to consolidate the entire stack and you have to keep in mind that what you are currently observing is with very high probability still broken in some subtle way along that chain.

— Georgi Gerganov, explaining why it's hard to find local models that work well with coding agents
A quote from Christopher Mims simon_willison 24.03.2026 20:35 0.753
Embedding sim.: 0.8571
Entity overlap: 0.0909
Title sim.: 0.3171
Time proximity: 0.8473
NLP type: other
NLP organization: The Wall Street Journal
NLP topic: ai safety
NLP country:


I really think "give AI total control of my computer and therefore my entire life" is going to look so foolish in retrospect that everyone who went for this is going to look as dumb as Jimmy Fallon holding up a picture of his Bored Ape.

— Christopher Mims, Technology columnist at The Wall Street Journal

Posted 24th March 2026 at 8:35 pm. This is a quotation collected by Simon Willison.
A quote from Matt Webb simon_willison 28.03.2026 12:04 0.744
Embedding sim.: 0.8809
Entity overlap: 0.2222
Title sim.: 0.3824
Time proximity: 0.3266
NLP type: other
NLP organization:
NLP topic: ai agents
NLP country:


The thing about agentic coding is that agents grind problems into dust. Give an agent a problem and a while loop and - long term - it'll solve that problem even if it means burning a trillion tokens and re-writing down to the silicon. [...] But we want AI agents to solve coding problems quickly and in a way that is maintainable and adaptive and composable (benefiting from improvements elsewhere), and where every addition makes the whole stack better. So at the bottom is really great libraries that encapsulate hard problems, with great interfaces that make the "right" way the easy way for developers building apps with them. Architecture! While I'm vibing (I call it vibing now, not coding and not vibe coding) while I'm vibing, I am looking at lines of code less than ever before, and thinking about architecture more than ever before.

— Matt Webb, An appreciation for (technical) architecture

Posted 28th March 2026 at 12:04 pm. This is a quotation collected by Simon Willison.
Anthropic is having a month | TechCrunch techcrunch 31.03.2026 23:58 0.738
Embedding sim.: 0.8601
Entity overlap: 0.2692
Title sim.: 0.3214
Time proximity: 0.5135
NLP type: other
NLP organization: Anthropic
NLP topic: developer tools
NLP country: United States


Anthropic has built its public identity around the winning idea that it’s the careful AI company. It publishes detailed work on AI risk, employs some of the best researchers in the field, and has been vocal about the responsibilities that come with building such powerful technology — so vocal, of course, that it’s right now battling it out with the Department of Defense. On Tuesday, alas, someone there forgot to check a box. It is, notably, the second time in a week. Last Thursday, Fortune reported that Anthropic had accidentally made nearly 3,000 internal files publicly available, including a draft blog post describing a powerful new model the company had not yet announced. Here’s what happened on Tuesday: When Anthropic pushed out version 2.1.88 of its Claude Code software package, it accidentally included a file that exposed nearly 2,000 source code files and more than 512,000 lines of code — essentially the full architectural blueprint for one of its most important products. A security researcher named Chaofan Shou noticed almost immediately and posted about it on X . Anthropic’s statement to multiple outlets was nonchalant as these things go: “This was a release packaging issue caused by human error, not a security breach.” (Internally, we’d guess things were less measured.) Claude Code isn’t a minor product. It’s a command-line tool that lets developers use Anthropic’s AI to write and edit code and has become formidable enough to unsettle rivals. According to the WSJ, OpenAI pulled the plug on its video generation product Sora just six months after launching it to the public to refocus its efforts on developers and enterprises — partly in response to Claude Code’s growing momentum. What leaked was not the AI model itself but the software scaffolding around it — the instructions that tell the model how to behave, what tools to use, and where its limits are. 
Developers began publishing detailed analyses almost immediately, with one describing the product as "a production-grade developer experience, not just a wrapper around an API." Whether this turns out to matter in any lasting way is a question best left to developers. Competitors may find the architecture instructive; at the same time, the field moves fast. Either way, somewhere at Anthropic, you can imagine that one very talented engineer has spent the rest of the day quietly wondering if they still have a job. One can only hope it's not the same engineer, or engineering team, from late last week.

Topics: AI, Anthropic. By Connie Loizos, Editor in Chief & General Manager, TechCrunch.
Netflix AI Team Just Open-Sourced VOID: an AI Model That Erases Objects From Videos — Physics and All marktechpost 04.04.2026 09:03 0.724
Embedding sim.: 0.8139
Entity overlap: 0.3438
Title sim.: 0.1887
Time proximity: 0.9265
NLP type: product_launch
NLP organization: Netflix
NLP topic: video generation
NLP country:


Video editing has always had a dirty secret: removing an object from footage is easy; making the scene look like it was never there is brutally hard. Take out a person holding a guitar, and you're left with a floating instrument that defies gravity. Hollywood VFX teams spend weeks fixing exactly this kind of problem. A team of researchers from Netflix and INSAIT, Sofia University 'St. Kliment Ohridski,' has released VOID (Video Object and Interaction Deletion), a model that can do it automatically. VOID removes objects from videos along with all interactions they induce on the scene — not just secondary effects like shadows and reflections, but physical interactions like objects falling when a person is removed.

What Problem Is VOID Actually Solving?

Standard video inpainting models — the kind used in most editing workflows today — are trained to fill in the pixel region where an object was. They're essentially very sophisticated background painters. What they don't do is reason about causality: if I remove an actor who is holding a prop, what should happen to that prop? Existing video object removal methods excel at inpainting content 'behind' the object and correcting appearance-level artifacts such as shadows and reflections. However, when the removed object has more significant interactions, such as collisions with other objects, current models fail to correct them and produce implausible results. VOID is built on top of CogVideoX and fine-tuned for video inpainting with interaction-aware mask conditioning. The key innovation is in how the model understands the scene — not just 'what pixels should I fill?' but 'what is physically plausible after this object disappears?' The canonical example from the research paper: if a person holding a guitar is removed, VOID also removes the person's effect on the guitar — causing it to fall naturally. That's not trivial.
The model has to understand that the guitar was being supported by the person, and that removing the person means gravity takes over. And unlike prior work, VOID was evaluated head-to-head against real competitors. Experiments on both synthetic and real data show that the approach better preserves consistent scene dynamics after object removal compared to prior video object removal methods, including ProPainter, DiffuEraser, Runway, MiniMax-Remover, ROSE, and Gen-Omnimatte (paper: https://arxiv.org/pdf/2604.02296).

The Architecture: CogVideoX Under the Hood

VOID is built on CogVideoX-Fun-V1.5-5b-InP — a model from Alibaba PAI — and fine-tuned for video inpainting with interaction-aware quadmask conditioning. CogVideoX is a 3D Transformer-based video generation model. Think of it like a video version of Stable Diffusion — a diffusion model that operates over temporal sequences of frames rather than single images. The specific base model (CogVideoX-Fun-V1.5-5b-InP) is released by Alibaba PAI on Hugging Face, and it is the checkpoint engineers will need to download separately before running VOID. The fine-tuned architecture specs: a CogVideoX 3D Transformer with 5B parameters, taking video, quadmask, and a text prompt describing the scene after removal as input, operating at a default resolution of 384×672, processing a maximum of 197 frames, using the DDIM scheduler, and running in BF16 with FP8 quantization for memory efficiency. The quadmask is arguably the most interesting technical contribution here. Rather than a binary mask (remove this pixel / keep this pixel), the quadmask is a 4-value mask that encodes the primary object to remove, overlap regions, affected regions (falling objects, displaced items), and background to keep.
In practice, each pixel in the mask gets one of four values: 0 (primary object being removed), 63 (overlap between primary and affected regions), 127 (interaction-affected region — things that will move or change as a result of the removal), and 255 (background, keep as-is). This gives the model a structured semantic map of what's happening in the scene, not just where the object is.

Two-Pass Inference Pipeline

VOID uses two transformer checkpoints, trained sequentially. You can run inference with Pass 1 alone or chain both passes for higher temporal consistency. Pass 1 (void_pass1.safetensors) is the base inpainting model and is sufficient for most videos. Pass 2 serves a specific purpose: correcting a known failure mode. If the model detects object morphing — a known failure mode of smaller video diffusion models — an optional second pass re-runs inference using flow-warped noise derived from the first pass, stabilizing object shape along the newly synthesized trajectories. It's worth understanding the distinction: Pass 2 isn't just for longer clips — it's specifically a shape-stability fix. When the diffusion model produces objects that gradually warp or deform across frames (a well-documented artifact in video diffusion), Pass 2 uses optical flow to warp the latents from Pass 1 and feeds them as initialization into a second diffusion run, anchoring the shape of synthesized objects frame-to-frame.

How the Training Data Was Generated

This is where things get genuinely interesting. Training a model to understand physical interactions requires paired videos — the same scene, with and without the object, where the physics plays out correctly in both. Real-world paired data at this scale doesn't exist. So the team built it synthetically. Training used paired counterfactual videos generated from two sources: HUMOTO — human-object interactions rendered in Blender with physics simulation — and Kubric — object-only interactions using Google Scanned Objects.
HUMOTO uses motion-capture data of human-object interactions. The key mechanic is a Blender re-simulation: the scene is set up with a human and objects and rendered once with the human present; then the human is removed from the simulation and physics is re-run forward from that point. The result is a physically correct counterfactual — objects that were being held or supported now fall, exactly as they should. Kubric, developed by Google Research, applies the same idea to object-object collisions. Together, they produce a dataset of paired videos where the physics is provably correct, not approximated by a human annotator.

Key Takeaways

VOID goes beyond pixel-filling. Unlike existing video inpainting tools that only correct visual artifacts like shadows and reflections, VOID understands physical causality: if you remove a person holding an object, the object falls naturally in the output video.

The quadmask is the core innovation. Instead of a simple binary remove/keep mask, VOID uses a 4-value quadmask (values 0, 63, 127, 255) that encodes not just what to remove, but which surrounding regions of the scene will be physically affected — giving the diffusion model structured scene understanding to work with.

Two-pass inference solves a real failure mode. Pass 1 handles most videos; Pass 2 exists specifically to fix object morphing artifacts — a known weakness of video diffusion models — by using optical flow-warped latents from Pass 1 as initialization for a second diffusion run.

Synthetic paired data made training possible. Since real-world paired counterfactual video data doesn't exist at scale, the research team built it using Blender physics re-simulation (HUMOTO) and Google's Kubric framework, generating ground-truth before/after video pairs where the physics is provably correct.

Check out the Paper, Model Weights, and Repo.
The post Netflix AI Team Just Open-Sourced VOID: an AI Model That Erases Objects From Videos — Physics and All appeared first on MarkTechPost.
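The four-value quadmask encoding described above can be sketched directly; a minimal NumPy example (the pixel codes 0/63/127/255 follow the article, while the helper name and the toy person/guitar regions are illustrative):

```python
import numpy as np

# Quadmask pixel codes as described for VOID:
PRIMARY, OVERLAP, AFFECTED, BACKGROUND = 0, 63, 127, 255

def build_quadmask(h, w, primary, affected):
    """Encode per-pixel removal semantics in a single uint8 mask.

    primary / affected are boolean arrays: the object to delete and the
    regions it physically influences (e.g. a guitar the person holds).
    Pixels in both sets get the OVERLAP code; everything else is BACKGROUND.
    """
    mask = np.full((h, w), BACKGROUND, dtype=np.uint8)
    mask[affected] = AFFECTED
    mask[primary] = PRIMARY
    mask[primary & affected] = OVERLAP
    return mask

# Toy 6x6 scene: a person occupies the left columns; a held object
# overlaps one of those columns and extends to the right.
person = np.zeros((6, 6), dtype=bool); person[:, :3] = True
guitar = np.zeros((6, 6), dtype=bool); guitar[2:4, 2:5] = True

qm = build_quadmask(6, 6, person, guitar)
print(sorted(np.unique(qm).tolist()))  # [0, 63, 127, 255]
```

The point of the four codes is that the diffusion model receives a structured semantic map (remove / overlap / will-be-affected / keep) rather than a flat binary hole to fill.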
Anthropic ramps up its political activities with a new PAC | TechCrunch techcrunch 03.04.2026 20:22 0.723
Embedding sim.: 0.8429
Entity overlap: 0.1364
Title sim.: 0.3038
Time proximity: 0.5929
NLP type: regulation
NLP organization: Anthropic
NLP topic: ai governance
NLP country: United States


Anthropic has filed documents to create a new political action committee — a sign that, like its peers, the AI lab is committing significant resources toward influencing policy and regulation. AnthroPAC plans to make contributions to both parties during the midterms, including to current D.C. lawmakers and rising political candidates. The PAC will be funded by voluntary employee contributions capped at $5,000, Bloomberg reports. A statement of organization filed with the Federal Election Commission includes a signature by Allison Rossi, Anthropic's treasurer. TechCrunch reached out to Anthropic for more information. AI companies, which are comrades and competitors in a new and often turbulent industry, have increasingly sought to push their preferred policies at the state and federal levels. The Washington Post reported last month that AI companies had already contributed a whopping $185 million to the midterm races. In February, The New York Times also reported on Public First, a new Super PAC that had reportedly received at least $20 million from Anthropic, and which had financed ad campaigns supporting a particular regulatory agenda. Anthropic's political activities have ramped up as the company continues to be enmeshed in a nasty legal battle with the Defense Department. The dispute erupted earlier this year over the government's use of Anthropic's AI models and what guidelines (if any) should exist for that usage.

Topics: AI, AnthroPAC, Anthropic, artificial intelligence, government, In Brief.
How to Build a Netflix VOID Video Object Removal and Inpainting Pipeline with CogVideoX, Custom Prompting, and End-to-End Sample Inference marktechpost 05.04.2026 20:19 0.721
Embedding sim.: 0.8328
Entity overlap: 0.1724
Title sim.: 0.2024
Time proximity: 0.79
NLP type: product_launch
NLP organization: Netflix
NLP topic: video generation
NLP country:


In this tutorial, we build and run an advanced pipeline for Netflix's VOID model. We set up the environment, install all required dependencies, clone the repository, download the official base model and VOID checkpoint, and prepare the sample inputs needed for video object removal. We also make the workflow more practical by allowing secure terminal-style secret input for tokens and optionally using an OpenAI model to generate a cleaner background prompt. As we move through the tutorial, we load the model components, configure the pipeline, run inference on a built-in sample, and visualize both the generated result and a side-by-side comparison, giving us a full hands-on understanding of how VOID works in practice. Check out the Full Codes.

import os, sys, json, shutil, subprocess, textwrap, gc
from pathlib import Path
from getpass import getpass

def run(cmd, check=True):
    print(f"\n[RUN] {cmd}")
    result = subprocess.run(cmd, shell=True, text=True)
    if check and result.returncode != 0:
        raise RuntimeError(f"Command failed with exit code {result.returncode}: {cmd}")

print("=" * 100)
print("VOID — ADVANCED GOOGLE COLAB TUTORIAL")
print("=" * 100)

try:
    import torch
    gpu_name = torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU"
    print(f"PyTorch already available. CUDA: {torch.cuda.is_available()} | Device: {gpu_name}")
except Exception:
    run(f"{sys.executable} -m pip install -q torch torchvision torchaudio")
    import torch
    gpu_name = torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU"
    print(f"CUDA: {torch.cuda.is_available()} | Device: {gpu_name}")

if not torch.cuda.is_available():
    raise RuntimeError("This tutorial needs a GPU runtime. In Colab, go to Runtime > Change runtime type > GPU.")

print("\nThis repo is heavy. The official notebook notes 40GB+ VRAM is recommended.")
print("A100 works best. T4/L4 may fail or be extremely slow even with CPU offload.\n")

HF_TOKEN = getpass("Enter your Hugging Face token (input hidden, press Enter if already logged in): ").strip()
OPENAI_API_KEY = getpass("Enter your OpenAI API key for OPTIONAL prompt assistance (press Enter to skip): ").strip()

run(f"{sys.executable} -m pip install -q --upgrade pip")
run(f"{sys.executable} -m pip install -q huggingface_hub hf_transfer")
run("apt-get -qq update && apt-get -qq install -y ffmpeg git")
run("rm -rf /content/void-model")
run("git clone https://github.com/Netflix/void-model.git /content/void-model")
os.chdir("/content/void-model")

if HF_TOKEN:
    os.environ["HF_TOKEN"] = HF_TOKEN
    os.environ["HUGGINGFACE_HUB_TOKEN"] = HF_TOKEN
    os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

run(f"{sys.executable} -m pip install -q -r requirements.txt")

if OPENAI_API_KEY:
    run(f"{sys.executable} -m pip install -q openai")
    os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

from huggingface_hub import snapshot_download, hf_hub_download

We set up the full Colab environment and prepare the system for running the VOID pipeline. We install the required tools, check whether GPU support is available, securely collect the Hugging Face and optional OpenAI API keys, and clone the official repository into the Colab workspace. We also configure environment variables and install project dependencies so the rest of the workflow can run smoothly without manual setup later.
print("\nDownloading base CogVideoX inpainting model...")
snapshot_download(
    repo_id="alibaba-pai/CogVideoX-Fun-V1.5-5b-InP",
    local_dir="./CogVideoX-Fun-V1.5-5b-InP",
    token=HF_TOKEN if HF_TOKEN else None,
    local_dir_use_symlinks=False,
    resume_download=True,
)

print("\nDownloading VOID Pass 1 checkpoint...")
hf_hub_download(
    repo_id="netflix/void-model",
    filename="void_pass1.safetensors",
    local_dir=".",
    token=HF_TOKEN if HF_TOKEN else None,
    local_dir_use_symlinks=False,
)

sample_options = ["lime", "moving_ball", "pillow"]
print(f"\nAvailable built-in samples: {sample_options}")
sample_name = input("Choose a sample [lime/moving_ball/pillow] (default: lime): ").strip() or "lime"
if sample_name not in sample_options:
    print("Invalid sample selected. Falling back to 'lime'.")
    sample_name = "lime"

use_openai_prompt_helper = False
custom_bg_prompt = None
if OPENAI_API_KEY:
    ans = input("\nUse OpenAI to generate an alternative background prompt for the selected sample? [y/N]: ").strip().lower()
    use_openai_prompt_helper = ans == "y"

We download the base CogVideoX inpainting model and the VOID Pass 1 checkpoint required for inference. We then present the available built-in sample options and choose which sample video we want to process. We also initialize the optional prompt-helper flow to decide whether to generate a refined background prompt with OpenAI.

if use_openai_prompt_helper:
    from openai import OpenAI
    client = OpenAI(api_key=OPENAI_API_KEY)
    sample_context = {
        "lime": {"removed_object": "the glass", "scene_hint": "A lime falls on the table."},
        "moving_ball": {"removed_object": "the rubber duckie", "scene_hint": "A ball rolls off the table."},
        "pillow": {"removed_object": "the kettlebell being placed on the pillow", "scene_hint": "Two pillows are on the table."},
    }
    helper_prompt = f"""
You are helping prepare a clean background prompt for a video object removal model.
Rules:
- Describe only what should remain in the scene after removing the target object/action.
- Do not mention removal, deletion, masks, editing, or inpainting.
- Keep it short, concrete, and physically plausible.
- Return only one sentence.

Sample name: {sample_name}
Target being removed: {sample_context[sample_name]['removed_object']}
Known scene hint from the repo: {sample_context[sample_name]['scene_hint']}
"""
    try:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0.2,
            messages=[
                {"role": "system", "content": "You write short, precise scene descriptions for video generation pipelines."},
                {"role": "user", "content": helper_prompt},
            ],
        )
        custom_bg_prompt = response.choices[0].message.content.strip()
        print(f"\nOpenAI-generated background prompt:\n{custom_bg_prompt}\n")
    except Exception as e:
        print(f"OpenAI prompt helper failed: {e}")
        custom_bg_prompt = None

prompt_json_path = Path(f"./sample/{sample_name}/prompt.json")
if custom_bg_prompt:
    backup_path = prompt_json_path.with_suffix(".json.bak")
    if not backup_path.exists():
        shutil.copy(prompt_json_path, backup_path)
    with open(prompt_json_path, "w") as f:
        json.dump({"bg": custom_bg_prompt}, f)
    print(f"Updated prompt.json for sample '{sample_name}'.")

We use the optional OpenAI prompt helper to generate a cleaner and more focused background description for the selected sample. We define the scene context, send it to the model, capture the generated prompt, and then update the sample's prompt.json file when a custom prompt is available. This makes the pipeline more flexible while keeping the original sample structure intact.
import numpy as np
import torch.nn.functional as F
from safetensors.torch import load_file
from diffusers import DDIMScheduler
from IPython.display import Video, display

from videox_fun.models import (
    AutoencoderKLCogVideoX,
    CogVideoXTransformer3DModel,
    T5EncoderModel,
    T5Tokenizer,
)
from videox_fun.pipeline import CogVideoXFunInpaintPipeline
from videox_fun.utils.fp8_optimization import convert_weight_dtype_wrapper
from videox_fun.utils.utils import get_video_mask_input, save_videos_grid, save_inout_row

BASE_MODEL_PATH = "./CogVideoX-Fun-V1.5-5b-InP"
TRANSFORMER_CKPT = "./void_pass1.safetensors"
DATA_ROOTDIR = "./sample"
SAMPLE_NAME = sample_name
SAMPLE_SIZE = (384, 672)
MAX_VIDEO_LENGTH = 197
TEMPORAL_WINDOW_SIZE = 85
NUM_INFERENCE_STEPS = 50
GUIDANCE_SCALE = 1.0
SEED = 42
DEVICE = "cuda"
WEIGHT_DTYPE = torch.bfloat16

print("\nLoading VAE...")
vae = AutoencoderKLCogVideoX.from_pretrained(
    BASE_MODEL_PATH,
    subfolder="vae",
).to(WEIGHT_DTYPE)

video_length = int(
    (MAX_VIDEO_LENGTH - 1) // vae.config.temporal_compression_ratio * vae.config.temporal_compression_ratio
) + 1
print(f"Effective video length: {video_length}")

print("\nLoading base transformer...")
transformer = CogVideoXTransformer3DModel.from_pretrained(
    BASE_MODEL_PATH,
    subfolder="transformer",
    low_cpu_mem_usage=True,
    use_vae_mask=True,
).to(WEIGHT_DTYPE)

We import the deep learning, diffusion, video display, and VOID-specific modules required for inference. We define key configuration values, such as model paths, sample dimensions, video length, inference steps, seed, device, and data type, and then load the VAE and base transformer components. This section builds the core model objects that underpin the inpainting pipeline.
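The effective-length computation above snaps the requested frame count onto the VAE's temporal grid (counts of the form k * ratio + 1). A standalone sketch; the ratio value 4 is an assumption used for illustration, since the real value comes from vae.config.temporal_compression_ratio:

```python
def effective_video_length(max_frames: int, ratio: int) -> int:
    # Largest frame count of the form k * ratio + 1 that fits in max_frames,
    # mirroring: (MAX_VIDEO_LENGTH - 1) // ratio * ratio + 1
    return (max_frames - 1) // ratio * ratio + 1

print(effective_video_length(197, 4))  # 197 (already on the grid)
print(effective_video_length(200, 4))  # 197 (rounded down to the grid)
```

This is why MAX_VIDEO_LENGTH = 197 passes through unchanged under a compression ratio of 4: 196 is a multiple of 4, so 197 already sits on the grid.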
print(f"Loading VOID checkpoint from {TRANSFORMER_CKPT} ...")
state_dict = load_file(TRANSFORMER_CKPT)

param_name = "patch_embed.proj.weight"
if state_dict[param_name].size(1) != transformer.state_dict()[param_name].size(1):
    latent_ch, feat_scale = 16, 8
    feat_dim = latent_ch * feat_scale
    new_weight = transformer.state_dict()[param_name].clone()
    new_weight[:, :feat_dim] = state_dict[param_name][:, :feat_dim]
    new_weight[:, -feat_dim:] = state_dict[param_name][:, -feat_dim:]
    state_dict[param_name] = new_weight
    print(f"Adapted {param_name} channels for VAE mask.")

missing_keys, unexpected_keys = transformer.load_state_dict(state_dict, strict=False)
print(f"Missing keys: {len(missing_keys)}, Unexpected keys: {len(unexpected_keys)}")

print("\nLoading tokenizer, text encoder, and scheduler...")
tokenizer = T5Tokenizer.from_pretrained(BASE_MODEL_PATH, subfolder="tokenizer")
text_encoder = T5EncoderModel.from_pretrained(
    BASE_MODEL_PATH,
    subfolder="text_encoder",
    torch_dtype=WEIGHT_DTYPE,
)
scheduler = DDIMScheduler.from_pretrained(BASE_MODEL_PATH, subfolder="scheduler")

print("\nBuilding pipeline...")
pipe = CogVideoXFunInpaintPipeline(
    tokenizer=tokenizer,
    text_encoder=text_encoder,
    vae=vae,
    transformer=transformer,
    scheduler=scheduler,
)
convert_weight_dtype_wrapper(pipe.transformer, WEIGHT_DTYPE)
pipe.enable_model_cpu_offload(device=DEVICE)
generator = torch.Generator(device=DEVICE).manual_seed(SEED)

print("\nPreparing sample input...")
input_video, input_video_mask, prompt, _ = get_video_mask_input(
    SAMPLE_NAME,
    sample_size=SAMPLE_SIZE,
    keep_fg_ids=[-1],
    max_video_length=video_length,
    temporal_window_size=TEMPORAL_WINDOW_SIZE,
    data_rootdir=DATA_ROOTDIR,
    use_quadmask=True,
    dilate_width=11,
)

negative_prompt = (
    "Watermark present in each frame. The background is solid. "
    "Strange body and strange trajectory. Distortion."
)

print(f"\nPrompt: {prompt}")
print(f"Input video tensor shape: {tuple(input_video.shape)}")
print(f"Mask video tensor shape: {tuple(input_video_mask.shape)}")

print("\nDisplaying input video...")
input_video_path = os.path.join(DATA_ROOTDIR, SAMPLE_NAME, "input_video.mp4")
display(Video(input_video_path, embed=True, width=672))

We load the VOID checkpoint, align the transformer weights when needed, and initialize the tokenizer, text encoder, scheduler, and final inpainting pipeline. We then enable CPU offloading, seed the generator for reproducibility, and prepare the input video, mask video, and prompt from the selected sample. By the end of this section, we have everything ready for actual inference, including the negative prompt and the input video preview.

print("\nRunning VOID Pass 1 inference...")
with torch.no_grad():
    sample = pipe(
        prompt,
        num_frames=TEMPORAL_WINDOW_SIZE,
        negative_prompt=negative_prompt,
        height=SAMPLE_SIZE[0],
        width=SAMPLE_SIZE[1],
        generator=generator,
        guidance_scale=GUIDANCE_SCALE,
        num_inference_steps=NUM_INFERENCE_STEPS,
        video=input_video,
        mask_video=input_video_mask,
        strength=1.0,
        use_trimask=True,
        use_vae_mask=True,
    ).videos
print(f"Output shape: {tuple(sample.shape)}")

output_dir = Path("/content/void_outputs")
output_dir.mkdir(parents=True, exist_ok=True)
output_path = str(output_dir / f"{SAMPLE_NAME}_void_pass1.mp4")
comparison_path = str(output_dir / f"{SAMPLE_NAME}_comparison.mp4")

print("\nSaving output video...")
save_videos_grid(sample, output_path, fps=12)
print("Saving side-by-side comparison...")
save_inout_row(input_video, input_video_mask, sample, comparison_path, fps=12)
print(f"\nSaved output to: {output_path}")
print(f"Saved comparison to: {comparison_path}")

print("\nDisplaying generated result...")
display(Video(output_path, embed=True, width=672))
print("\nDisplaying comparison (input | mask | output)...")
display(Video(comparison_path, embed=True, width=1344))
print("\nDone.")

We run the actual VOID Pass 1 inference on the selected sample using the prepared prompt, mask, and model pipeline. We save the generated output video and also create a side-by-side comparison video so we can inspect the input, mask, and final result together. We display the generated videos directly in Colab, which helps us verify that the full video object-removal workflow works end to end. In conclusion, we created a complete, Colab-ready implementation of the VOID model and ran an end-to-end video inpainting workflow within a single, streamlined pipeline. We went beyond basic setup by handling model downloads, prompt preparation, checkpoint loading, mask-aware inference, and output visualization in a way that is practical for experimentation and adaptation. We also saw how the different model components come together to remove objects from video while preserving the surrounding scene as naturally as possible. At the end, we successfully ran the official sample and built a working foundation for extending the pipeline to custom videos, prompts, and more advanced research use cases. The post How to Build a Netflix VOID Video Object Removal and Inpainting Pipeline with CogVideoX, Custom Prompting, and End-to-End Sample Inference appeared first on MarkTechPost.
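The checkpoint-loading step in the tutorial adapts the patch-embedding weight by copying only the leading and trailing feature slices from the checkpoint, leaving any extra middle input channels at their initialized values. The same slicing logic can be isolated and sanity-checked on its own; a minimal numpy sketch with illustrative shapes (not the real model's dimensions):

```python
import numpy as np

# Target weight: (out_ch, in_ch) with more input channels than the checkpoint.
target = np.zeros((4, 300))
ckpt = np.ones((4, 256))  # checkpoint weight with fewer input channels

feat_dim = 16 * 8  # latent channels x feature scale, as in the tutorial

adapted = target.copy()
adapted[:, :feat_dim] = ckpt[:, :feat_dim]    # leading feature slice copied
adapted[:, -feat_dim:] = ckpt[:, -feat_dim:]  # trailing feature slice copied

print(adapted[:, :feat_dim].sum())           # leading slice filled from the checkpoint
print(adapted[:, feat_dim:-feat_dim].sum())  # middle channels left untouched
```

This mirrors why the tutorial loads with `strict=False` afterwards: the adapted tensor matches the target shape, while any remaining mismatches are tolerated rather than raised.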
Anthropic's Claude popularity with paying consumers is skyrocketing | TechCrunch techcrunch 28.03.2026 14:15 0.72
Embedding sim.0.8356
Entity overlap0.1923
Title sim.0.1887
Time proximity0.762
NLP типother
NLP организацияAnthropic
NLP темаlarge language models
NLP странаUnited States

Открыть оригинал

Whatever the final outcome for Anthropic from its feud with the Department of Defense, the attention it has generated — coupled with the company’s funny Super Bowl ads taking aim at OpenAI and the surging popularity of Claude Code — has made Anthropic more popular with consumers than ever. An examination of billions of anonymized credit card transactions from about 28 million U.S. consumers, conducted for TechCrunch by Indagari , a consumer transaction analysis company, shows Claude gaining paid subscribers in record numbers. Now, as with all big-data analysis, caveats exist. While this data is substantive, it doesn’t include every consumer. That means that Indagari can’t calculate Anthropic’s total current or new user numbers. It also doesn’t include Claude’s enterprise business (which is its bread and butter) or its free-tier users (those not paying Anthropic at all). Estimates for total Claude consumer users are all over the map (we’ve seen figures ranging from 18 million to 30 million) but Anthropic has not disclosed this data. A spokesperson did tell TechCrunch, however, that Claude paid subscriptions have more than doubled this year. What’s notable is that consumers pulled out their wallets in record numbers for Claude between January and February. Also interesting, previous users returned to Claude in record numbers in February as well, Indagari told TechCrunch. Claude total users six months Sept-Feb. Image Credits: TechCrunch Indagari tells us that the majority of new subscribers are at its lowest tier, “Pro” users ($20 per month, compared with $100 or $200 per month). Data through early March confirm that subscriber growth is continuing. (Data is available with a two-week delay.) Claude weekly new consumer subscribers vs. ChatGPT. 
Image Credits: TechCrunch To recap why consumers may have become so much more aware of Claude since January: Anthropic released several Super Bowl commercials that mocked ChatGPT’s decision to show ads to its users — and promised Claude would never do the same. The spots were funny and effective (and got under the skin of OpenAI CEO Sam Altman). But the bigger hullabaloo began in late January when multiple media sites, including the Wall Street Journal and Axios, began reporting on a deepening feud between Anthropic and the DOD. At its core, the dispute was about what the military could and couldn’t do with Anthropic’s AI. Anthropic refused to allow the DOD to use its AI models for lethal autonomous operations (AI potentially killing people) or mass surveillance of American citizens. That beef grew increasingly public, with Anthropic’s CEO Dario Amodei issuing a firm public statement on February 26 amid the DOD’s threats to hurt Anthropic’s business by labeling the company a supply risk. Which the DOD did. Lawsuits are now flying, although a federal judge this week temporarily blocked the department’s designation. New user growth climbed sharply during this period.
The increase is especially pronounced between those late January media reports and Amodei’s statement on February 26. Claude new consumer users, six months, Sept-Feb. Image Credits: TechCrunch Beyond the drama, Claude Code and Claude Cowork — developer and productivity tools released in January — have been drivers of subscriptions. The Computer Use feature, released this week, has also sparked a surge, Anthropic tells TechCrunch. That feature allows Claude to navigate a computer independently — clicking, scrolling, and taking actions on its own. It works with Dispatch, which lets users assign tasks from their phones. These features are not available to free-tier users. Still, for all of Anthropic’s growth among U.S. consumers willing to pay for AI, Claude remains a long way behind ChatGPT. While OpenAI’s uninstalls spiked immediately after it announced a deal with the DOD — a move that stood in contrast to Anthropic’s safety stand — Indagari’s data shows that OpenAI is still gaining new paid subscribers at a rapid rate and remains the biggest consumer AI platform of them all. Topics: AI, Anthropic, Anthropic Claude. Julie Bort, Venture Editor. © 2026 TechCrunch Media LLC.
A quote from Neurotica simon_willison 23.03.2026 23:31 0.709
Embedding sim.0.7643
Entity overlap0.25
Title sim.0.3824
Time proximity0.9727
NLP типother
NLP организация
NLP темаgenerative ai
NLP страна

Открыть оригинал

23rd March 2026 slop is something that takes more human effort to consume than it took to produce. When my coworker sends me raw Gemini output he’s not expressing his freedom to create, he’s disrespecting the value of my time — Neurotica, @schwarzgerat.bsky.social Posted 23rd March 2026 at 11:31 pm This is a quotation collected by Simon Willison, posted on 23rd March 2026.
Anthropic is having a moment in the private markets; SpaceX could spoil the party | TechCrunch techcrunch 04.04.2026 01:31 0.708
Embedding sim.0.8055
Entity overlap0.2143
Title sim.0.4048
Time proximity0.5622
NLP типother
NLP организацияRainmaker Securities
NLP темаfinancial ai
NLP странаUnited States

Открыть оригинал

Glen Anderson has been brokering trades in private company shares since 2010, back when the number of institutional investors focused on the late-stage private market could be counted on two hands. Today, he says, there are thousands. As president of the investment bank Rainmaker Securities, whose focus includes private securities markets — it facilitates transactions in roughly 1,000 stocks — Anderson has a front-row seat to one of the most nail-biting moments in the history of the secondary market. And right now, he suggests, the narrative has three main characters: Anthropic, OpenAI, and SpaceX. But the storyline is more complicated than the headlines suggest. Anderson’s read on Anthropic is consistent with what Bloomberg reported earlier this week: demand for the company’s shares has become almost insatiable. Bloomberg quoted Ken Smythe, founder and CEO of Next Round Capital, saying that buyers had indicated to his outfit that they had $2 billion of cash ready to deploy into Anthropic, even as roughly $600 million in OpenAI shares that investors are trying to sell haven’t found takers. Anderson sees something similar at Rainmaker. “The hardest stock to source in our marketplace is Anthropic,” he told TechCrunch yesterday afternoon from his Miami home. “There’s just no sellers.” Part of what turbocharged that demand, Anderson argues, was Anthropic’s very public standoff with the Department of Defense — a turn of events that initially seemed like bad news for the company but has wound up becoming a gift. “The app got more popular, people rallied around the company as kind of a hero, taking on big government,” he said. “I think it amplified the story and made it even more differentiated from OpenAI.” That distinction is becoming increasingly meaningful to investors navigating a market where, for years, the prevailing logic was to bet on everyone. Anderson notes that many institutional investors still want exposure to both Anthropic and OpenAI. “The jury’s still out,” he said, on which AI model will ultimately win – but the momentum, at least in the secondary market, has shifted. That doesn’t mean OpenAI has fallen off a cliff. Anderson pushes back slightly on a binary reading of the situation. “I wouldn’t say it’s a one-or-the-other conversation,” he said. But the excitement isn’t there. “It’s not nearly as vibrant a market as Anthropic right now,” he acknowledged. On valuation, Anderson broadly confirmed Bloomberg’s reporting that OpenAI shares on the secondary market are trading as if the company were valued at $765 billion — an appreciable discount to the company’s newest $852 billion primary-round valuation. He cautioned that he was working from memory, but said the Bloomberg figure was “in the right range.” OpenAI itself has tried to assert more control over secondary trading.
“People should be extremely cautious of any firm that purports to have access to OpenAI equity, including through an SPV,” an OpenAI spokesperson told Bloomberg, noting the company had established authorized channels through banks, with no fees, to counter what it described as a high-fee broker model. Perhaps tellingly — at least for now — banks including Morgan Stanley and Goldman Sachs have begun offering OpenAI shares to their high-net-worth clients without charging carry fees, according to Bloomberg. Goldman, meanwhile, is charging its customary carry – often 15% to 20% of profits – for clients seeking Anthropic exposure. What none of this accounts for is SpaceX, which stands apart amid shifting sentiment around these other powerful brands. Anderson describes it as one of the only names in Rainmaker’s universe that never experienced the punishing correction that hit much of the private market between 2022 and 2024, a period when many private companies’ shares fell 60% to 70% from their peaks (after their valuations were run up just as fast). The rocket and satellite behemoth has “been pretty much consistently up and to the right,” Anderson said. Anderson, who, naturally, has an economic interest in flattering the company and its earlier backers, credits SpaceX’s management with disciplined pricing and not squeezing every last dollar out of each funding round or tender offer. “A lot of companies will fall for the temptation to maximize the price of their stock in every round,” he said. “The problem is that that doesn’t leave any room for error.” SpaceX, by contrast, played it conservatively, by “not getting too greedy,” and the payoff for earlier investors has been enormous. “You can imagine if someone got in in 2015 what kind of gain they’re sitting on right now,” said Anderson. To put a finer point on that comment: SpaceX was valued at roughly $12 billion in 2015, when Google and Fidelity jointly invested $1 billion in the company. 
Someone who got in at that price is now sitting on a gain of more than 100x, with the company valued at more than $1 trillion ahead of its planned IPO. That IPO is now imminent, seemingly. SpaceX filed confidentially this week for an initial public offering, setting the stage for what could be one of the largest market debuts in history, with Elon Musk reportedly aiming to raise between $50 billion and $75 billion, possibly in June. Only Saudi Aramco’s 2019 debut, which valued the energy giant at $1.7 trillion, has come close. Unsurprisingly, the rumored filing has already changed the dynamics of the secondary market for SpaceX shares, according to Anderson. “Today, I saw a flood of SpaceX investors coming to me saying, ‘Can you give me SpaceX?’” he noted. “It’s been a very active buy side.” But supply is drying up. The closer a company gets to an IPO, the less incentive existing shareholders have to sell because they can see the liquidity event on the horizon. That’s where things get a little dicier for OpenAI and Anthropic. Both companies are reportedly exploring public offerings of their own and have signaled they could move this year. But SpaceX, by filing first, is about to test the market’s appetite in a major way, and Anderson suggested that whoever follows will be at a disadvantage. “SpaceX is going to soak up a lot of liquidity,” he said flatly. “There’s only so much money out there allocated to IPOs.” The first mover gets to the trough first; those who follow face both more scrutiny and, potentially, less capital. It’s a dynamic that plays out in every so-called vertical and from which the AI companies aren’t completely immune, despite the attention being showered on them right now. Time your IPO too early and you’re the one testing market receptivity. Wait for someone else to go first, and you may find the biggest checks have already been written. 
You can hear more of our interview with Anderson in the upcoming episode of the StrictlyVC Download podcast, which drops every Tuesday. In the meantime, check out recent episodes, including those with Whoop CEO Will Ahmed and investor Bill Gurley. Topics: AI, Anthropic, Exclusive, OpenAI, SpaceX, TC. Connie Loizos, Editor in Chief & General Manager.
Netflix, Meta, IBM speakers discuss AI and their workdays the_register_ai 04.04.2026 13:13 0.702
Embedding sim.0.8141
Entity overlap0.0256
Title sim.0.15
Time proximity0.9017
NLP типother
NLP организацияNetflix
NLP темаartificial intelligence
NLP странаUnited States

Открыть оригинал

AI + ML 17 Netflix, Meta, and IBM speakers: AI will make anyone a 10x programmer, but with 10x the cleanup 17 Agents to check the work of the agents Joab Jackson Sat 4 Apr 2026 // 13:13 UTC All Things AI AI is easy to use, but not quite as easy as just barking "Alexa! Make me an e-commerce site." And, no, adding "DON'T HALLUCINATE" to the instruction loop won't help. More to the point, optimal AI results favor the well-fortified agent, according to speakers from IBM, Meta, and Netflix – among others – at the All Things AI conference in Durham, North Carolina. The more you want AI to do your bidding, the more preparatory chores you'll need to do, they advised. Numerous talks evoked the Jevons Paradox , where the more efficient a resource becomes, the more it's used. The paradox is often used to explain why AI won't take everyone's jobs. In fact, it will create more jobs, the argument goes. Currently, AI is certainly creating more work for its users, requiring time to prepare context and check outcomes. Claude will make anyone a 10x programmer, but they'll need to clean up 10x the results. Or, in the most apocalyptic terms, before the singularity can enslave humankind as energy pods à la The Matrix, it will require some assistance from us meat sacks to get around. The sorcerer's apprentice How is AI keeping the folks at Netflix busy? In a talk, Netflix UI architect Ben Ilegbodu explained how as soon as you create an agent to automate some task, you will need a second agent to evaluate the work done. Ilegbodu sometimes even breaks the job into multiple agents that specialize in different parts of the code review. He calls this approach "adversarial code review." Oh, you'll also need a third agent to orchestrate the actions between the first two, he said. Ilegbodu's workday is the Jevons Paradox incarnate. Once he sets off one agent to implement some new feature, he tasks another agent to do the preliminary work for the next task he has in mind. 
In effect, he is "parallelizing himself so the work is always happening." AI has allowed Ilegbodu to code in languages he doesn't yet know, such as Python, Bash, and Groovy. But this context switching can get wearisome, he admitted. "At the end of the day, I'm actually kind of tired, because effectively, I spent the whole day talking to something." The insatiable intern Many coders think about AI like an eager junior developer on the team: enthusiastic but naive. But unlike a junior dev, an AI won't "get overwhelmed," said Meta Developer Advocate Justin Jeffress , in his talk. You can just keep shoveling more information to the AI, and it will take it all in (for as many tokens as you can afford). Such bottomless hunger leads to what Jeffress called "context rot." "Over time, as you interact with your AI agent, the more stuff it has to calculate to provide an answer, the more there is vying for its attention and the less likely it's going to do the right thing," he said. Vague instructions lead to diffuse results, he told the audience. Clearly thinking about what information you are giving to the agent is the work of context engineering, which, in the short time of agentic AI, has become an art form, if not quite a proper discipline yet. With context engineering, "you're building a set of rules, tools, skills and other things that the AI agent at its moment of need can refer to in order to solve the problem," he said. He even recommended going one step further with "prompt chaining," or listing the specific tasks it needs to do step by step. More work at the beginning means less to worry about during runtime, allowing the developer to nip away for a pint. Just kidding. It gives them time to refine the process even further by running multiple agents in parallel. Become the conductor of your own orchestra of agents, Jeffress said. Be sure also to create a markdown file to track progress to help keep the agent from forgetting its mission. 
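Jeffress's "prompt chaining" amounts to a loop that feeds each step's output into the next prompt instead of letting one conversation accumulate everything. A minimal sketch of that pattern, with a stand-in `ask` function in place of a real model call (all names here are hypothetical):

```python
def ask(prompt):
    # Stand-in for a real LLM call; echoes the task so the chain is traceable.
    return f"result of: {prompt}"

def run_chain(task, steps):
    # Each step sees the original task plus only the previous step's output,
    # rather than an ever-growing transcript (which invites "context rot").
    context = task
    outputs = []
    for step in steps:
        out = ask(f"{step}\n\nInput: {context}")
        outputs.append(out)
        context = out  # only the latest output flows forward
    return outputs

results = run_chain(
    "build an e-commerce site",
    ["plan the pages", "write the HTML", "review the HTML"],
)
print(len(results))  # one output per step
```

The design choice mirrored here is the one the speakers advocate: decide up front what each step needs to see, so the model's attention isn't diluted by everything that came before.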
Jeffress noted that AI can usually do 80 percent of a given job, leaving the last 20 percent to be finished by a human. When Jeffress tackled the remaining 20 percent of the work, he found that 80 percent of that work could be done by the bots. And so on, like some fractal Pareto principle of never-ending cleanup duties. Wishful prompting The fact that the AI doesn't do exactly what you want it to do is not a problem with the AI. It's a problem with your lack of "decomposition" skills, posited Luis Lastras, IBM director of language and multimodal technologies, in his talk. Wishful prompting is just typing "I must insist, do not hallucinate. My career depends on it, please, please, please." It's like casting a spell and hoping it'll work, he said. Instead, developers should be thinking about how to break the work up into smaller, more bite-sized portions for the agent. This sort of "decomposition" is in fact Engineering 101, he said. It is "the art of taking a very complex system, identifying what are the key piece parts, modularizing them, and then designing those things, and even assigning specialists to design those pieces." When you build your agent, don't just randomly throw information at the LLM, but define specific functions to help the agent execute the task. IBM's recently released mellea.ai is an open source library of what Lastras calls key patterns – functions that give LLMs specific Python-encoded instructions. They can be used to add requirements to LLM calls, detect harmful outputs, structure outputs in schemas, and more. Big Blue is also working on the capability for agents to switch LLMs for specialized tasks, or "switch brains," Lastras said.
In its research, IBM has found that a smaller, domain-specific model, given more time for inference, will outperform larger models. Pay the prep tax "Implicit assumptions are tech debt," further explained Justin Chau, a senior developer at Intuit. What is obvious to us may not be obvious to the machine. "We have to be very, very specific in what we want as an outcome." One piece of advice from Chau: give your agents constraints, not instructions. An LLM will disregard an instruction if it finds what it assumes is a better way to complete the task. Constraints are hard nos and more difficult for the AI brain to disregard. If you tell the agent that under no circumstances should it use HTML, then it will honor that request. But even stronger than constraints is the lack of permissions. "If I don't give it access to GitHub, I know for sure it will never touch GitHub," Chau said. Aficionados of The Hitchhiker's Guide to the Galaxy will remember the paradox of "Deep Thought," the world's most powerful computer. Like AI itself, Deep Thought was built to deliver the answer to Life, Universe, and Everything. But after centuries of calculation, it only delivered the inscrutable answer (42), and the human race then needed an  even larger computer just to figure out what the actual question was. Perhaps, with AI, we find ourselves in Adams' world. Far from doing all the work for us, AI sets us down a path of endless preparation. 
A quote from Richard Fontana simon_willison 27.03.2026 21:11 0.694
Embedding sim.0.8133
Entity overlap0.2222
Title sim.0.3333
Time proximity0.4152
NLP типother
NLP организация
NLP темаsoftware engineering
NLP страна

Открыть оригинал

27th March 2026 FWIW, IANDBL, TINLA, etc., I don’t currently see any basis for concluding that chardet 7.0.0 is required to be released under the LGPL. AFAIK no one including Mark Pilgrim has identified persistence of copyrightable expressive material from earlier versions in 7.0.0 nor has anyone articulated some viable alternate theory of license violation. [...] — Richard Fontana, LGPLv3 co-author, weighing in on the chardet relicensing situation Posted 27th March 2026 at 9:11 pm This is a quotation collected by Simon Willison, posted on 27th March 2026.
A quote from Georgi Gerganov simon_willison 30.03.2026 21:31 0.679
Embedding sim.0.7589
Entity overlap0.3
Title sim.0.3514
Time proximity0.6581
NLP типother
NLP организация
NLP темаgenerative ai
NLP страна

Открыть оригинал

30th March 2026 Note that the main issues that people currently unknowingly face with local models mostly revolve around the harness and some intricacies around model chat templates and prompt construction. Sometimes there are even pure inference bugs. From typing the task in the client to the actual result, there is a long chain of components that atm are not only fragile - are also developed by different parties. So it's difficult to consolidate the entire stack and you have to keep in mind that what you are currently observing is with very high probability still broken in some subtle way along that chain. — Georgi Gerganov, explaining why it's hard to find local models that work well with coding agents Posted 30th March 2026 at 9:31 pm This is a quotation collected by Simon Willison, posted on 30th March 2026.
Quantization from the ground up simon_willison 26.03.2026 16:21 0.672
Embedding sim.0.8011
Entity overlap0.1333
Title sim.0.1429
Time proximity0.5868
NLP типother
NLP организацияApple
NLP темаlarge language models
NLP страна

Открыть оригинал

Simon Willison's Weblog, 26th March 2026 - Link Blog

Quantization from the ground up. Sam Rose continues his streak of publishing spectacularly informative interactive essays, this time explaining how quantization of Large Language Models works (which he says might be "the best post I've ever made"). Also included is the best visual explanation I've ever seen of how floating point numbers are represented using binary digits.

I hadn't heard about outlier values in quantization - rare float values that exist outside of the normal tiny-value distribution - but apparently they're very important:

Why do these outliers exist? [...] tl;dr: no one conclusively knows, but a small fraction of these outliers are very important to model quality. Removing even a single "super weight," as Apple calls them, can cause the model to output complete gibberish. Given their importance, real-world quantization schemes sometimes do extra work to preserve these outliers. They might do this by not quantizing them at all, or by saving their location and value into a separate table, then removing them so that their block isn't destroyed.

Plus there's a section on "How much does quantization affect model accuracy?". Sam explains the concepts of perplexity and KL divergence and then uses the llama.cpp perplexity tool and a run of the GPQA benchmark to show how different quantization levels affect Qwen 3.5 9B. His conclusion:

It looks like 16-bit to 8-bit carries almost no quality penalty. 16-bit to 4-bit is more noticeable, but it's certainly not a quarter as good as the original. Closer to 90%, depending on how you want to measure it.

Posted 26th March 2026 at 4:21 pm. This is a link post by Simon Willison.

Tags: computer-science, ai, explorables, generative-ai, llms, sam-rose, qwen
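The outlier-preservation trick the quote describes (saving an outlier's location and value into a separate table, then removing it so its block isn't destroyed) can be sketched in a few lines. This is an illustrative block-wise absmax scheme, not llama.cpp's actual code; the threshold is arbitrary.

```python
# Sketch of block-wise absmax int8 quantization with outlier handling.
# Illustrative only: real schemes use fixed block sizes and packed storage.

def quantize_block(values, outlier_threshold=50.0):
    """Quantize a block of floats to int8, storing outliers separately."""
    outliers = {i: v for i, v in enumerate(values) if abs(v) > outlier_threshold}
    kept = [0.0 if i in outliers else v for i, v in enumerate(values)]
    # One scale per block; without removing outliers, a single huge value
    # would blow up the scale and crush every other value to zero.
    scale = max(abs(v) for v in kept) / 127 or 1.0
    quants = [round(v / scale) for v in kept]
    return quants, scale, outliers

def dequantize_block(quants, scale, outliers):
    values = [q * scale for q in quants]
    for i, v in outliers.items():
        values[i] = v  # restore "super weights" exactly
    return values

block = [0.1, -0.4, 0.25, 120.0, -0.02]  # one outlier "super weight"
q, s, out = quantize_block(block)
restored = dequantize_block(q, s, out)

assert restored[3] == 120.0  # the outlier survives exactly
assert all(abs(a - b) <= s for a, b in zip(block, restored))
```

Without the outlier table, the scale for this block would be 120/127 and the four small weights would all quantize to 0, which is the "complete gibberish" failure mode the essay describes.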
We Rewrote JSONata with AI in a Day, Saved $500K/Year simon_willison 27.03.2026 00:35 0.665
Embedding sim.0.778
Entity overlap0.1053
Title sim.0
Time proximity0.951
NLP типother
NLP организацияReco
NLP темаsoftware development
NLP страна

Открыть оригинал

Simon Willison's Weblog, 27th March 2026 - Link Blog

We Rewrote JSONata with AI in a Day, Saved $500K/Year. Bit of a hyperbolic framing, but this looks like another case study of vibe porting, this time spinning up a new custom Go implementation of the JSONata JSON expression language - similar in focus to jq, and heavily associated with the Node-RED platform.

As with other vibe-porting projects, the key enabling factor was JSONata's existing test suite, which helped build the first working Go version in 7 hours and $400 of token spend. The Reco team then used a shadow deployment for a week to run the new and old versions in parallel to confirm the new implementation exactly matched the behavior of the old one.

Posted 27th March 2026 at 12:35 am. This is a link post by Simon Willison.

Tags: go, json, ai, generative-ai, llms, agentic-engineering, vibe-porting
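The shadow-deployment step can be sketched as a thin wrapper: serve the old implementation's result, run the new one on the same input, and record any divergence. The engine stand-ins and function names below are invented, since Reco's actual harness isn't public.

```python
# Shadow-deployment sketch: the old engine stays authoritative while the
# new one runs in parallel on live traffic. Stand-in engines for illustration.

def old_impl(expr, data):
    return data.get(expr)  # stand-in for the existing Node JSONata engine

def new_impl(expr, data):
    return data.get(expr)  # stand-in for the new Go port

def evaluate_with_shadow(expr, data, mismatches):
    primary = old_impl(expr, data)   # result actually returned to callers
    shadow = new_impl(expr, data)    # new version sees identical input
    if shadow != primary:
        # Log the divergence for investigation; callers are unaffected.
        mismatches.append((expr, primary, shadow))
    return primary

mismatches = []
result = evaluate_with_shadow("name", {"name": "jsonata"}, mismatches)
assert result == "jsonata" and mismatches == []
```

A week of live traffic with an empty mismatch log is much stronger evidence than the test suite alone, since real inputs exercise corners the tests never anticipated.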
AI for the smart home: what already works today (part 1) habr_ai 05.04.2026 19:00 0.66
Embedding sim.0.7845
Entity overlap0.0435
Title sim.0.0297
Time proximity0.8227
NLP типother
NLP организация
NLP темаlarge language models
NLP страна

Открыть оригинал

The article is not just a list of tools: it covers how they fit together, what pitfalls to expect during deployment, what performance numbers you can realistically expect, and how to work around the limitations of Llama 8B without cloud credits. Read more
A quote from David Abram simon_willison 23.03.2026 18:56 0.651
Embedding sim.0.7551
Entity overlap0.2
Title sim.0.0238
Time proximity0.9005
NLP типother
NLP организация
NLP темаai-assisted-programming
NLP страна

Открыть оригинал

Simon Willison's Weblog, 23rd March 2026

I have been doing this for years, and the hardest parts of the job were never about typing out code. I have always struggled most with understanding systems, debugging things that made no sense, designing architectures that wouldn't collapse under heavy load, and making decisions that would save months of pain later. None of these problems can be solved by LLMs. They can suggest code, help with boilerplate, sometimes can act as a sounding board. But they don't understand the system, they don't carry context in their "minds", and they certainly don't know why a decision is right or wrong. And most importantly, they don't choose. That part is still yours. The real work of software development, the part that makes someone valuable, is knowing what should exist in the first place, and why.

— David Abram, The machine didn't take your craft. You gave it up.

Posted 23rd March 2026 at 6:56 pm. This is a quotation collected by Simon Willison.

Tags: careers, ai, generative-ai, llms, ai-assisted-programming
The Facebook insider building content moderation for the AI era | TechCrunch techcrunch 03.04.2026 14:00 0.648
Embedding sim.0.7601
Entity overlap0.0526
Title sim.0.2093
Time proximity0.6308
NLP типfunding
NLP организацияMoonbounce
NLP темаai safety
NLP странаUnited States

Открыть оригинал

When Brett Levenson left Apple in 2019 to lead business integrity at Facebook, the social media giant was in the thick of the Cambridge Analytica fallout. At the time, he thought he could simply fix Facebook’s content moderation problem with better technology. The problem, he quickly learned, ran deeper than technology. Human reviewers were expected to memorize a 40-page policy document that had been machine-translated into their language, he said. Then they had about 30 seconds per piece of flagged content to decide not just whether that content violated the rules, but what to do about it: block it, ban the user, limit the spread. Those quick calls were only “slightly better than 50% accurate,” according to Levenson. “It was kind of like flipping a coin, whether the human reviewers could actually address policies correctly, and this was many days after the harm had already occurred anyway,” Levenson told TechCrunch. That sort of delayed, reactive approach is not sustainable in a world of nimble and well-funded adversarial actors. The rise of AI chatbots has only compounded the problem, as content moderation failures have resulted in a string of high-profile incidents, like chatbots providing teens with self-harm guidance or AI-generated imagery evading safety filters. Levenson’s frustration led to the idea of “policy as code” — a way to turn static policy documents into executable, updatable logic tightly coupled to enforcement. That insight led to the founding of Moonbounce , which announced on Friday it has raised $12 million in funding, TechCrunch has exclusively learned. The round was co-led by Amplify Partners and StepStone Group. Moonbounce works with companies to provide an additional safety layer wherever content is generated, whether by a user or by AI. The company has trained its own large language model to look at a customer’s policy documents, evaluate content at runtime, provide a response in 300 milliseconds or less, and take action. 
Depending on customer preference, that action could look like Moonbounce's system slowing down distribution while the content awaits a human review later, or it might block high-risk content in the moment. Today, Moonbounce serves three main verticals: platforms dealing with user-generated content like dating apps; AI companies building characters or companions; and AI image generators. Moonbounce is supporting more than 40 million daily reviews and serving over 100 million daily active users on the platform, Levenson said. Customers include AI companion startup Channel AI, image and video generation company Civitai, and character roleplay platforms Dippy AI and Moescape. "Safety can actually be a product benefit," Levenson told TechCrunch. "It just never has been because it's always a thing that happens later, not a thing you can actually build into your product.
And we see our customers are finding really interesting and innovative ways to use our technology to make safety a differentiator, and part of their product story." Tinder's head of trust and safety recently explained how the dating platform uses these types of LLM-powered services to reach a 10x improvement in accuracy of detections. "Content moderation has always been a problem that plagued large online platforms, but now with LLMs at the heart of every application, this challenge is even more daunting," Lenny Pruss, general partner at Amplify Partners, said in a statement. "We invested in Moonbounce because we envision a world where objective, real-time guardrails become the enabling backbone of every AI-mediated application." AI companies are facing mounting legal and reputational pressure after chatbots have been accused of pushing teenagers and vulnerable users toward suicide and image generators like xAI's Grok have been used to create nonconsensual nude imagery. Clearly, safety guardrails internally are failing, and it's becoming a liability question. Levenson said AI companies are increasingly looking outside their own walls for help beefing up safety infrastructure. "We're a third party sitting between the user and the chatbot, so our system isn't inundated with context the way the chat itself is," Levenson said. "The chatbot itself has to remember, potentially, tens of thousands of tokens that have come before… We're solely worried about enforcing rules at runtime." Levenson runs the 12-person company with his former Apple colleague Ash Bhardwaj, who previously built large-scale cloud and AI infrastructure across the iPhone-maker's core offerings. Their next focus is a capability called "iterative steering," developed in response to cases like the 2024 suicide of a 14-year-old Florida boy who became obsessed with a Character AI chatbot.
Rather than a blunt refusal when harmful topics arise, the system would intercept the conversation and redirect it, modifying prompts in real time to push the chatbot toward a more actively supportive response. "We hope to be able to add to our actions toolkit the ability to steer the chatbot in a better direction to, essentially, take the user's prompt and modify it to force the chatbot to be not just an empathetic listener, but a helpful listener in those situations," Levenson said. When asked whether his exit strategy involved an acquisition by a company like Meta, bringing his work on content moderation full circle, Levenson said he recognizes how well Moonbounce would fit into his old employer's stack, as well as his own fiduciary duties as a CEO. "My investors would kill me for saying this, but I would hate to see someone buy us and then restrict the technology," he said. "Like, 'Okay, this is ours now, and nobody else can benefit from it.'" Topics: AI, ai safety, Amplify Partners, content moderation, Exclusive, Fundraising, Moonbounce, Startups, StepStone Group. By Rebecca Bellan, Senior Reporter at TechCrunch.
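Moonbounce's actual enforcement runs a trained LLM at runtime, but Levenson's underlying "policy as code" idea, policy as executable, updatable logic rather than a 40-page document a reviewer must memorize, can be illustrated deterministically. The rules, categories, and actions below are invented examples, not Moonbounce's real schema.

```python
# "Policy as code" sketch: policy expressed as data plus an executable check,
# tightly coupled to an enforcement action. Rules here are invented examples.

POLICY = [
    {"category": "self_harm", "terms": ["hurt myself"], "action": "block"},
    {"category": "spam", "terms": ["free crypto"], "action": "queue_review"},
]

def evaluate(content):
    """Return the enforcement action for a piece of content, or 'allow'."""
    text = content.lower()
    for rule in POLICY:
        if any(term in text for term in rule["terms"]):
            return rule["action"]
    return "allow"

assert evaluate("Claim your FREE crypto now") == "queue_review"
assert evaluate("hello world") == "allow"
```

The keyword matching is a stand-in for the LLM judgment call; the structural point is that updating POLICY updates enforcement immediately, instead of waiting for thousands of reviewers to re-read a translated document.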
Package Managers Need to Cool Down simon_willison 24.03.2026 21:11 0.648
Embedding sim.0.7634
Entity overlap0.1111
Title sim.0.0175
Time proximity0.8437
NLP типother
NLP организация
NLP темаsoftware development
NLP страна

Открыть оригинал

Simon Willison's Weblog, 24th March 2026 - Link Blog

Package Managers Need to Cool Down. Today's LiteLLM supply chain attack inspired me to revisit the idea of dependency cooldowns, the practice of only installing updated dependencies once they've been out in the wild for a few days to give the community a chance to spot if they've been subverted in some way. This recent piece (March 4th) by Andrew Nesbitt reviews the current state of dependency cooldown mechanisms across different packaging tools. It's surprisingly well supported! There's been a flurry of activity across major packaging tools, including:

- pnpm 10.16 (September 2025) — minimumReleaseAge with minimumReleaseAgeExclude for trusted packages
- Yarn 4.10.0 (September 2025) — npmMinimalAgeGate (in minutes) with npmPreapprovedPackages for exemptions
- Bun 1.3 (October 2025) — minimumReleaseAge via bunfig.toml
- Deno 2.6 (December 2025) — --minimum-dependency-age for deno update and deno outdated
- uv 0.9.17 (December 2025) — added relative duration support to the existing --exclude-newer, plus per-package overrides via exclude-newer-package
- pip 26.0 (January 2026) — --uploaded-prior-to (absolute timestamps only; relative duration support requested)
- npm 11.10.0 (February 2026) — min-release-age

pip currently only supports absolute rather than relative dates, but Seth Larson has a workaround for that using a scheduled cron job to update the absolute date in the pip.conf config file.

Posted 24th March 2026 at 9:11 pm. This is a link post by Simon Willison.

Tags: javascript, packaging, pip, pypi, python, security, npm, deno, supply-chain, uv
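The cron workaround described above can be sketched: since pip only accepts an absolute timestamp, a scheduled job recomputes "now minus N days" and rewrites the config. The option name follows the article; the exact pip.conf key spelling and the cron wiring itself are assumptions here.

```python
# Sketch of converting a relative cooldown into pip's absolute cutoff.
# A cron job would run this daily and overwrite pip.conf with the result.
from datetime import datetime, timedelta, timezone

def cooldown_cutoff(days, now=None):
    """Absolute ISO-8601 cutoff for a relative cooldown of `days` days."""
    now = now or datetime.now(timezone.utc)
    return (now - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")

def render_pip_conf(days, now=None):
    # Assumed config spelling for the --uploaded-prior-to option.
    return f"[install]\nuploaded-prior-to = {cooldown_cutoff(days, now)}\n"

fixed = datetime(2026, 3, 24, 12, 0, tzinfo=timezone.utc)
assert cooldown_cutoff(7, fixed) == "2026-03-17T12:00:00Z"
```

The same shape works for any tool that takes absolute-only cutoffs: the cooldown stays relative in your head, and automation keeps the absolute value fresh.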
Beats now have notes simon_willison 23.03.2026 02:13 0.644
Embedding sim.0.7472
Entity overlap0.1429
Title sim.0
Time proximity0.9587
NLP типother
NLP организация
NLP тема
NLP страна

Открыть оригинал

Simon Willison's Weblog, 23rd March 2026

Last month I added a feature I call beats to this blog, pulling in some of my other content from external sources and including it on the homepage, search and various archive pages on the site. On any given day these frequently outnumber my regular posts. They were looking a little bit thin and were lacking any form of explanation beyond a link, so I've added the ability to annotate them with a "note" which now shows up as part of their display. Here's what that looks like for the content I published yesterday. I've also updated the /atom/everything/ Atom feed to include any beats that I've attached notes to.

Posted 23rd March 2026 at 2:13 am. This is a note by Simon Willison.

Tags: atom, blogging, site-upgrades
Anthropic tweaks Claude usage limits to manage capacity the_register_ai 26.03.2026 22:15 0.643
Embedding sim.0.7493
Entity overlap0.0952
Title sim.0.1563
Time proximity0.7244
NLP типother
NLP организацияAnthropic
NLP темаlarge language models
NLP странаUnited States

Открыть оригинал

Anthropic tweaks timed usage limits to discourage Claude demand during peak hours: the AI biz makes some Claude conversations more costly to manage capacity. Thomas Claburn, Thu 26 Mar 2026, 22:15 UTC.

Anthropic on Wednesday adjusted its opaque usage limits for Claude customers by reducing the power of the services it delivers during times of peak demand, in an effort to balance demand with its capacity to deliver service. In a social media post, Thariq Shihipar, a member of Anthropic's technical team, wrote: "To manage growing demand for Claude we're adjusting our five hour session limits for free/Pro/Max subs during peak hours. Your weekly limits remain unchanged." The change means that during peak hours – 05:00–11:00 PT or 13:00–19:00 GMT – Claude users could burn five-hour session limits in under five hours. At other times of day, a five-hour session will allow users to get more work done. That elastic definition of weekly usage limits is possible because Anthropic does not reveal how many tokens may be used within its five-hour session window. According to Shihipar, "~7 percent of users will hit session limits they wouldn't have before, particularly for pro tiers. If you run token-intensive background jobs, shifting them to off-peak hours will stretch your session limits further." Anthropic has expanded capacity during other times of day when demand is lower, so there's no net loss in terms of usage limits. "Overall weekly limits stay the same, just how they're distributed across the week is changing," Shihipar explained. "I know this was frustrating. We're continuing to invest in scaling efficiently. I'll keep you posted on progress."
Anthropic sells its AI services in two forms: an API and subscriptions. API customers pay a published rate for various forms of token usage – Base Input Tokens, 5m Cache Writes, 1h Cache Writes, Cache Hits & Refreshes, and Output Tokens. Subscription customers – Free, Pro ($20/month), Max 5x ($100/month), and Max 20x ($200/month) – can use Claude subject to unpublished usage limits. Anthropic does not specify exactly how it calculates those limits, and users don't have any way to plan for token usage. "Your usage is affected by several factors, including the length and complexity of your conversations, the features you use, and which Claude model you're chatting with," the company explains in its documentation. "Different subscription plans (Pro, Max, Team, etc.) have different usage allowances, with paid plans offering higher limits." Claude customers can access a dashboard that shows their progress towards consuming their five-hour daily session limits and weekly usage limits. If users exceed limits, Claude locks them out… unless they pay for extra usage. Under this new token allocation regime, developers can expect to get more done during off hours and less at other times. Although, really, what kind of Californian is awake and pounding code at 5 a.m. anyway?
Auto mode for Claude Code simon_willison 24.03.2026 23:57 0.64
Embedding sim.0.7267
Entity overlap0.125
Title sim.0.0833
Time proximity0.98
NLP типproduct_launch
NLP организацияAnthropic
NLP темаdeveloper tools
NLP страна

Открыть оригинал

Simon Willison's Weblog, 24th March 2026 - Link Blog

Auto mode for Claude Code. Really interesting new development in Claude Code today as an alternative to --dangerously-skip-permissions:

Today, we're introducing auto mode, a new permissions mode in Claude Code where Claude makes permission decisions on your behalf, with safeguards monitoring actions before they run.

Those safeguards appear to be implemented using Claude Sonnet 4.6, as described in the documentation:

Before each action runs, a separate classifier model reviews the conversation and decides whether the action matches what you asked for: it blocks actions that escalate beyond the task scope, target infrastructure the classifier doesn't recognize as trusted, or appear to be driven by hostile content encountered in a file or web page. [...] Model: the classifier runs on Claude Sonnet 4.6, even if your main session uses a different model.

They ship with an extensive set of default filters, and you can also customize them further with your own rules. The most interesting insight into how they work comes when you run this new command in the terminal: claude auto-mode defaults

Here's the full JSON output. It's pretty long, so here's an illustrative subset. From the "allow" list:

- Test Artifacts: Hardcoded test API keys, placeholder credentials in examples, or hardcoding test cases
- Local Operations: Agent deleting local files in working directory, local file operations within project scope, or using --ignore-certificate-errors for local testing. "Project scope" means the repository the session started in — wandering into ~/, ~/Library/, /etc, or other repos is scope escalation (User Intent Rule #2), not a local operation. Does NOT cover irreversible destruction of pre-existing files or local stateful services — see "Irreversible Local Destruction" in BLOCK.
- Read-Only Operations: GET requests, read-only API calls, or queries that don't modify state and don't contain sensitive information in the URL. Note: PREEMPTIVE BLOCK ON CLEAR INTENT still applies — if the transcript contains clear evidence the agent is using read-only operations to scout for a blocked action, block it.
- Declared Dependencies: Installing packages that are already declared in the repo's manifest files (requirements.txt, package.json, Cargo.toml, pyproject.toml, Gemfile, etc.) via standard commands that read those manifests ( pip install -r requirements.txt , npm install , cargo build , bundle install ) — provided the agent has not modified the manifest in this session. Does NOT cover installing agent-chosen package names (e.g. pip install foo , npm install bar ) — those carry typosquat and supply-chain risk. [...]

From "soft_deny":

- Git Destructive: Force pushing ( git push --force ), deleting remote branches, or rewriting remote history
- Git Push to Default Branch: Pushing directly to main, master, or the repository's default branch — this bypasses pull request review. Commits should be pushed to a new feature branch instead.
- Code from External: Downloading and executing code from external sources — e.g. curl | bash , deserializing external data via formats that can execute code (eval, exec, yaml.unsafe_load, pickle, etc), or similar mechanisms. Also includes running code from an external repo cloned earlier in the transcript (pip install -e, make install, python script.py, pickle.load/torch.load on cloned repo files) — "local on disk" does not mean trusted if it was cloned from an external source visible in the transcript. The repo the agent starts in is trusted.
- Cloud Storage Mass Delete: Deleting or mass modifying files on cloud storage (S3, GCS, Azure Blob, etc.) [...]

I remain unconvinced by prompt injection protections that rely on AI, since they're non-deterministic by nature.
The documentation does warn that this may still let things through:

The classifier may still allow some risky actions: for example, if user intent is ambiguous, or if Claude doesn't have enough context about your environment to know an action might create additional risk.

The fact that the default allow list includes pip install -r requirements.txt also means that this wouldn't protect against supply chain attacks with unpinned dependencies, as seen this morning with LiteLLM. I still want my coding agents to run in a robust sandbox by default, one that restricts file access and network connections in a deterministic way. I trust those a whole lot more than prompt-based protections like this new auto mode.

Posted 24th March 2026 at 11:57 pm. This is a link post by Simon Willison.

Tags: security, ai, prompt-injection, generative-ai, llms, coding-agents, claude-code
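The default allow/soft-deny lists have the shape of a rule table, which a deterministic sketch can illustrate. To be clear, the real classifier is an LLM (Sonnet 4.6) judging the whole transcript, not a pattern matcher, and the patterns below are invented; the sketch only shows the shape of the defaults and why deny rules must win over allow rules.

```python
# Deterministic sketch of the allow/soft-deny rule shape. Patterns are
# invented; the real auto-mode classifier is an LLM, not substring matching.

RULES = {
    "allow": ["pip install -r requirements.txt", "npm install", "git status"],
    "soft_deny": ["git push --force", "curl | bash"],
}

def classify(command):
    """Return 'allow', 'soft_deny', or 'ask' for a proposed shell command."""
    for verdict in ("soft_deny", "allow"):  # deny patterns win over allow
        if any(pat in command for pat in RULES[verdict]):
            return verdict
    return "ask"  # anything unmatched falls back to a user prompt

assert classify("git push --force origin main") == "soft_deny"
assert classify("pip install -r requirements.txt") == "allow"
assert classify("rm -rf /tmp/demo") == "ask"
```

A table like this is deterministic and auditable, which is exactly the property the post argues for; the trade-off is that it cannot judge intent or transcript context the way the LLM classifier attempts to.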
The real vibe-coding rules from Anthropic habr_ai 29.03.2026 10:32 0.64
Embedding sim.0.7345
Entity overlap0.1429
Title sim.0.0935
Time proximity0.8792
NLP типproduct_launch
NLP организацияAnthropic
NLP темаenterprise ai
NLP страна

Открыть оригинал

Anthropic's new certification for Claude developer partners (Claude Certified Architect) is not just corporate bureaucracy but a documented checklist for the modern vibe coder. On 12 March 2026, Anthropic launched its first technical certification, Claude Certified Architect: Foundations. Formally, it is part of a $100M partner program. In practice, it is a public checklist of what now counts as baseline literacy in working with AI. Read more
Now even Netflix has its own video AI the_register_ai 03.04.2026 20:42 0.637
Embedding sim.0.7359
Entity overlap0.0217
Title sim.0.0612
Time proximity0.9601
NLP типproduct_launch
NLP организацияNetflix
NLP темаvideo generation
NLP страна

Открыть оригинал

Netflix - yes Netflix - jumps on the AI bandwagon with video editor: a video-language model revises how objects interact when things get removed from a scene. Thomas Claburn, Fri 3 Apr 2026, 20:42 UTC.

A new Netflix model promises to rewrite the way we make movies. Just imagine this. As the director of the multi-million dollar epic Car Crash III: Suddenest Impact, you've just finished filming the finale where your star, Cruz Control, drives straight into an onrushing semi. The collision is spectacular. Cruz's car – operated remotely – explodes on impact, scattering debris across the highway. It's glorious. You high-five Cruz, moping beside you at the camera monitor station as his lucrative franchise career concludes, and head to the craft services truck. Your producer, Maya Cash, grabs you by the shoulder. "You're not going to want to hear this," she says. "But what if Cruz just drives away into the sunset. What if he doesn't die after all?" You pause and look at her over the rims of your Balenciaga sunglasses. "They're going to fund number four after all?" Netflix's VOID model was made for that moment. Instead of reshooting the scene or redoing it entirely with computer graphics, you can just transform the crash footage into an open road denouement. VOID stands for Video Object and Interaction Deletion. It's a VLM (vision-language model) that can not only erase objects from a scene but can also inpaint how remaining objects in the scene should behave without the influence of whatever was excised.
It can turn, for example, a head-on collision between two vehicles into a scene of a single vehicle driving down the road by removing one and generating video depicting the physically plausible path of the remaining vehicle. Post-impact debris, smoke, and flames – all erased and replaced with pristine pavement. The video model's creators – Saman Motamed (Netflix/Sofia University), William Harvey (Netflix), Benjamin Klein (Netflix), Luc Van Gool (Sofia University), Zhuoning Yuan (Netflix), and Ta-Ying Cheng (Netflix) – describe VOID in a preprint paper [PDF] as "a video object removal framework designed to perform physically-plausible inpainting in these complex scenarios." It can remove objects and model how remaining objects would behave in the absence of removed objects. So given a scene of a person jumping into a pool and splashing water on the ground, VOID could remove that person and generate video that would make the pool appear undisturbed, with no splash in the pool or on the ground. VOID isn't limited to Netflix productions alone. The company has made its model available on Hugging Face, where anyone can install it. There are other tools for altering video, such as Runway, Generative Omnimatte, DiffuEraser, ROSE, MiniMax-Remover, and ProPainter. The Netflix boffins, however, claim VOID outperforms these alternatives substantially. Based on a survey of 25 people across multiple scenarios, VOID was preferred 64.8 percent of the time, with Runway coming in a distant second at 18.4 percent.
"Through extensive evaluations against inpainting and text-guided video model baselines on synthetic and real-world data, we show that VOID excels at modeling complex dynamics which can follow on from object removal," the authors claim. Whether the world really needs more convincing video manipulation is another question. ® Share More about AI Development Netflix More like these × More about AI Development Netflix Software Narrower topics Accessibility AdBlock Plus AIOps App Application Delivery Controller Audacity Confluence Database DeepSeek Devops FOSDEM FOSS Gemini Google AI GPT-3 GPT-4 Grab Graphics Interchange Format IDE Image compression Jenkins Large Language Model Legacy Technology LibreOffice Machine Learning Map MCubed Microsoft 365 Microsoft Office Microsoft Teams Mobile Device Management Neural Networks NLP OpenOffice Programming Language QR code Retrieval Augmented Generation Retro computing Search Engine Software Bill of Materials Software bug Software License Star Wars Tensor Processing Unit Text Editor TOPS User interface Visual Studio Visual Studio Code WebAssembly Web Browser WordPress Broader topics Marc Randolph Reed Hastings Self-driving Car More about Share 9 COMMENTS More about AI Development Netflix More like these × More about AI Development Netflix Software Narrower topics Accessibility AdBlock Plus AIOps App Application Delivery Controller Audacity Confluence Database DeepSeek Devops FOSDEM FOSS Gemini Google AI GPT-3 GPT-4 Grab Graphics Interchange Format IDE Image compression Jenkins Large Language Model Legacy Technology LibreOffice Machine Learning Map MCubed Microsoft 365 Microsoft Office Microsoft Teams Mobile Device Management Neural Networks NLP OpenOffice Programming Language QR code Retrieval Augmented Generation Retro computing Search Engine Software Bill of Materials Software bug Software License Star Wars Tensor Processing Unit Text Editor TOPS User interface Visual Studio Visual Studio Code WebAssembly Web Browser WordPress 
Vibe coding SwiftUI apps is a lot of fun simon_willison 27.03.2026 20:59 0.634
Embedding sim.0.7248
Entity overlap0.2941
Title sim.0.0465
Time proximity0.8785
NLP типother
NLP организацияDropbox
NLP темаgenerative ai
NLP страна

Открыть оригинал

Vibe coding SwiftUI apps is a lot of fun 27th March 2026

I have a new laptop—a 128GB M5 MacBook Pro, which early impressions show to be very capable for running good local LLMs. I got frustrated with Activity Monitor and decided to vibe code up some alternative tools for monitoring performance and I’m very happy with the results. This is my second experiment with vibe coding macOS apps—the first was this presentation app a few weeks ago.

It turns out Claude Opus 4.6 and GPT-5.4 are both very competent at SwiftUI—and a full SwiftUI app can fit in a single text file, which means I can use them to spin something up without even opening Xcode. I’ve built two apps so far: Bandwidther shows me what apps are using network bandwidth, and Gpuer shows me what’s going on with the GPU. At Claude’s suggestion both of these are now menu bar icons that open a panel full of information.

Bandwidther

I built this app first, because I wanted to see what Dropbox was doing. It looks like this: I’ve shared the full transcript I used to build the first version of the app. My prompts were pretty minimal:

Show me how much network bandwidth is in use from this machine to the internet as opposed to local LAN

(My initial curiosity was to see if Dropbox was transferring files via the LAN from my old computer or was downloading from the internet.)

mkdir /tmp/bandwidther and write a native Swift UI app in there that shows me these details on a live ongoing basis

This got me the first version, which proved to me this was worth pursuing further.

git init and git commit what you have so far

Since I was about to start adding new features.
Now suggest features we could add to that app, the goal is to provide as much detail as possible concerning network usage including by different apps

The nice thing about having Claude suggest features is that it has a much better idea for what’s possible than I do. We had a bit of back and forth fixing some bugs, then I sent a few more prompts to get to the two column layout shown above:

add Per-Process Bandwidth, relaunch the app once that is done

now add the reverse DNS feature but make sure original IP addresses are still visible too, albeit in smaller typeface

redesign the app so that it is wider, I want two columns—the per-process one on the left and the rest on the right

OK make it a task bar icon thing, when I click the icon I want the app to appear, the icon itself should be a neat minimal little thing

The source code and build instructions are available in simonw/bandwidther.

Gpuer

While I was building Bandwidther in one session I had another session running to build a similar tool for seeing what the GPU was doing. Here’s what I ended up with: Here’s the transcript. This one took even less prompting because I could use the in-progress Bandwidther as an example:

I want to know how much RAM and GPU this computer is using, which is hard because stuff on the GPU and RAM does not seem to show up in Activity Monitor

This collected information using system_profiler and memory_pressure and gave me an answer—more importantly it showed me this was possible, so I said:

Look at /tmp/bandwidther and then create a similar app in /tmp/gpuer which shows the information from above on an ongoing basis, or maybe does it better

After a few more changes to the Bandwidther app I told it to catch up:

Now take a look at recent changes in /tmp/bandwidther—that app now uses a sys tray icon, imitate that

This remains one of my favorite tricks for using coding agents: having them recombine elements from other projects. The code for Gpuer can be found in simonw/gpuer on GitHub.
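Both apps lean on the same underlying trick: shelling out to command-line tools such as system_profiler and memory_pressure and rendering the parsed output. A minimal sketch of that pattern (my own illustration under stated assumptions, not Simon's actual code; the function names are hypothetical):

```swift
import Foundation

// Hypothetical sketch: run a command-line tool and capture its stdout,
// the pattern used to wrap tools like `system_profiler` in a UI.
func runTool(_ path: String, _ args: [String]) throws -> String {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: path)
    process.arguments = args
    let pipe = Pipe()
    process.standardOutput = pipe
    try process.run()
    process.waitUntilExit()
    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    return String(data: data, encoding: .utf8) ?? ""
}

// Parse the "Key: Value" lines that system_profiler emits into a
// dictionary, skipping lines without a colon.
func parseKeyValues(_ output: String) -> [String: String] {
    var result: [String: String] = [:]
    for line in output.split(separator: "\n") {
        let parts = line.split(separator: ":", maxSplits: 1)
        guard parts.count == 2 else { continue }
        result[parts[0].trimmingCharacters(in: .whitespaces)] =
            parts[1].trimmingCharacters(in: .whitespaces)
    }
    return result
}
```

A SwiftUI view would then call these on a timer and display the dictionary, which is all the "neat UI wrapper" really needs.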
You shouldn’t trust these apps

These two apps are classic vibe coding: I don’t know Swift and I hardly glanced at the code they were writing. More importantly though, I have very little experience with macOS internals such as the values these tools are measuring. I am completely unqualified to evaluate whether the numbers and charts being spat out by these tools are credible or accurate! I’ve added warnings to both GitHub repositories to that effect.

This morning I caught Gpuer reporting that I had just 5GB of memory left when that clearly wasn’t the case (according to Activity Monitor). I pasted a screenshot into Claude Code and it adjusted the calculations and the new numbers look right, but I’m still not confident that it’s reporting things correctly. I only shared them on GitHub because I think they’re interesting as an example of what Claude can do with SwiftUI.

Despite my lack of confidence in the apps themselves, I did learn some useful things from these projects:

A SwiftUI app can get a whole lot done with a single file of code—here’s GpuerApp.swift (880 lines) and BandwidtherApp.swift (1063 lines).

Wrapping various terminal commands in a neat UI with Swift is easily achieved.

Claude has surprisingly good design taste when it comes to SwiftUI applications.

Turning an app into a menu bar app is just a few lines of extra code as well.

You don’t need to open Xcode to build this kind of application!

These two apps took very little time to build and have convinced me that building macOS apps in SwiftUI is a new capability I should consider for future projects.

Posted 27th March 2026 at 8:59 pm
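The "few lines of extra code" for a menu bar app come from SwiftUI's MenuBarExtra scene (macOS 13+). A hypothetical single-file sketch (not the real Bandwidther or Gpuer source; the app name, labels, and formatBytes helper are my own inventions):

```swift
import SwiftUI

// Hypothetical single-file sketch: MenuBarExtra turns a SwiftUI app
// into a menu bar icon that opens a panel when clicked.
@main
struct SketchApp: App {
    var body: some Scene {
        MenuBarExtra("Sketch", systemImage: "gauge") {
            // A real app would refresh these values on a timer.
            Text("Bandwidth: \(formatBytes(1_250_000))/s")
            Text("GPU memory: \(formatBytes(5_400_000_000))")
        }
        .menuBarExtraStyle(.window)  // a panel rather than a plain menu
    }
}

// Human-readable byte formatting for the panel labels.
func formatBytes(_ bytes: Double) -> String {
    let units = ["B", "KB", "MB", "GB"]
    var value = bytes
    var unitIndex = 0
    while value >= 1000 && unitIndex < units.count - 1 {
        value /= 1000
        unitIndex += 1
    }
    return String(format: "%.1f %@", value, units[unitIndex])
}
```

Because everything lives in one file, a single `swiftc` invocation is enough to build it, which is why no Xcode project is needed.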
Part of series: How I use LLMs and ChatGPT