AI Daily Digest

Oscars ban AI actors, VS Code secretly credits Copilot, and Kimi K2.6 tops coding benchmarks

Three things worth your attention today. The Academy officially banned AI-generated actors and scripts from Oscar eligibility. VS Code is inserting 'Co-Authored-by Copilot' into git commits even when users didn't use Copilot. And Kimi K2.6, an open-weights model from Moonshot AI, just beat Claude, GPT-5.5, and Gemini on a programming challenge. For solopreneurs and vibe coders, the Kimi result is the one to watch.

Kimi K2.6 - Zero Human Playbook
01

Oscars now ban AI-generated actors and scripts

Academy Awards · Oscars · AI-generated content

The Academy of Motion Picture Arts and Sciences officially ruled that AI-generated actors and scripts are ineligible for Oscar consideration. This applies to performances created by AI and screenplays generated by AI tools. The move draws a hard line between AI-assisted filmmaking and AI-generated filmmaking.

Sources: TechCrunch

Why this matters to you

This signals where the cultural line is being drawn on AI content. If you're using AI to generate videos, marketing content, or creative assets for your business, pay attention. The distinction between 'AI-assisted' (human directs, AI helps) and 'AI-generated' (AI creates the thing) is becoming a real legal and reputational boundary.

This doesn't affect most of us directly, but it's a canary in the coal mine. The 'AI-assisted vs. AI-generated' line is going to matter everywhere, not just Hollywood. Get ahead of it now by being upfront about how you use AI in your business.

Jason
02

VS Code silently credits Copilot on your commits

VS Code · GitHub Copilot · Microsoft

A GitHub pull request revealed that VS Code has been inserting 'Co-Authored-by Copilot' into git commit messages regardless of whether the user actually used Copilot for that code. The issue gained traction on Hacker News, with developers calling it out as deceptive attribution that inflates Copilot's perceived usage.

Sources: GitHub

Why this matters to you

If you're vibe coding with Cursor, Windsurf, or even VS Code with Copilot, your git history might be telling a different story than reality. Some clients and employers look at commit metadata. Having false 'Co-Authored-by Copilot' tags on code you wrote yourself is a trust problem, especially if you're freelancing or contributing to open source.
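If you want to audit your own history, here's a quick sketch. The exact trailer wording may differ from repo to repo (e.g. 'Co-authored-by: GitHub Copilot'), so the pattern below matches loosely; tighten it once you see what's actually in your log:

```shell
# Scan the last 50 commits for a Copilot co-author trailer.
# Run this inside the repo you want to check.
git log --format=full -n 50 | grep -in 'co-authored-by.*copilot'

# Show hash + subject for matching commits so you can review each one.
git log -n 50 --grep='co-authored-by.*copilot' -i --format='%h %s'
```

If anything turns up on commits where you didn't use Copilot, you can rewrite the offending messages before pushing (or document the false attribution for clients).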

This is gross. Silently inflating your AI usage stats by tagging commits you didn't use Copilot for is the kind of dark pattern that erodes trust. Microsoft should fix this immediately, and if you're a vibe coder doing client work, check your commits today.

Jason
03

Kimi K2.6 beats Claude, GPT-5.5, and Gemini at coding

Kimi K2.6 · Moonshot AI · Claude · GPT-5.5

Kimi K2.6, an open-weights model from Chinese AI lab Moonshot AI, topped Claude, GPT-5.5, and Gemini in a programming challenge. Because the weights are open, anyone can download and run the model. This is the first time a Chinese open-weights model has taken the top spot over every major closed competitor in a head-to-head coding benchmark.

Sources: ThinkPol

Why this matters to you

Open-weights models beating closed-source ones at coding is a big deal for your wallet. It means you can potentially run a frontier-quality coding model locally or through cheaper API providers, rather than paying $20+/month per seat for ChatGPT/Claude/etc. For vibe coders building products, this opens up options. For solopreneurs watching AI costs stack up, more competition means prices drop.

This is the story of the day for me. Benchmarks aren't everything, but an open-weights model beating every major closed model at coding? That's real competitive pressure. I expect ChatGPT/Claude/etc to respond with price cuts or new features soon. Competition is good for all of us.

Jason
04

Microsoft ships a legal AI agent inside Word

Microsoft · Word · Legal Agent · Copilot

Microsoft launched a new AI agent built into Word that's specifically designed for legal teams. The Legal Agent handles contract review, tracks document edits, manages negotiation history, and follows structured legal workflows rather than relying on general-purpose AI prompting.

Sources: The Verge

Why this matters to you

If you're a solopreneur or small business owner who deals with contracts, NDAs, or legal documents, this could save you real money. Right now you're probably paying a lawyer $200-500/hour to review contracts, or just signing things without reading them carefully. An AI agent that follows actual legal workflows (not just generic summarization) could handle first-pass review for the cost of a Microsoft 365 subscription.

This is exactly the kind of role-specific AI tool that matters for zero human businesses. Generic AI is fine for drafts and brainstorming, but an agent trained on actual legal workflows? That's replacing a real line item in your budget. I'll be watching to see if it actually works as advertised.

Jason
05

Specsmaxxing: writing YAML specs to prevent AI coding chaos

Specsmaxxing · YAML · AI coding agents

A blog post called Specsmaxxing hit Hacker News with practical advice on overcoming what the author calls 'AI psychosis,' the problem of AI coding agents going off the rails and building the wrong thing. The solution: writing detailed specs in YAML format before letting any AI agent touch your code. The post resonated hard with vibe coders dealing with agent drift.

Sources: HackerNews

Why this matters to you

If you've ever had Claude Code or Cursor rewrite half your codebase when you asked for a small fix, this is for you. The core insight is that AI coding agents work dramatically better when you give them a structured spec file rather than conversational instructions. YAML forces you to be specific about what you want, and the agent has a clear reference to stay on track.
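To make that concrete, here's a minimal sketch of what such a spec file might look like. The field names below are illustrative, not taken from the Specsmaxxing post; the point is that the agent gets an explicit scope, explicit constraints, and acceptance criteria it can check itself against:

```yaml
# spec.yaml - illustrative example of a task spec for an AI coding agent.
# Field names are made up for this sketch; the post's format may differ.
task: "Add rate limiting to the /api/login endpoint"
scope:
  allowed_files:
    - src/middleware/rate_limit.ts
    - src/routes/auth.ts
  forbidden:
    - "Do not modify the database schema"
    - "Do not touch unrelated routes or shared config"
behavior:
  limit: "5 attempts per minute per IP"
  on_exceeded: "return HTTP 429 with a Retry-After header"
acceptance:
  - "Existing auth tests still pass"
  - "A new test covers the 429 path"
```

Even a rough spec like this gives the agent a fixed reference to stay inside, which is the whole argument of the post.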

This is the kind of practical workflow advice that actually moves the needle for vibe coders. I've seen too many people blame the AI when the real problem is vague instructions. Writing a spec takes 10 minutes and saves you an hour of fixing AI mistakes. Read the post.

Jason

Busy day across the board. The through-line is clear: AI tools are getting more capable (Kimi K2.6), more embedded in your daily tools (Word Legal Agent, VS Code commits), and the world is starting to draw real boundaries around what counts as human vs. AI work (Oscars). If you run a zero human business, pay attention to all of it.

Frequently asked

What is Kimi K2.6 and why does it matter?

Kimi K2.6 is an open-weights AI model from Moonshot AI that beat Claude, GPT-5.5, and Gemini in a programming challenge. It matters because open-weights models you can run locally or through cheap API providers are now matching or beating closed-source models at coding tasks. More competition means better tools and lower prices for everyone.

Is VS Code adding Copilot credit to commits I wrote myself?

Yes. A GitHub pull request confirmed that VS Code has been inserting 'Co-Authored-by Copilot' into git commit messages even when users didn't use Copilot for that code. Check your recent commits with `git log --format=full` and disable the setting if you see false attributions.

Can AI-generated content still win an Oscar?

No. The Academy ruled in May 2026 that AI-generated actors and AI-generated scripts are ineligible for Oscar consideration. AI-assisted work (where a human directs and AI helps) appears to still be eligible, but fully AI-generated performances and screenplays are out.

Zero Human Skills Bundle

Teach your AI agent to run your business.

5 premium skills for OpenClaw, Claude Code, and other SKILL.md-compatible agents. Automation audits, tool stacks, content engines, email systems, and operations. $49 one-time.

Get the Skills Bundle