rssed

A collection of dev RSS feeds (a blogroll)


318 feeds
Adam Jones's Blog

Posts

- UBI doesn't solve AI power concentration
- Open source doesn't solve AI power concentration
- How AGI could kill US democracy
- Langfuse Context: All things MCP with Adam Jones (Tech Lead at Anthropic)
- MCP Connect with Anthropic, Zoopla & Alpic | London edition
- Code Execution with MCP: Fix Tool Token Bloat (Adam Jones, Anthropic)
- Code execution with MCP: Building more efficient agents
- The Model Context Protocol: Connecting AI to Everything - Adam Jones, Anthropic
- An AI safety plan that might work: an international coalition governing AGI
- What should the UK be doing to make AGI go well?
- Announcing... AdamCon!
- Overcoming problems with compute thresholds for AI regulation
- AI Model Thresholds: How Governments Can Identify Frontier AI Systems
- 5 ways to develop an AI safety plan or field strategy
- Quantitative models of AI-driven bioterrorism and lab leak biorisk
- What makes a good AI safety plan?
- Key paths, plans and strategies to AI safety success
- How you can get more value from me reviewing your work
- Preferences for communicating with me
- LinkedIn Ads: can you get more efficient marketing by overbudgeting and holding your manual bid the same?
- What is customer due diligence in AI safety?
- Export ALL Your WhatsApp Chats from Android to Your Computer!
- What might AGI look like, concretely?
- Why I disagree with Yann LeCun on whether LLMs could scale to AGI
- 300,000 people are directly creating training data for AI
- What does Yann LeCun think about AGI? A summary of his talk, "Mathematical Obstacles on the Way to Human-Level AI"
- Major UK banks are training their customers to fall for scams
- Running LLMs Locally in 2025: Speed tests on M2 Pro + 16 GB RAM
- A rough plan for AI alignment assuming short timelines
- YouTube series: How to contribute to the BlueDot Impact repo
- How to set up PostHog for a Bubble single-page application, with proper pageview tracking
- AI safety content you could create
- Policymakers don't have access to paywalled articles
- Alignment Is Not All You Need: Other Problems in AI Safety
- The post-AGI purpose problem
- Why product managers are uniquely suited for tech policy roles
- The beginners guide to investing (2025 UK edition)
- Setting up OpenWrt on the DSL-AC68U for 1 Gig speeds
- Teach-swap-explain: a learning activity for course designers to create highly effective learning experiences
- Why we run our AI safety courses
- How Does AI Learn? A Beginner’s Guide with Examples
- The standard W3C Gamepad API mapping for an Xbox controller
- What early career policymakers can learn from product managers: understanding people’s actual problems is key to effective policy
- AI Alignment June 2024 course retrospective
- OpenAI's cybersecurity is probably regulated by NIS Regulations
- Does project proposal feedback result in better final projects?
- No time for user interviews? Learn how to use empathetic role-playing to make better product decisions.
- An easy win for UK AI safety: competition law safe harbour
- Modular AI Safety courses proposal
- Summary of AI alignment participant user interviews
- An easy win for UK AI safety: supporting whistleblowers
- What we didn’t cover in our June 2024 AI Alignment course (or, an accessible list of more niche alignment research agendas)
- The AI regulator's toolbox: A list of concrete AI governance practices
- Are cheap shaver blades any good?
- Advertising to technical people: LinkedIn, Twitter, Reddit and others compared
- What advertising creatives work for technical people?
- Results from testing ad adjustments
- Diagnosing infectious diseases with CRISPR: SHERLOCK and DETECTR explained
- Reflections on my 7-day writing challenge
- How to avoid the 2 mistakes behind 89% of rejected AI alignment applications
- What do applicants mean when they say they come from LinkedIn?
- Our 2023 internal cybersecurity course
- Addressing digital harms: a right of appeal is not sufficient
- AI as a corporation (or, an intro to AI safety?)
- How to fix proof of address
- Proof of address is nonsense
- Government departments should say they don't care
- Avoiding unhelpful work as a new AI governance researcher
- Asking me for help
- Preventing overreliance: The case for deliberate AI errors in human-in-the-loop systems
- Why having a human-in-the-loop doesn't solve everything
- 7 blogs in 7 days
- What we learnt from running our AI alignment course in March 2024
- What is a lead cohort?
- What we changed for the June 2024 AI alignment course
- A thing I'd like to exist: benchmarks for train internet
- 3 articles on AI safety we’d like to exist
- Why we work in public at BlueDot Impact
- Why are people building AI systems?
- How to send Keycloak emails through Google Workspace's SMTP relay
- Preventing AI oligarchies: important, neglected and tractable work
- How might AI-enabled oligarchies arise?
- Follow-up: benchmarking Next.js server vs nginx at serving a static site, now on AWS
- Benchmarking the Next.js server vs nginx at serving a static site
- No, I don’t want to fill out your contact form
- AI alignment project ideas
- How to avoid the 4 mistakes behind 92% of rejected AI governance applications
- Do cheap GPS trackers work? A review of the GF-07, GF-09 and GF-22.
- Can we scale human feedback for complex AI tasks? An intro to scalable oversight.
- What's the best Myprotein flavour? I tried 23 of them to find out.
- ai-safety.txt: Making AI vulnerability reporting easy
- OHGOOD: A coordination body for compute governance
- What is AI alignment?
- What risks does AI pose?
- How are AI companies doing with their voluntary commitments on vulnerability reporting?