In the past few weeks, I’ve made a concerted effort to integrate AI tools into my workflow. I thought playing around with these tools would be a good way to improve my understanding of them, and also suspected they could be genuinely useful in my day-to-day job. Both predictions have turned out to be true.
Given the scarcity of firsthand accounts from journalists using these tools, I thought I’d outline my current practices. I imagine much of this would be useful in other jobs, too.
General writing and editing
While neither ChatGPT nor Claude naturally excels at journalistic writing, with some prodding each can become a valuable assistant. Claude, in particular, shows promise.
I’ve set up a Claude Project with five of my previous articles in its memory (I could add more; I’m just lazy). I asked Claude to read those articles and generate a prompt that matched my style, which now serves as the custom instructions for the project. All my writing work happens within this project, meaning Claude is guided by my custom instructions and previous writing.
(As an aside: I’m not sharing specific prompts, in part because I genuinely think they might be a competitive advantage!)
My typical workflow follows one of two paths:
I give Claude a draft and ask it for edits. I then compare its version against mine, selectively incorporating some of its edits while ignoring many others. I don’t use much of what Claude gives me, but I always end up improving a few turns of phrase. By forcing a close read of the text, the process also helps me spot things I want to change.
I also ask Claude to critique the article and point out any weak arguments, helping me preemptively address criticism from readers.
Here’s an example paragraph where Claude was particularly useful (I typically don’t use as much of what it gives me).
Original piece: Yet not a single Big Tech company has publicly supported the bill, and many have come out against it. In a bill analysis last week from the California Assembly Committee on Judiciary, Google is listed as opposing the bill unless amended, while Meta is said to have “concerns”. IBM opposes the bill altogether.
Claude version: Yet not a single Big Tech company has publicly backed the bill, and many oppose it. A recent California Assembly Committee on Judiciary analysis lists Google as opposing unless amended, Meta as having “concerns”, and IBM as fully opposing.
Final piece: Yet reality is rather different. Not a single Big Tech company has publicly supported the bill, and many oppose it. A recent California Assembly Committee on Judiciary analysis lists Google as opposing unless amended, Meta as having “concerns”, and IBM as fully opposing.
I give Claude the rough outline of a piece (in bullet points) and ask it to produce a finished article. While I end up rewriting almost all of what it gives me, something about having a draft to look at makes that process much easier. Once I’m done rewriting and editing, I’ll return the piece to Claude for another edit, following the process I mentioned above.
Claude’s contributions here aren’t groundbreaking, but they help bridge the gap between bullet points and prose: often one of the most challenging steps in writing.
The obvious objection: wouldn’t a human editor be better?
Of course! I’d love to have a real, professional editor review all my articles. Claude falls far short of the editors I’ve had the privilege of working with in my career.
But as an independent journalist, I’m on my own. I could hire an editor, but that would be extremely costly. It would also be much slower: I’d have to hope they were free when I finished writing, wait for them to have time to read and edit it, and then engage in a back-and-forth editing process.
The final result would be better, no doubt about it. But it would take much longer and cost much more. The realistic alternative to using Claude, for me right now, isn’t having a human editor; it’s having no editor at all. Claude is certainly better than that. And even if I did have an editor, I expect I’d still use Claude to refine the piece more extensively than I ever could with a human.
Building time-saving applications
Writing my weekly newsletter is very time-consuming. Each week I curate 150+ news articles relevant to AI policy and summarise them for my subscribers, highlighting the most important bits as concisely as possible. This involves reading and summarising every article — a process that takes many hours and has a strict deadline.
Recently, I experimented with using AI tools to help speed this process up. Using the Anthropic Workbench to evaluate and iterate on my prompts, I developed a Claude prompt which can take an article and summarise it in my style, more or less. Here’s my summary of this article, compared with Claude’s version:
My version: HuggingFace released an open-source AI evaluation suite.
Claude’s version: Hugging Face launched LightEval, an open-source AI evaluation suite for customisable model assessment.
While not perfect, it’s impressively close. The real magic, though, came not from the prompt but from the scaffolding Claude and o1 built around it. I now have a custom Chrome extension which exports the text of all open tabs into a JSON file, and a Python app which processes that file, passing each article to Claude along with the “summarise in my style” prompt. Claude then outputs a summary in markdown format, complete with links and citations. The whole process takes just a couple of minutes and costs very little (about $5 in API credits per week).
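The pipeline might look roughly like this in Python. To be clear, this is my own hypothetical reconstruction, not the actual app: the `tabs.json` filename, the `{title, url, text}` schema, the model name, and the style prompt are all illustrative assumptions. Only the `client.messages.create` call follows the real Anthropic SDK.

```python
import json

# Hypothetical style prompt; the author's real prompt is deliberately not shared.
STYLE_PROMPT = "Summarise this article in one concise sentence, in the style of my newsletter."

def load_tabs(path):
    """Read the JSON file exported by the Chrome extension.
    Assumed schema: a list of {"title": ..., "url": ..., "text": ...} objects."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def summarise(client, article):
    """Ask Claude for a one-line summary of a single article.
    `client` is an anthropic.Anthropic() instance (pip install anthropic)."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model choice
        max_tokens=200,
        system=STYLE_PROMPT,
        messages=[{"role": "user", "content": article["text"]}],
    )
    return response.content[0].text.strip()

def to_markdown(article, summary):
    """Format one summary as a markdown bullet linking back to the source."""
    return f"- {summary} ([{article['title']}]({article['url']}))"

def run(path="tabs.json"):
    """Summarise every exported tab and return one markdown block."""
    import anthropic  # deferred import so the pure helpers work without the SDK
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    return "\n".join(to_markdown(a, summarise(client, a)) for a in load_tabs(path))
```

The useful design point is the separation: the browser only exports raw text, and all the model logic lives in one small script that can be re-run cheaply when the prompt changes.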
I have virtually no coding skills, so Claude and o1 built this entire app for me, from scratch: the Chrome extension, the Python code, the shell script, and setting up the virtual environment. It took some trial and error: the initial version had a bunch of bugs, and we had to try a few different approaches for the Chrome extension before finding one that worked. But each time, the AIs debugged the code and solved the problems, resulting in a flawless final product. I could not have accomplished any part of this without the AI tools.
This app now saves me several hours a week. I still fact-check the summaries, tweak them a little, and flesh them out for the most important stories. And the newsletter is still a deeply human product: its value lies in my curation, which I’ve not been able to automate. But the summary app means I can spend much less time reading and summarising less important articles — freeing up that time for original reporting and analysis instead.
As someone without coding skills, I’ve long envied those who can develop custom apps to automate their specific workflows. It always seemed like a superpower. Thanks to AI tools, I now have that superpower! In addition to the above, Claude and I built an AppleScript function which grabs the contents of whatever I’m currently reading and asks Claude a bunch of questions about it — very useful for long articles and papers I don’t have time to read in full.
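The question-asking half of that helper could be sketched in Python like this (the AppleScript layer just captures the current text and hands it off). The question list and function names are my own illustrative guesses, not the author's script:

```python
def build_prompt(text, questions):
    """Combine the captured text with a standard list of questions."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        "Here is something I'm reading:\n\n"
        f"{text}\n\n"
        "Please answer the following questions about it:\n"
        f"{numbered}"
    )

# Illustrative questions; a real version would reflect the author's beats.
QUESTIONS = [
    "What is the main argument?",
    "What evidence does it rely on?",
    "What would a sceptical reader push back on?",
]

def ask_claude(text):
    """Send the combined prompt to Claude (requires `pip install anthropic` and an API key)."""
    import anthropic  # deferred import so build_prompt works without the SDK
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model choice
        max_tokens=1000,
        messages=[{"role": "user", "content": build_prompt(text, QUESTIONS)}],
    )
    return response.content[0].text
```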
I feel like I’m only scratching the surface of what’s possible here: I expect that a few months from now I’ll be running a whole suite of custom LLM-built apps.
Data processing and formatting
I occasionally work on projects that involve lots of poorly formatted data, such as lobbying disclosures or ad data, which typically require hours of manual formatting or transcription. But no longer! I can just hand Claude the data (as CSVs, copy-pasted text, or screenshots) and it’ll extract and format it however I like. I still check the results for accuracy, but that’s much faster than doing the whole thing manually.
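One way to script this kind of extraction, as a minimal sketch: ask Claude to return plain CSV, then parse its reply with the standard library so each row can be spot-checked against the source. The prompt and column names here are invented for illustration; the author does this by hand in the chat interface.

```python
import csv
import io

# Illustrative prompt and columns; real disclosures would need field names matched to the source.
EXTRACTION_PROMPT = (
    "Extract every lobbying disclosure from the text below into CSV with the "
    "columns filer, client, amount, date. Output only the CSV, no commentary."
)

def extract(client, raw_text):
    """Send messy source text to Claude and get CSV text back.
    `client` is an anthropic.Anthropic() instance (pip install anthropic)."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model choice
        max_tokens=2000,
        messages=[{"role": "user", "content": f"{EXTRACTION_PROMPT}\n\n{raw_text}"}],
    )
    return response.content[0].text

def parse_csv_reply(reply):
    """Parse the CSV reply into dicts, ready to check against the original document."""
    return list(csv.DictReader(io.StringIO(reply.strip())))
```

Parsing the model's output into structured rows is what makes the accuracy check fast: you scan a table rather than re-reading free text.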
Research
You obviously shouldn’t use LLMs as an alternative to Google, and you’d be nuts to trust their information without independent fact-checking. But they can still be helpful research aids.
I find LLMs particularly helpful in sparking research directions. I’ll often have a vague idea of something I want to look into, but don’t know where to get started. For example, I’m interested in writing about pre-emptive regulation, and wanted to know if there are historical examples where we’ve regulated an anticipated risk before it materialised. Such a question is quite hard to Google, but Claude gives a bunch of suggestions for case studies I can then explore further.
Not listening to podcasts
The “discourse” section of my newsletter often relies on podcasts, but I really don’t like listening to podcasts for work.
Apple Podcasts’ automatic transcriptions have been a game-changer here: I now just skim the transcript of any podcast that seems work-relevant, and can easily grab (and tweet) useful quotes.
Producing images
I almost always use Unsplash or other royalty-free images to illustrate my posts, but occasionally — like on this post — I’ll use a diffusion model to come up with something fun.
What I don’t use AI for
Try as I might, I cannot get Claude to generate good headlines — so I no longer bother trying. I’ve also not used any AI tools for fact-checking yet, though I might try using NotebookLM for this.
What’s next
In just a few weeks of playing with these models I’ve discovered a bunch of applications that will significantly improve my life. As I said above, I feel like I’m just scratching the surface: I’m sure there are countless other things I could and should try. If you have suggestions — particularly if you use these tools yourself — I’d love to hear from you!