Gemini Batch Mode API Made Easy: Mac App for Smarter Bulk Request Handling
I built an open-source Mac app for managing the new Gemini Batch Mode API!
One of the biggest advantages of coding much faster with AI tools is that we can now quickly build apps that are super specific to our workflows - personal, professional, and organizational. And after running hundreds of Gemini API batch jobs as I continue working on my Sanskrit-English Dictionary app, I just had to code a tool to manage it all, and I did!
The Backstory
While the mainstream use case for LLMs is as chatbots, there are a lot of opportunities to use them behind the scenes. In my case of building a Sanskrit-English Dictionary app, my database is essentially static - it doesn't change. That means I can process many dictionary definitions with LLMs on the backend and simply show the result to all users in a UI-friendly way in my app, without them even being aware that AI was used for the processing.
I had already worked a lot with the Gemini 2.5 Pro API before Batch Mode was released, and processing all my database definitions one-by-one took me around 30 days. So when Batch Mode came out, I couldn't have been more excited! My processing time went down to only 2 days and my cost went down 50%!
The Problems
I initially ran my batch jobs via Python scripts, but quickly ran into an organizational nightmare. There were several things to keep track of:
I had to divide up all my data into hundreds of files to stay within the Batch Mode limits.
Some of my files had to run using Gemini 2.5 Flash, and some had to run using Gemini 2.5 Pro. A batch job runs against a single model - you cannot combine requests that need different models in one job.
I had to have a way to keep track of which files were running, which were done, which should be queued up next, etc.
I didn't know how much everything was costing - yes, this is my fault, but I ended up spending A LOT more than I estimated. Since the batch jobs run so fast and the Gemini usage billing data updates very slowly, I ended up with a very surprising bill. Again, my fault for not testing the costs more slowly and calculating them more carefully, but I honestly didn't think it would go past my budget.
There is a tight timeframe for some of the tasks - once you upload your batch request file, it expires within 48 hours. The batch job itself expires 48 hours after it starts. And of course, the batch job results file also expires after 48 hours. So you cannot just take your eye off the ball and return a week later to download your files. You have to actively poll for the result to make sure you download it before everything expires and you lose all your money on that batch job and have to re-run it (see the polling sketch after this list).
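If you're scripting this yourself, the polling itself doesn't have to be complicated. Here's a minimal Swift sketch of a loop that checks a batch job until it reports completion - note that the endpoint path, API-key header, and response field names (done, metadata.state) are assumptions based on my reading of the Batch Mode docs, so verify them against the current API reference before relying on this.

```swift
import Foundation

// Assumed shape of the batch-status response — verify against the current docs.
struct BatchStatus: Decodable {
    struct Metadata: Decodable { let state: String? }
    let metadata: Metadata?
    let done: Bool?
}

/// Polls a batch job (e.g. "batches/abc123" — hypothetical ID) until it reports completion.
func waitForBatch(named batchName: String,
                  apiKey: String,
                  pollInterval: TimeInterval = 60) async throws -> BatchStatus {
    // Assumed endpoint and auth header — double-check against the Batch Mode reference.
    let url = URL(string: "https://generativelanguage.googleapis.com/v1beta/\(batchName)")!
    var request = URLRequest(url: url)
    request.setValue(apiKey, forHTTPHeaderField: "x-goog-api-key")

    while true {
        let (data, _) = try await URLSession.shared.data(for: request)
        let status = try JSONDecoder().decode(BatchStatus.self, from: data)
        if let state = status.metadata?.state {
            print("\(batchName): \(state)")
        }
        if status.done == true { return status }   // finished, successfully or not
        try await Task.sleep(nanoseconds: UInt64(pollInterval * 1_000_000_000))
    }
}
```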
I did manage all of this in Python, but it was a bit of a mess. I really wished Gemini had a better interface to manage these batch requests, similar to what OpenAI has, but unfortunately, it’s just not there. So I decided to build it myself!
The Gemini Batch Mac App
So for the past week, I took the time to build the interface that I wished the Google Gemini team had built… as a macOS Tahoe app. Here is the result!
Simply upload your JSONL batch request files, click Run, and the whole process is managed for you! You can keep track of exactly what is happening in the right-hand pane by selecting a file at any time. And the results file is saved in the app sandbox directory - safe from the 48-hour expiration. You can come back to the app whenever you have time and download all the completed JSONL results files, which you can then process further!
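If you haven't put together a batch request file before: it's plain JSONL - one JSON object per line, each with a key to identify the request and a request body. Here's a rough Swift sketch of generating one; the key / request line shape follows the Batch Mode docs as I understand them, so double-check it against the current spec before uploading.

```swift
import Foundation

// Assumed line shape for a Batch Mode request file — verify against the current spec.
struct BatchLine: Encodable {
    struct Part: Encodable { let text: String }
    struct Content: Encodable { let parts: [Part] }
    struct Request: Encodable { let contents: [Content] }
    let key: String        // your own ID, echoed back in the results file
    let request: Request   // a regular generateContent-style request body
}

/// Writes one JSON object per line — that's all JSONL is.
func writeRequestFile(prompts: [String], to url: URL) throws {
    let encoder = JSONEncoder()
    let lines = try prompts.enumerated().map { index, prompt in
        let line = BatchLine(
            key: "request-\(index)",
            request: .init(contents: [.init(parts: [.init(text: prompt)])])
        )
        return String(data: try encoder.encode(line), encoding: .utf8)!
    }
    try lines.joined(separator: "\n").write(to: url, atomically: true, encoding: .utf8)
}
```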
Oh - and I forgot to mention one thing! The tokens are calculated at the end of each batch job, so you can get an estimated price per file run. So it's very easy to test by running 1 file, waiting around 5-15 minutes for the batch job to complete, and checking the costs before running more files. Wish I had this when I started running my Gemini batch jobs!
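The math behind that estimate is simple: each token count divided by a million, multiplied by the per-million-token rate for the model you ran. A quick sketch, with placeholder rates that you'd replace with the current Batch Mode pricing for your model:

```swift
import Foundation

// Placeholder pricing — fill in the current Batch Mode rates for the model you ran.
struct ModelPricing {
    let inputPerMillionTokens: Double   // USD per 1M prompt tokens
    let outputPerMillionTokens: Double  // USD per 1M output tokens
}

/// Estimates the cost of a run from its total token counts.
func estimatedCost(promptTokens: Int, outputTokens: Int, pricing: ModelPricing) -> Double {
    let inputCost  = Double(promptTokens) / 1_000_000 * pricing.inputPerMillionTokens
    let outputCost = Double(outputTokens) / 1_000_000 * pricing.outputPerMillionTokens
    return inputCost + outputCost
}

// Example: token totals summed from a results file, priced with made-up placeholder rates.
let pricing = ModelPricing(inputPerMillionTokens: 1.0, outputPerMillionTokens: 4.0)
print(String(format: "≈ $%.2f", estimatedCost(promptTokens: 1_250_000,
                                              outputTokens: 300_000,
                                              pricing: pricing)))
```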
The App is Open Source
The app is now open source and available on the try! Swift AI GitHub for anybody to use! Note that the app is made for macOS 26 only - as I wanted to play around with Liquid Glass - but it should be easy to clone and port back to earlier versions if you need to.
To learn how to read / write JSONL files in Swift, read my blog post: How to Read / Write JSONL Files in Swift. I also built a JSONL Beautifier to make it easy to inspect the JSONL results, as I couldn't find one when I searched.
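If you just want a taste before clicking through: reading JSONL is only a matter of splitting on newlines and decoding each line as its own JSON object. A minimal sketch:

```swift
import Foundation

/// Reads a JSONL file: one independent JSON object per line.
func readJSONLLines(from url: URL) throws -> [[String: Any]] {
    let raw = try String(contentsOf: url, encoding: .utf8)
    return raw.split(separator: "\n").compactMap { line in
        // Skip blank or unparsable lines rather than failing the whole file.
        try? JSONSerialization.jsonObject(with: Data(line.utf8)) as? [String: Any]
    }
}
```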
Happy Batch Processing 🚀🚀🚀