It's been a while since I sent an email. We had our second baby girl! I've been spending a lot of late nights and early mornings with AI, specifically Claude Code and now codex, and I want to share the journey. I haven't had time to write or make videos because I've been knee-deep in figuring out best practices for working with AI (it's been amazing and frustrating at the same time). To be blunt, we're at a tipping point where there's so much you can do. You can just do things now; it's incredibly exciting. At the same time, it's still a little early, and there are rough edges. Since I last emailed you, I've built 6 different tools (macOS apps, CLI tools, and iOS apps), all in the direction of an app that I want to build.

Want to learn with me on a weekly basis and finally publish your app on the App Store? Join the waitlist for the App Launch Community here (ETA: January).

Spider warning... I've discussed this a little bit in the past. I thought I would just be able to create a little OpenAI wrapper around spider detection. If spiders scare you, you might not want to watch this video or continue scrolling . . .

We're at a tipping point where you can just build tools and apps. You can learn new technologies and frameworks, and you can get a prototype out the door to experiment with things. I've been using AI, specifically GPT-5 now, to create spider-related image classification. I'm not sticking just to spiders, but my first exploration into this space is going to be dangerous spiders. I might do snakes or other things that people consider dangerous. That's my angle, and I've been able to build the tooling to support machine learning. I'm still not sure if the results are going to be great, but I know the results from other apps are mixed. I have ideas that can improve the experience, and that's what I'm working on.

Watch the spider video where I show off one of my tools to detect spiders and then classify them (two stages). This Mac app is a test harness for the models that I'm training in Create ML.

Along with that, I still plan to kickstart the App Launch Community. I put that on hold with the baby. I just didn't have the bandwidth, and I still don't. I won't for another 2-3 months, until the baby's in day care (plus we're doing a massive renovation project). My attic is a construction zone, and I only have so much time in the day.

Over the next few weeks I'll share what's working for me and how you can leverage what I'm learning (for free). If you want to take it to the next level, I'll be opening a weekly community group to help you drive towards your own app goals.

Now don't get me wrong; I still like coding, and I still have to drop into code to fix problems that the AI is unable to solve. But I get to operate at a higher level, and I get to be more of the idea person driving the app, testing it, and making sure it works. That's been really exciting.

If you're interested in publishing an iPhone app or a Mac app, or in anything related to artificial intelligence like machine learning, stick around: I've been deep diving into image classification and object detection using Create ML, which is Apple's framework for creating models that can detect things. I've been working with data from iNaturalist.org and building out apps that help with image detection and classification. It's really interesting, and it's fascinating that I can use AI to prototype this.
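If you're curious what that two-stage flow looks like in code, here's a rough Swift sketch of a detect-then-classify pipeline using Vision and Core ML. The model names SpiderDetector and SpiderClassifier are hypothetical placeholders for Create ML models, not the actual models from my app, so treat this as the shape of the idea rather than the real implementation.

import CoreGraphics
import CoreML
import Vision

// Two-stage sketch: Stage 1 finds spider bounding boxes, Stage 2 classifies
// each crop. SpiderDetector and SpiderClassifier are hypothetical Create ML
// model classes (an object detector and an image classifier).
func identifySpiders(in image: CGImage) throws -> [(label: String, confidence: Float)] {
    let detector = try VNCoreMLModel(for: SpiderDetector(configuration: MLModelConfiguration()).model)
    let classifier = try VNCoreMLModel(for: SpiderClassifier(configuration: MLModelConfiguration()).model)

    // Stage 1: object detection.
    let detectRequest = VNCoreMLRequest(model: detector)
    try VNImageRequestHandler(cgImage: image).perform([detectRequest])
    let detections = detectRequest.results as? [VNRecognizedObjectObservation] ?? []

    var results: [(label: String, confidence: Float)] = []
    for detection in detections {
        // Vision's boundingBox is normalized with a bottom-left origin;
        // convert to pixel coordinates and flip y for CGImage cropping.
        var box = VNImageRectForNormalizedRect(detection.boundingBox, image.width, image.height)
        box.origin.y = CGFloat(image.height) - box.origin.y - box.height
        guard let crop = image.cropping(to: box) else { continue }

        // Stage 2: classify the cropped spider.
        let classifyRequest = VNCoreMLRequest(model: classifier)
        try VNImageRequestHandler(cgImage: crop).perform([classifyRequest])
        if let top = (classifyRequest.results as? [VNClassificationObservation])?.first {
            results.append((label: top.identifier, confidence: top.confidence))
        }
    }
    return results
}

Splitting detection from classification is what makes the test harness useful: you can retrain the species classifier on cropped images without touching the detector.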
Click here to sign up for updates when I announce more details about my small community, where I'll be working with people on their apps on a weekly basis. Alright, that's all I have for today. Talk to you soon, Paul Solt
Join 5,712+ developers learning iPhone app development and App Store publishing. Every week, I share iOS tips on how to create polished, intuitive apps—backed by insights from shipping seven apps and working at GoPro, Apple, and Microsoft.
Hey Reader, AI agents struggle with iOS development for a simple reason: they can’t actually run Xcode. In this video, I have an impromptu conversation with Cameron Cooke from the UK about XcodeBuildMCP, a tool that finally lets AI build and run iOS apps in the Simulator. We talk through the backstory, the real problems it solves, and why this changes how AI agents work with iOS and macOS apps. If AI + Xcode is broken for you, you might need this tool. Watch here: AI Couldn’t Run My iOS...
Hey Reader, New Codex models are incredibly good at Swift code for iOS/macOS apps. You can easily install codex here. Just open Terminal/Ghostty and type: npm i -g @openai/codex. With codex, you can use simple language to talk to your agent. See the first video if you want a breakdown of the tools I'm using and some of the rules in my agents.md rule file. Spider warning: if you're afraid of spiders, you won't want to watch these videos. I am making a dangerous spider identification app,...
Hey Reader, New video on my workflow for building iOS and macOS apps. GPT-5.2 is insane. It fixed so many problems on the first try, with less back and forth than any other model to date. You can fly from idea to prototype in minutes. It's beyond crazy. In a day I can have a working app that solves a key problem. I could never code this fast without AI. It lets you try ideas that you would never have considered before due to time constraints. Codex + Xcode: 3 Workflows You Need for iOS/macOS...