FAQ
SnapAction reads screenshot assets from Photos, sends selected screenshots to a backend for AI analysis, and stores the resulting resource records locally with SwiftData.
SnapAction uses PhotoKit to read screenshots from the user's Photos library on iOS. The app can scan selected screenshots, recent screenshots, or the latest screenshot from system entry points such as App Shortcuts, the Action Button, or Control Center.
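On iOS, screenshots are image assets tagged with the screenshot media subtype, so they can be fetched with a PhotoKit predicate. A minimal sketch, assuming photo library read access has already been granted (the helper name is illustrative, not SnapAction's actual code):

```swift
import Photos

// Fetch the most recent screenshots from the Photos library.
func fetchRecentScreenshots(limit: Int) -> [PHAsset] {
    let options = PHFetchOptions()
    // Screenshots carry the .photoScreenshot bit in mediaSubtypes.
    options.predicate = NSPredicate(
        format: "(mediaSubtypes & %d) != 0",
        PHAssetMediaSubtype.photoScreenshot.rawValue
    )
    // Newest first, capped at the requested count.
    options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
    options.fetchLimit = limit

    let result = PHAsset.fetchAssets(with: .image, options: options)
    var assets: [PHAsset] = []
    result.enumerateObjects { asset, _, _ in assets.append(asset) }
    return assets
}
```

"Latest screenshot" entry points such as the Action Button reduce to the same fetch with a limit of 1.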
When you scan, selected images are sent to a Convex-backed AI screenshot agent. OpenRouter handles visual analysis and structured extraction. Serper search can be used when SnapAction needs to recover a canonical URL from a visible title, repo name, product, or other resource clue.
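The scan call can be pictured as a JSON payload of selected screenshots going up and a structured resource record coming back. The field names and shapes below are assumptions for illustration, not SnapAction's actual API:

```swift
import Foundation

// Hypothetical request: which assets were selected, plus their image data.
struct ScanRequest: Codable {
    let assetIdentifiers: [String]   // PHAsset localIdentifiers behind the scan
    let imagesBase64: [String]       // encoded JPEG data, one entry per screenshot
}

// Hypothetical response: the structured extraction the agent returns.
struct ScanResponse: Codable {
    let resourceType: String         // e.g. "repo", "event", "product"
    let title: String
    let summary: String
    let tags: [String]
    let url: String?                 // canonical URL when search recovered one
}

// Encode a request body for the backend agent.
func encodeScanRequest(_ request: ScanRequest) throws -> Data {
    let encoder = JSONEncoder()
    encoder.outputFormatting = [.sortedKeys]  // stable output for logging/tests
    return try encoder.encode(request)
}
```

The optional `url` reflects the Serper step: when no link is visible in the screenshot, the agent may fill it from a title or repo-name search, so clients should treat it as nullable.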
After analysis, SnapAction stores the resulting resources, scan records, tags, screenshot links, favorite state, and read state locally with SwiftData on iPhone.
Each resource card carries a type, title, description, tags, a URL when available, and the screenshot asset IDs behind the card.
Structured fields such as event dates, place details, contact info, invoice amounts, booking references, or assignees are also extracted when they are visible and relevant.
SwiftData records cover scan history, tags, favorite state, and read state, and Rewind recall runs against this local resource index.
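The local store described above can be sketched as one SwiftData model plus a fetch over the index. The property names here are assumptions drawn from the fields this FAQ lists, not the app's actual schema:

```swift
import Foundation
import SwiftData

// Hypothetical model for one analyzed resource record.
@Model
final class ResourceRecord {
    var type: String                  // "repo", "event", "product", ...
    var title: String
    var summary: String
    var tags: [String]
    var url: String?                  // canonical URL when one was recovered
    var screenshotAssetIDs: [String]  // PHAsset localIdentifiers behind the card
    var isFavorite: Bool
    var isRead: Bool
    var scannedAt: Date

    init(type: String, title: String, summary: String, tags: [String] = [],
         url: String? = nil, screenshotAssetIDs: [String] = [],
         isFavorite: Bool = false, isRead: Bool = false, scannedAt: Date = .now) {
        self.type = type
        self.title = title
        self.summary = summary
        self.tags = tags
        self.url = url
        self.screenshotAssetIDs = screenshotAssetIDs
        self.isFavorite = isFavorite
        self.isRead = isRead
        self.scannedAt = scannedAt
    }
}

// Recall favorites from the local index, newest scan first.
func fetchFavorites(in context: ModelContext) throws -> [ResourceRecord] {
    let descriptor = FetchDescriptor<ResourceRecord>(
        predicate: #Predicate { $0.isFavorite },
        sortBy: [SortDescriptor(\.scannedAt, order: .reverse)]
    )
    return try context.fetch(descriptor)
}
```

Because the whole index lives in SwiftData on the device, queries like this run without any network call; only the analysis step itself touches the backend.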
It is accurate to say SnapAction stores analyzed resource records locally with SwiftData. It is not accurate to say screenshot analysis is fully offline or that screenshots never leave the device.
If you are evaluating SnapAction for sensitive screenshots, read the privacy policy and scan only screenshots you are comfortable sending for backend AI analysis.
Install the beta and scan a few selected screenshots to see how resource cards are created.
Try 30 free scans