Hand your AI agent a design mockup and watch it turn into working SwiftUI code. The agent doesn’t just write the code - it runs the app, compares the result against your design, and iterates until it matches. This works because FlowDeck gives the agent the ability to see the simulator screen. After every code change, the agent rebuilds, navigates to the screen, captures a screenshot, and compares it to your original mockup. No manual back-and-forth needed.

How it works

  1. You provide a design reference - an image, a screenshot, or a description
  2. The agent analyzes it to extract layout, spacing, typography, colors, and effects
  3. It writes the SwiftUI view
  4. It builds and runs the app on the simulator
  5. It captures a screenshot and compares against the original
  6. If something doesn’t match, it adjusts the code and repeats
The agent does this loop automatically. You can jump in at any point to give feedback (“the padding is too tight”, “use a bolder font”), and the agent will incorporate it into the next iteration.
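The loop above maps onto Apple's standard command-line tooling. A minimal manual sketch of one iteration, assuming a booted simulator (the scheme name, simulator name, and bundle identifier are placeholders, not FlowDeck-specific commands):

```shell
# Build the app for a simulator (scheme and destination are placeholders)
xcodebuild -scheme MyApp -destination 'platform=iOS Simulator,name=iPhone 16 Pro' build

# Launch the app on the booted simulator (bundle identifier is a placeholder)
xcrun simctl launch booted com.example.MyApp

# Capture the current screen to compare against the mockup
xcrun simctl io booted screenshot result.png
```

The agent runs the equivalent of this cycle after every code change, so each iteration ends with a fresh screenshot to diff against the design.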

What you can ask

Implement a full screen from a mockup

I attached a mockup of the profile screen. Implement it in SwiftUI,
run the app, and validate it matches the design closely. Then iterate
until you get a pixel-perfect implementation.
The agent will study the image - visual hierarchy, spacing rhythm, typography, colors, corner radii, shadows - and implement it as a SwiftUI view. It builds and runs the app, captures the result, compares it against the mockup, and keeps adjusting until the implementation matches the design precisely.
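A first pass at such a screen typically pins down the concrete values read off the mockup as explicit SwiftUI modifiers. A sketch of what that output might look like (the name, text, sizes, and spacing below are illustrative placeholders, not values from any specific design):

```swift
import SwiftUI

// Illustrative profile screen; every value (sizes, spacing, padding)
// stands in for a measurement the agent would extract from the mockup.
struct ProfileScreen: View {
    var body: some View {
        VStack(spacing: 16) {
            Image(systemName: "person.crop.circle.fill")
                .resizable()
                .frame(width: 96, height: 96)
                .foregroundStyle(.secondary)
            Text("Jane Appleseed")
                .font(.title2.weight(.semibold))
            Text("iOS Developer")
                .font(.subheadline)
                .foregroundStyle(.secondary)
            Spacer()
        }
        .padding(24)
    }
}
```

Pinning values explicitly like this is what makes the screenshot comparison actionable: when the result is off, there is a specific number to adjust.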

Iterate on specific details

The card shadow is too harsh compared to the mockup. Make it softer
and verify it matches the design.
The agent reads the current implementation, identifies the shadow modifier, adjusts the radius and opacity, rebuilds, and compares against the mockup. One change at a time, verified visually each time.
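In SwiftUI terms, softening a shadow usually means lowering the opacity and increasing the blur radius. A before/after sketch (the specific values are illustrative):

```swift
import SwiftUI

struct CardView: View {
    var body: some View {
        RoundedRectangle(cornerRadius: 12)
            .fill(.white)
            .frame(width: 320, height: 120)
            // Before (harsh): .shadow(color: .black.opacity(0.4), radius: 4, y: 2)
            // After: lower opacity plus a larger radius reads noticeably softer.
            .shadow(color: .black.opacity(0.12), radius: 12, y: 4)
    }
}
```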

Match spacing precisely

The spacing between the header and the first card looks like about 24pt
in the mockup, but it looks tighter in the app. Fix it and verify the
spacing matches the design.
The agent adjusts the spacing value, rebuilds, and compares the result against the mockup. If it’s still off, keep iterating - the agent retains the conversation context and continues refining.
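The fix is usually a one-line change: an explicit spacing value or a fixed-height gap between the two views. A sketch (the 24pt comes from the prompt above; everything else is a placeholder):

```swift
import SwiftUI

struct FeedScreen: View {
    var body: some View {
        VStack(alignment: .leading, spacing: 0) {
            Text("Today")
                .font(.largeTitle.bold())
            // Explicit 24pt gap between the header and the first card,
            // matching the value estimated from the mockup.
            Spacer().frame(height: 24)
            RoundedRectangle(cornerRadius: 12)
                .fill(.gray.opacity(0.15))
                .frame(height: 140)
        }
        .padding()
    }
}
```

Using an explicit gap rather than relying on default stack spacing makes the value easy to verify against the mockup and easy to nudge in the next iteration.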

Build a reusable component

I attached a mockup of a rating stars component. Build it as a reusable
SwiftUI view that takes a rating from 0 to 5. Verify it renders correctly
with 3.5 stars and matches the mockup.
The agent implements the component with the specified API, adds it to a preview or test screen, runs the app, and validates the output matches the design. You get both the reusable code and confirmation it renders correctly.
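One reasonable shape for such a component, assuming the API described in the prompt (a rating from 0 to 5 with half-star support via SF Symbols; the type and method names are illustrative):

```swift
import SwiftUI

// Reusable rating view: renders 5 stars for a rating in 0...5,
// with half-star support via SF Symbols.
struct RatingStars: View {
    let rating: Double  // expected range 0...5

    var body: some View {
        HStack(spacing: 4) {
            ForEach(0..<5, id: \.self) { index in
                Image(systemName: symbolName(at: index))
                    .foregroundStyle(.yellow)
            }
        }
    }

    // Picks full, half, or empty star for the given position.
    func symbolName(at index: Int) -> String {
        let remainder = rating - Double(index)
        if remainder >= 1 { return "star.fill" }
        if remainder >= 0.5 { return "star.leadinghalf.filled" }
        return "star"
    }
}
```

With `RatingStars(rating: 3.5)`, the first three positions render full stars, the fourth a half star, and the fifth an empty star, which is exactly the 3.5-star case the prompt asks the agent to verify on screen.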

Build and verify interactions

I attached a mockup of a quantity stepper (minus button, count label,
plus button). Build it in SwiftUI, run the app, and verify that tapping
+ increments and - decrements the count.
The agent implements the component, runs the app, starts a UI session, locates the stepper on screen, taps the plus button and reads the label to confirm the count incremented, then taps minus and confirms it decremented. You get the implementation and proof that it works.
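A sketch of the stepper itself, assuming the layout from the prompt. The accessibility identifiers are placeholders; giving controls stable identifiers like this is what lets UI automation locate and tap them reliably:

```swift
import SwiftUI

// Quantity stepper per the prompt: minus button, count label, plus button.
struct QuantityStepper: View {
    @State private var count = 0

    var body: some View {
        HStack(spacing: 16) {
            Button("-") { count = max(0, count - 1) }
                .accessibilityIdentifier("stepper.minus")
            Text("\(count)")
                .font(.headline)
                .monospacedDigit()
                .accessibilityIdentifier("stepper.count")
            Button("+") { count += 1 }
                .accessibilityIdentifier("stepper.plus")
        }
    }
}
```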

Implement a complex layout

I attached a mockup of a product card with an image, title, price,
rating stars, and an "Add to Cart" button. The card has rounded corners
and a subtle shadow. Implement it and iterate until it matches the design.
The agent breaks down the visual hierarchy - image at the top, text stack in the middle, button at the bottom - and implements each layer. It uses explicit spacing, exact hex colors from the mockup, continuous corner radius for the Apple-style squircle, and multi-layer shadows for realistic depth. It rebuilds and compares until the result matches.
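A sketch of that breakdown in code, illustrating the two techniques the explanation names: `style: .continuous` for the squircle corners, and stacked `.shadow` modifiers for layered depth (all colors, strings, and dimensions are placeholders for values read off a mockup):

```swift
import SwiftUI

// Product card: image on top, text stack in the middle, button at the bottom.
struct ProductCard: View {
    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            RoundedRectangle(cornerRadius: 12, style: .continuous)
                .fill(.gray.opacity(0.2))
                .frame(height: 160)  // stand-in for the product image
            Text("Wireless Headphones")
                .font(.headline)
            Text("$129.00")
                .font(.subheadline.weight(.semibold))
            Button("Add to Cart") {}
                .buttonStyle(.borderedProminent)
                .frame(maxWidth: .infinity)
        }
        .padding(16)
        .background(
            // Continuous corner radius gives the Apple-style squircle.
            RoundedRectangle(cornerRadius: 20, style: .continuous)
                .fill(.white)
                // Multi-layer shadow: tight contact shadow plus soft ambient one.
                .shadow(color: .black.opacity(0.08), radius: 2, y: 1)
                .shadow(color: .black.opacity(0.10), radius: 16, y: 8)
        )
        .padding()
    }
}
```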

Validate across screen sizes

Build the settings screen from this mockup, then verify it renders
correctly on both iPhone SE and iPhone 16 Pro Max. Fix any layout
issues on either screen size.
The agent implements the screen, runs on iPhone SE and validates the layout, then runs on the larger device and checks again. If anything clips on the small screen or has excessive whitespace on the large one, it adjusts the layout and re-verifies on both.
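You can make the same two-device check part of the code itself with previews. A sketch using SwiftUI's `PreviewProvider` and `previewDevice` (the screen content and device identifiers are placeholders; use whichever simulators you have installed):

```swift
import SwiftUI

// Minimal stand-in for the settings screen built from the mockup.
struct SettingsScreen: View {
    var body: some View {
        List {
            Text("General")
            Text("Notifications")
            Text("Privacy")
        }
    }
}

// Preview the same screen at both extremes of the size range.
struct SettingsScreen_Previews: PreviewProvider {
    static var previews: some View {
        SettingsScreen()
            .previewDevice(PreviewDevice(rawValue: "iPhone SE (3rd generation)"))
            .previewDisplayName("iPhone SE")
        SettingsScreen()
            .previewDevice(PreviewDevice(rawValue: "iPhone 16 Pro Max"))
            .previewDisplayName("iPhone 16 Pro Max")
    }
}
```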

Dark mode variant

I built the Home screen for light mode. Now implement the dark mode
variant to match this second mockup. Verify both modes render correctly
and match their respective designs.
The agent reads the dark mode mockup, adjusts colors and backgrounds in the implementation (or adds a proper color scheme), then validates both light and dark mode against their respective designs. It iterates until both variants match.
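"A proper color scheme" here typically means defining each color once with both variants so every view adapts automatically. A sketch using a dynamic `UIColor` provider (the color values are placeholders for what the agent would read from the two mockups):

```swift
import SwiftUI
import UIKit

// Adaptive color defined once, resolved per color scheme at render time.
extension Color {
    static let cardBackground = Color(uiColor: UIColor { traits in
        traits.userInterfaceStyle == .dark
            ? UIColor(white: 0.12, alpha: 1)  // value from the dark mockup
            : UIColor.white                   // value from the light mockup
    })
}

struct HomeCard: View {
    var body: some View {
        Text("Hello")
            .padding()
            .background(Color.cardBackground)
    }
}
```

During validation, the agent can force each appearance in a preview with `.preferredColorScheme(.dark)` (or `.light`) and compare each render against its mockup.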

Tips for better results

Provide high-quality mockups. The sharper and more detailed the image, the better the agent can extract measurements. Figma exports at 2x work well.
Call out non-obvious details. If your design uses a specific font, custom colors, or unusual spacing, mention it in the prompt. The agent can estimate from the image, but explicit values are more reliable.
Iterate in small steps. Instead of “everything looks wrong”, say “the title font is too small and the padding on the left side needs to increase by about 8pt”. Specific feedback produces specific fixes.
Ask for validation against the mockup. Say “compare the result to the mockup and fix any differences” - the agent will iterate on its own until it matches, and you can jump in with specific feedback at any point.