📷 Camera App

Data Flow

Complete documentation of how data flows through the Camera App's components.


📊 Overview

The Camera App uses unidirectional data flow, with Combine providing reactivity:

User Input → ViewModel → Controller → AVFoundation
                 ↓
             @Published
                 ↓
               View
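The loop above can be sketched in code (a minimal, illustrative shape; the real CameraViewModel carries much more state):

```swift
import Combine

// Minimal, illustrative shape of the loop: the View calls an intent method,
// the ViewModel mutates @Published state, and Combine pushes the change
// back to the View.
final class CameraViewModel: ObservableObject {
    @Published var isRecording = false   // observed by ContentView

    // Intent from user input (the record button). In the real app this
    // also drives SegmentedRecorder.
    func toggleRecording() {
        isRecording.toggle()
    }
}
```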

🎬 Complete Recording Flow

sequenceDiagram
    participant User
    participant ContentView
    participant CameraViewModel
    participant SegmentedRecorder
    participant AVFoundation
    participant FileSystem
    participant PhotosApp

    User->>ContentView: Tap Record Button
    ContentView->>CameraViewModel: toggleRecording()
    CameraViewModel->>SegmentedRecorder: startNewSegment()
    SegmentedRecorder->>FileSystem: Create temp URL
    SegmentedRecorder->>AVFoundation: output.startRecording(to: tempURL)
    AVFoundation-->>SegmentedRecorder: didStartRecordingTo
    SegmentedRecorder-->>CameraViewModel: (recording started)
    CameraViewModel-->>ContentView: isRecording = true
    ContentView-->>User: Show countdown timer
    
    Note over User,AVFoundation: User records video...
    
    User->>ContentView: Tap Stop Button
    ContentView->>CameraViewModel: toggleRecording()
    CameraViewModel->>SegmentedRecorder: stopCurrentSegment()
    SegmentedRecorder->>AVFoundation: output.stopRecording()
    AVFoundation-->>SegmentedRecorder: didFinishRecordingTo: URL
    SegmentedRecorder-->>CameraViewModel: delegate.didFinishSegment(url)
    CameraViewModel->>CameraViewModel: Generate thumbnail
    CameraViewModel->>CameraViewModel: Create RecordedSegment
    CameraViewModel-->>ContentView: segments.append(segment)
    ContentView-->>User: Show segment thumbnail
    
    Note over User,PhotosApp: User records more segments...
    
    User->>ContentView: Tap Next Button
    ContentView->>CameraViewModel: nextAction()
    CameraViewModel->>CameraViewModel: concatenateAndSaveSegments()
    CameraViewModel->>CameraViewModel: Create AVMutableComposition
    CameraViewModel->>CameraViewModel: Insert all segment time ranges
    CameraViewModel->>CameraViewModel: Apply filter (if selected)
    CameraViewModel->>CameraViewModel: exportComposition()
    CameraViewModel->>FileSystem: Export to temp file
    CameraViewModel->>PhotosApp: PHPhotoLibrary.performChanges
    PhotosApp-->>CameraViewModel: Success
    CameraViewModel->>FileSystem: Remove temp files
    CameraViewModel-->>ContentView: segments.removeAll()
    ContentView-->>User: Segments cleared
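The stop side of this sequence hinges on the AVCaptureFileOutputRecordingDelegate callback. A sketch, assuming SegmentedRecorder forwards the URL through a didFinishSegment delegate method as shown in the diagram:

```swift
import AVFoundation

// Sketch of the stop side of the sequence (delegate property name assumed):
// AVFoundation reports the finished file and the recorder forwards the URL.
extension SegmentedRecorder: AVCaptureFileOutputRecordingDelegate {
    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        guard error == nil else { return }   // real code would surface the error
        delegate?.didFinishSegment(url: outputFileURL)
    }
}
```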

🚀 Initialization Flow

sequenceDiagram
    participant ContentView
    participant CameraViewModel
    participant Controller
    participant System

    ContentView->>CameraViewModel: .onAppear()
    CameraViewModel->>CameraViewModel: requestPermissionsAndConfigure()
    CameraViewModel->>Controller: requestPermissions()
    Controller->>System: AVCaptureDevice.requestAccess(.video)
    Controller->>System: AVCaptureDevice.requestAccess(.audio)
    System-->>Controller: granted = true
    Controller-->>CameraViewModel: completion(granted: true)
    CameraViewModel->>CameraViewModel: isAuthorized = true
    CameraViewModel->>Controller: configureSession(fps: .fps60)
    Controller->>Controller: findBestCamera(for: .front)
    Controller->>System: AVCaptureDevice.DiscoverySession
    System-->>Controller: devices: [AVCaptureDevice]
    Controller->>Controller: Create videoDeviceInput
    Controller->>Controller: Create audioDeviceInput
    Controller->>Controller: setFrameRateLocked(to: 60)
    Controller->>Controller: Add AVCaptureMovieFileOutput
    Controller->>Controller: applyPreferredStabilizationMode()
    Controller->>Controller: setPreferredCodecHEVC(true)
    Controller-->>CameraViewModel: completion(error: nil)
    CameraViewModel->>CameraViewModel: Create SegmentedRecorder
    CameraViewModel->>CameraViewModel: setupOrientationMonitoring()
    CameraViewModel->>Controller: startSession()
    Controller->>System: session.startRunning()
    CameraViewModel-->>ContentView: isSessionRunning = true
    ContentView->>ContentView: Render camera preview
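The permission step can be sketched as follows (an assumed shape of requestPermissions(); the real controller may sequence this differently):

```swift
import AVFoundation

// Assumed shape of requestPermissions(): request video access, then audio,
// and report the combined result to the caller.
func requestPermissions(completion: @escaping (Bool) -> Void) {
    AVCaptureDevice.requestAccess(for: .video) { videoGranted in
        AVCaptureDevice.requestAccess(for: .audio) { audioGranted in
            completion(videoGranted && audioGranted)
        }
    }
}
```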

🎨 Video Filter Flow

graph TD
    A[User selects filter] --> B[selectedFilter = .mono]
    B --> C[User taps Next]
    C --> D[concatenateAndSaveSegments]
    D --> E{selectedFilter == .none?}
    E -->|Yes| F[Direct AVAssetExportSession]
    E -->|No| G[applyFilter to composition]
    G --> H[Create AVVideoComposition]
    H --> I[Define handler block]
    I --> J[For each frame:]
    J --> K[request.sourceImage]
    K --> L[Apply CIFilter]
    L --> M[CIFilter.photoEffectMono]
    M --> N[outputImage?.cropped]
    N --> O[request.finish with image]
    O --> P[AVAssetExportSession with videoComposition]
    P --> Q[Export filtered video]
    F --> R[saveVideoToPhotos]
    Q --> R
    R --> S[PHPhotoLibrary.performChanges]
    S --> T[Remove temp files]
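The .mono branch can be sketched with AVMutableVideoComposition's per-frame CIFilter handler (illustrative; makeMonoComposition is an assumed name):

```swift
import AVFoundation
import CoreImage

// Per-frame .mono filtering: AVMutableVideoComposition invokes the handler
// once per output frame, exactly the loop shown in the graph above.
func makeMonoComposition(for asset: AVAsset) -> AVMutableVideoComposition {
    let filter = CIFilter(name: "CIPhotoEffectMono")!
    return AVMutableVideoComposition(asset: asset) { request in
        filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
        let output = filter.outputImage?.cropped(to: request.sourceImage.extent)
        // Fall back to the unfiltered frame if the filter produced nothing.
        request.finish(with: output ?? request.sourceImage, context: nil)
    }
}
```

The resulting composition is then handed to AVAssetExportSession via its videoComposition property, as in the graph.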

📱 Camera Controls Flow

Pinch Zoom

User pinch gesture
    ↓
CameraPreviewView.onPinch (UIKit)
    ↓
Closure callback
    ↓
CameraViewModel.setZoomFactor()
    ↓
CaptureSessionController.setZoomFactor()
    ↓
sessionQueue.async
    ↓
device.lockForConfiguration()
device.ramp(toVideoZoomFactor:)
device.unlockForConfiguration()
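The final lock/ramp/unlock step might look like this (a sketch; clamping against videoMaxZoomFactor and the ramp rate are assumptions, not confirmed app behavior):

```swift
import AVFoundation

// Sketch of the lock/ramp/unlock step executed on sessionQueue.
func setZoomFactor(_ factor: CGFloat, on device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        // Keep the factor inside the device's supported range (assumption).
        let clamped = max(1.0, min(factor, device.activeFormat.videoMaxZoomFactor))
        device.ramp(toVideoZoomFactor: clamped, withRate: 4.0)
        device.unlockForConfiguration()
    } catch {
        print("Zoom configuration failed: \(error)")
    }
}
```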

Tap to Focus

User tap gesture
    ↓
CameraPreviewView.onTapToFocus (UIKit)
    ↓
videoPreviewLayer.captureDevicePointConverted()
    ↓
Closure callback with devicePoint
    ↓
CaptureSessionController.focusAndExpose(at:)
    ↓
sessionQueue.async
    ↓
device.lockForConfiguration()
device.focusPointOfInterest = devicePoint
device.focusMode = .continuousAutoFocus
device.unlockForConfiguration()
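A sketch of focusAndExpose(at:) under the same lock/unlock pattern (the exposure half is an assumption based on the method name):

```swift
import AVFoundation

// Sketch: devicePoint is assumed to already be in device coordinates,
// converted via captureDevicePointConverted(fromLayerPoint:).
func focusAndExpose(at devicePoint: CGPoint, on device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        if device.isFocusPointOfInterestSupported {
            device.focusPointOfInterest = devicePoint
            device.focusMode = .continuousAutoFocus
        }
        if device.isExposurePointOfInterestSupported {
            device.exposurePointOfInterest = devicePoint
            device.exposureMode = .continuousAutoExposure
        }
        device.unlockForConfiguration()
    } catch {
        print("Focus configuration failed: \(error)")
    }
}
```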

Camera Flip

User taps flip button
    ↓
ContentView button action
    ↓
CameraViewModel.toggleCameraPosition()
    ↓
CaptureSessionController.toggleCameraPosition()
    ↓
sessionQueue.async
    ↓
findBestCamera(for: nextPosition)
session.beginConfiguration()
Remove old input
Add new input
session.commitConfiguration()
setFrameRateLocked()
setZoomFactor(1.0)
applyMirroring()
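The input swap at the heart of this flow, sketched (assumed helper name; the real method also re-applies frame rate, zoom, and mirroring, as listed above):

```swift
import AVFoundation

// Sketch of the begin/commit block: swap the device input atomically so the
// session never runs with zero or two video inputs.
func swapInput(on session: AVCaptureSession,
               removing oldInput: AVCaptureDeviceInput,
               adding device: AVCaptureDevice) throws {
    let newInput = try AVCaptureDeviceInput(device: device)
    session.beginConfiguration()
    session.removeInput(oldInput)
    if session.canAddInput(newInput) {
        session.addInput(newInput)
    }
    session.commitConfiguration()
}
```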

📜 Teleprompter Flow

Initialization

User toggles teleprompter
    ↓
ContentView updates isTeleprompterOn
    ↓
TeleprompterOverlay appears
    ↓
TeleprompterViewModel.updateContentHeight()
    ↓
calculateContentHeight() using NSAttributedString
    ↓
contentHeight published
    ↓
View updates viewport size
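calculateContentHeight() presumably measures the attributed script text against the overlay width; a sketch under that assumption:

```swift
import UIKit

// Sketch: measure the script at the current font, constrained to the
// overlay width, to get the total scrollable content height.
func calculateContentHeight(text: String, font: UIFont, width: CGFloat) -> CGFloat {
    let attributed = NSAttributedString(string: text, attributes: [.font: font])
    let bounds = attributed.boundingRect(
        with: CGSize(width: width, height: .greatestFiniteMagnitude),
        options: [.usesLineFragmentOrigin, .usesFontLeading],
        context: nil)
    return ceil(bounds.height)
}
```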

Automatic Scrolling

User starts recording
    ↓
CameraViewModel.isRecording = true
    ↓
TeleprompterOverlay.onChange(of: isRecording)
    ↓
TeleprompterViewModel.handleRecordingStateChange()
    ↓
startScrolling(speed:viewportHeight:)
    ↓
Timer.scheduledTimer (60 fps)
    ↓
On each tick:
    contentOffset += speed * deltaTime
    ↓
TeleprompterTextView.updateUIView()
    ↓
coordinator.isProgrammaticScroll = true
UITextView.setContentOffset()
coordinator.isProgrammaticScroll = false
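The per-tick math, isolated in a hypothetical helper: scaling by elapsed time keeps the scroll speed in points per second regardless of timer jitter.

```swift
// Hypothetical helper for the tick above: the offset advances by
// speed (points/second) scaled by the elapsed time, so drift in the
// 60 fps timer does not change the effective scroll speed.
func nextOffset(current: Double, speed: Double, deltaTime: Double) -> Double {
    current + speed * deltaTime
}
```

At 60 fps (deltaTime ≈ 1/60 s) and a speed of 30 pt/s, each tick advances the offset by roughly 0.5 pt.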

Interactions (Drag/Resize)

User drags overlay
    ↓
DragGesture.onChanged
    ↓
TeleprompterViewModel.updateOverlayPosition()
    ↓
isInteracting = true
overlayOffset updated
    ↓
@Published overlayOffset
    ↓
View re-renders with the new offset
    ↓
DragGesture.onEnded
    ↓
finalizeOverlayPosition() (clamp to bounds)
isInteracting = false
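The clamp in finalizeOverlayPosition() reduces, per axis, to (a hypothetical helper, not the app's actual signature):

```swift
// Hypothetical one-axis version of the clamp: the final offset is pulled
// back inside [minBound, maxBound] so the overlay stays on screen.
func clampedOffset(_ offset: Double, minBound: Double, maxBound: Double) -> Double {
    min(max(offset, minBound), maxBound)
}
```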

🔄 Segment Lifecycle

1. CREATION
   startNewSegment()
   └── tempURL = NSTemporaryDirectory() + "segment_UUID.mov"

2. RECORDING
   output.startRecording(to: tempURL)
   └── AVFoundation writes frames to the file

3. FINALIZATION
   output.stopRecording()
   └── didFinishRecordingTo delegate callback

4. THUMBNAIL
   generateThumbnail(for: url)
   └── AVAssetImageGenerator extracts a frame at 0.05s

5. STORAGE
   RecordedSegment(url:thumbnail:)
   └── segments.append()

6. CONCATENATION
   AVMutableComposition
   └── insertTimeRange for each segment

7. EXPORT
   AVAssetExportSession
   └── outputURL = tempFinalURL

8. SAVING
   PHPhotoLibrary.performChanges
   └── Final video in the Photos library

9. CLEANUP
   FileManager.removeItem(tempURL)
   FileManager.removeItem(tempFinalURL)
   segments.removeAll()
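Step 6 can be sketched as follows (a simplification: the real code likely inserts video and audio tracks separately and preserves transforms):

```swift
import AVFoundation

// Simplified sketch of concatenation: each segment is appended end-to-end
// into a single AVMutableComposition before export.
func makeComposition(from segmentURLs: [URL]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    var cursor = CMTime.zero
    for url in segmentURLs {
        let asset = AVURLAsset(url: url)
        try composition.insertTimeRange(
            CMTimeRange(start: .zero, duration: asset.duration),
            of: asset,
            at: cursor)
        cursor = CMTimeAdd(cursor, asset.duration)
    }
    return composition
}
```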

🧵 Threading Flow

Main Thread

ContentView (SwiftUI)
    ↓
@Published properties observed
    ↓
View re-renders automatically
    ↓
User interactions (taps, gestures)
    ↓
ViewModel method calls

sessionQueue (Serial)

CaptureSessionController operations
    ↓
session.beginConfiguration()
Device configuration
Format selection
Input/output management
session.commitConfiguration()

Background Queues

Thumbnail Generation (.userInitiated)
    ↓
AVAssetImageGenerator.copyCGImage()
    ↓
UIImage(cgImage:)
    ↓
DispatchQueue.main.async
    └── segments.append()

Video Export (.utility)
    ↓
AVAssetExportSession.exportAsynchronously
    ↓
Save to Photos
    ↓
Cleanup temp files
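The thumbnail path above, sketched end-to-end (assumed signature; decode off-main, publish on main):

```swift
import AVFoundation
import UIKit

// Sketch: generate the thumbnail on a background queue, then hop to the
// main queue before touching @Published state.
func generateThumbnail(for url: URL, completion: @escaping (UIImage?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let generator = AVAssetImageGenerator(asset: AVURLAsset(url: url))
        generator.appliesPreferredTrackTransform = true
        let time = CMTime(seconds: 0.05, preferredTimescale: 600)
        let cgImage = try? generator.copyCGImage(at: time, actualTime: nil)
        DispatchQueue.main.async {
            completion(cgImage.map { UIImage(cgImage: $0) })
        }
    }
}
```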

📦 Data Models

RecordedSegment

struct RecordedSegment: Identifiable {
    let id: UUID
    let url: URL              // temp file path
    let thumbnail: UIImage    // generated thumbnail
    let createdAt: Date       // timestamp
}

Lifecycle:

  1. Created after recording finishes
  2. Stored in CameraViewModel.segments
  3. Displayed in the UI as a thumbnail
  4. Used during concatenation
  5. Deleted after saving

VideoFilter

enum VideoFilter: String, CaseIterable {
    case none
    case mono
}

Flow:

  1. User selects one in FilterMenu
  2. Stored in CameraViewModel.selectedFilter
  3. Applied during export in nextAction()
  4. CIFilter processes each frame

🎯 Complete Use Cases

Case 1: Simple Recording

1. App Launch
   ├── Request permissions
   ├── Configure session
   └── Start session

2. User Taps Record
   ├── toggleRecording()
   ├── startNewSegment()
   └── isRecording = true

3. User Records (10s)
   └── AVFoundation writes to disk

4. User Taps Stop
   ├── stopCurrentSegment()
   ├── didFinishSegment callback
   ├── Generate thumbnail
   └── segments.append()

5. User Taps Next
   ├── concatenateAndSaveSegments()
   ├── Export composition
   ├── Save to Photos
   └── segments.removeAll()

Case 2: Recording with a Teleprompter

1. Setup
   ├── isTeleprompterOn = true
   ├── teleprompterText = "..."
   └── Adjust speed & font

2. Start Recording
   ├── toggleRecording()
   ├── Teleprompter starts scrolling
   └── User reads while recording

3. Stop Recording
   ├── toggleRecording()
   ├── Teleprompter stops scrolling
   └── Segment saved

4. Repeat for more takes
   └── Teleprompter resumes from last offset

5. Finish
   └── nextAction() → Final video

🔍 Debugging Flow

Trace a Recording

// Enable detailed logging

print("1. toggleRecording called")
print("2. startNewSegment with URL: \(tempURL)")
print("3. Recording started")
print("4. stopCurrentSegment called")
print("5. didFinishSegment with URL: \(url)")
print("6. Thumbnail generated")
print("7. Segment appended to array")

Monitor Threading

print("On thread: \(Thread.current)")

// Main thread operations
DispatchQueue.main.async {
    print("Updating UI on main")
}

// Session queue operations
sessionQueue.async {
    print("Configuring session on sessionQueue")
}

📚 See Also

← Guides | Performance →