AI Voice Transcript
Voice-to-text transcription with speaker identification, confidence scores, playback controls, and AI-powered features like summarization and keyword extraction. Supports live transcription, editing, and export.
Compact Variant
Minimal live transcription display
Standard Variant
Transcript with playback controls
Hello, can you help me understand this code?
Of course! I would be happy to help. What specific part would you like me to explain?
The async function syntax is confusing me.
Detailed Variant
Full featured transcript with search, filter, and AI summary
Voice Transcript
4 segments • 4:00
AI Summary
User requested help debugging a React component with state update issues. AI provided guidance on event handler syntax.
I need help debugging this React component.
I can help with that. Let me analyze your component structure first.
The state is not updating when I click the button.
That suggests an issue with your event handler. Make sure you are using the correct syntax for state updates.
Live Transcription
Real-time transcription with live badge
Hello, can you help me understand this code?
Of course! I would be happy to help. What specific part would you like me to explain?
The async function syntax is confusing me.
Props
AIVoiceTranscript component API reference
| Prop | Type | Default | Description |
|---|---|---|---|
| segments* | TranscriptSegment[] | — | Transcript segments with speaker and content |
| status | 'live' \| 'processing' \| 'complete' \| 'error' | 'complete' | Current transcription status |
| totalDuration | number | 0 | Total duration in seconds |
| currentPosition | number | 0 | Current playback position in seconds |
| highlights | TranscriptHighlight[] | — | Text highlights (keywords, actions, questions) |
| isPlaying | boolean | false | Whether audio is playing |
| onPlayPause | () => void | — | Callback when play/pause is toggled |
| onSeek | (position: number) => void | — | Callback when seeking to a position |
| onEditSegment | (segmentId: string, newContent: string) => void | — | Callback when a segment is edited |
| onExport | (format: 'txt' \| 'srt' \| 'json') => void | — | Callback when the transcript is exported |
| onRequestSummary | () => void | — | Callback when an AI summary is requested |
| summary | string | — | AI-generated summary of the transcript |
| showSpeakers | boolean | true | Show speaker labels and avatars |
| showTimestamps | boolean | true | Show timestamps for each segment |
| showConfidence | boolean | false | Show transcription confidence scores |
| allowEditing | boolean | false | Allow editing transcript segments |
| searchable | boolean | true | Enable search functionality |
| variant | 'compact' \| 'standard' \| 'detailed' | 'standard' | Display mode variant |
| autoScroll | boolean | true | Auto-scroll to the current segment during playback |
| className | string | — | Additional CSS classes |
TranscriptSegment Type
| Field | Type | Default | Description |
|---|---|---|---|
| id* | string | — | Unique segment identifier |
| speaker* | 'human' \| 'ai' \| 'system' | — | Speaker type |
| speakerName | string | — | Speaker display name |
| content* | string | — | Transcribed text |
| timestamp* | number | — | Start time in seconds |
| duration | number | — | Duration in seconds |
| confidence | number | — | Confidence score (0–100) |
| isEdited | boolean | — | Whether the segment was manually edited |
| originalContent | string | — | Original content before edits |
| emotions | string[] | — | Detected emotions |
| keywords | string[] | — | Extracted keywords |
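A minimal sketch of segment data matching the shape above. The interface is reconstructed from the table; the ids, names, durations, and confidence values are illustrative, not taken from the component:

```typescript
// Shape of a transcript segment, per the table above.
interface TranscriptSegment {
  id: string
  speaker: 'human' | 'ai' | 'system'
  speakerName?: string
  content: string
  timestamp: number   // start time in seconds
  duration?: number   // seconds
  confidence?: number // 0–100
  isEdited?: boolean
  originalContent?: string
  emotions?: string[]
  keywords?: string[]
}

// Example data (all values illustrative).
const segments: TranscriptSegment[] = [
  {
    id: 'seg-1',
    speaker: 'human',
    speakerName: 'You',
    content: 'Hello, can you help me understand this code?',
    timestamp: 0,
    duration: 4,
    confidence: 96,
  },
  {
    id: 'seg-2',
    speaker: 'ai',
    speakerName: 'Assistant',
    content: 'Of course! What part is unclear?',
    timestamp: 4,
    duration: 5,
    confidence: 99,
    keywords: ['code'],
  },
]
```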
Usage
Import and implementation example
```tsx
import { useState } from 'react'
import { AIVoiceTranscript } from '@/blocks/ai-transparency/ai-voice-transcript'

export default function VoiceChat() {
  const [segments, setSegments] = useState([])
  const [isPlaying, setIsPlaying] = useState(false)
  const [currentPosition, setCurrentPosition] = useState(0)
  const [summary, setSummary] = useState('')

  const handleEdit = (id: string, newContent: string) => {
    setSegments(prev =>
      prev.map(seg =>
        seg.id === id
          ? { ...seg, content: newContent, isEdited: true, originalContent: seg.content }
          : seg
      )
    )
  }

  const handleExport = (format: 'txt' | 'srt' | 'json') => {
    // Export transcript in the selected format
  }

  const handleRequestSummary = async () => {
    // Generate an AI summary (generateSummary is app-defined)
    const summary = await generateSummary(segments)
    setSummary(summary)
  }

  return (
    <AIVoiceTranscript
      segments={segments}
      status="complete"
      totalDuration={300}
      currentPosition={currentPosition}
      isPlaying={isPlaying}
      summary={summary}
      showSpeakers={true}
      showTimestamps={true}
      showConfidence={true}
      allowEditing={true}
      searchable={true}
      variant="detailed"
      onPlayPause={() => setIsPlaying(!isPlaying)}
      onSeek={setCurrentPosition}
      onEditSegment={handleEdit}
      onExport={handleExport}
      onRequestSummary={handleRequestSummary}
    />
  )
}
```
Built With
This block is built with 5 UI components from the design system.
Features
Built-in functionality
- Speaker identification: Distinguish between human, AI, and system speakers
- Confidence scores: Display transcription accuracy for each segment
- Playback controls: Play/pause and seek through audio with progress bar
- Live transcription: Real-time transcription with live badge and auto-scroll
- Segment editing: Edit transcription errors with original content tracking
- Search and filter: Search transcript and filter by speaker
- AI summarization: Generate AI-powered summaries of conversations
- Keyword extraction: Automatically identify and highlight key terms
- Emotion detection: Optional emotion tagging for segments
- Export options: Export as TXT, SRT subtitles, or JSON
- Copy to clipboard: Copy entire transcript with formatting
- Timestamp navigation: Click timestamps to jump to that position
- Dark mode support: Full dark mode with proper contrast
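The SRT export option above can be sketched as a plain formatter. `toSrtTime` and `toSrt` are hypothetical helpers, not part of the component's API, and the 2-second fallback duration for segments without one is an assumption:

```typescript
// Format a position in seconds as an SRT timecode: HH:MM:SS,mmm
function toSrtTime(seconds: number): string {
  const pad = (n: number, w = 2) => String(n).padStart(w, '0')
  const h = Math.floor(seconds / 3600)
  const m = Math.floor((seconds % 3600) / 60)
  const s = Math.floor(seconds % 60)
  const ms = Math.round((seconds % 1) * 1000)
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(ms, 3)}`
}

// Render segments as an SRT subtitle file: numbered cues separated by blank lines.
function toSrt(
  segments: { content: string; timestamp: number; duration?: number }[]
): string {
  return segments
    .map((seg, i) => {
      const start = toSrtTime(seg.timestamp)
      const end = toSrtTime(seg.timestamp + (seg.duration ?? 2)) // assumed fallback
      return `${i + 1}\n${start} --> ${end}\n${seg.content}`
    })
    .join('\n\n')
}
```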
Accessibility
ARIA support and keyboard navigation
ARIA Attributes
- Playback controls use semantic media controls
- Speaker avatars have proper labels
- Live regions announce real-time transcription updates

Keyboard Navigation
| Key | Action |
|---|---|
| Tab | Navigate through controls and segments |
| Space | Play/pause playback |
| Left/Right arrows | Seek backward/forward |
| Enter | Jump to timestamp or edit segment |
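The arrow-key seeking above can be wired to `onSeek` with a small pure helper. This is a sketch, not the component's internal logic; the 5-second step and clamping to `[0, totalDuration]` are assumptions:

```typescript
// Compute the next playhead position for a keyboard event.
// Left/Right arrows seek by `step` seconds, clamped to the track bounds.
function nextPosition(
  key: string,
  position: number,
  totalDuration: number,
  step = 5 // assumed step size
): number {
  if (key === 'ArrowRight') return Math.min(totalDuration, position + step)
  if (key === 'ArrowLeft') return Math.max(0, position - step)
  return position // other keys leave the playhead alone
}
```

In a handler this would look like `onSeek(nextPosition(e.key, currentPosition, totalDuration))`.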
Notes
- Confidence indicators use both color and percentage
- Live status clearly indicated with animated badge
- Speaker types distinguished by icons and colors
- Consider adding aria-live="polite" for new segments in live mode
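The last note can be satisfied by marking the transcript container as a polite live region while transcription is running. A framework-agnostic sketch of the attributes involved (the `role="log"` choice and the attribute names returned here are a suggestion, not the component's actual markup):

```typescript
// Attributes for a live-transcription container.
// aria-live="polite" queues announcements until the screen reader is idle;
// switching to "off" stops announcements once transcription is done.
function liveRegionAttrs(status: 'live' | 'processing' | 'complete' | 'error') {
  return {
    role: 'log',                                  // log role implies a polite live region
    'aria-live': status === 'live' ? 'polite' : 'off',
    'aria-relevant': 'additions',                 // announce only newly appended segments
  } as const
}
```

Spread the result onto the segment list element, e.g. `<div {...liveRegionAttrs(status)}>`.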