WebRTC Implementation Details¶
This document provides technical details about the WebRTC implementation in FuryMesh, intended for developers who want to understand or contribute to the codebase.
Component Architecture¶
The WebRTC implementation in FuryMesh is organized into several key components:
WebRTC Manager (WebRTCManager)¶
Responsible for managing WebRTC peer connections:
- Creates and maintains peer connections
- Handles ICE candidate exchange
- Manages data channels
- Monitors connection state
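A minimal sketch of the manager's shape under these responsibilities. Everything here is an assumption for illustration: `PeerConn` is a local interface standing in for the real WebRTC peer connection type, and the method names are hypothetical.

```go
package main

import (
	"fmt"
	"sync"
)

// PeerConn abstracts the underlying WebRTC peer connection so this
// sketch stays self-contained (hypothetical interface, not the real type).
type PeerConn interface {
	Close() error
}

// WebRTCManager tracks one connection per remote peer ID.
type WebRTCManager struct {
	mu    sync.RWMutex
	peers map[string]PeerConn
}

func NewWebRTCManager() *WebRTCManager {
	return &WebRTCManager{peers: make(map[string]PeerConn)}
}

// Add registers a new peer connection, closing any previous one for
// the same peer so stale connections are not leaked.
func (m *WebRTCManager) Add(peerID string, pc PeerConn) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if old, ok := m.peers[peerID]; ok {
		old.Close()
	}
	m.peers[peerID] = pc
}

// Remove closes and forgets a peer connection.
func (m *WebRTCManager) Remove(peerID string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if pc, ok := m.peers[peerID]; ok {
		pc.Close()
		delete(m.peers, peerID)
	}
}

type fakeConn struct{ closed bool }

func (f *fakeConn) Close() error { f.closed = true; return nil }

func main() {
	m := NewWebRTCManager()
	c := &fakeConn{}
	m.Add("peer-1", c)
	m.Remove("peer-1")
	fmt.Println(c.closed) // true
}
```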
WebRTC Messaging (WebRTCMessaging)¶
Handles message exchange over WebRTC data channels:
- Serializes/deserializes messages using FlatBuffers
- Routes messages to appropriate handlers
- Provides a simple API for sending different message types
- Tracks messaging metrics
WebRTC Transfer Manager (WebRTCTransferManager)¶
Coordinates file transfers between peers:
- Initiates file requests
- Manages chunk requests and responses
- Tracks transfer progress
- Handles transfer errors and retries
- Implements multi-peer transfers
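Tracking progress implies a per-transfer record. The sketch below shows one plausible shape; the field and method names are assumptions, not the actual `WebRTCTransferManager` internals.

```go
package main

import (
	"fmt"
	"time"
)

// Transfer tracks the progress of a single file transfer
// (hypothetical field names for illustration).
type Transfer struct {
	ID          string
	FileID      string
	PeerID      string
	TotalChunks int
	Received    map[int]bool
	StartedAt   time.Time
}

// MarkReceived records a chunk and reports whether the transfer is done.
func (t *Transfer) MarkReceived(idx int) bool {
	t.Received[idx] = true
	return len(t.Received) == t.TotalChunks
}

// Progress returns completion as a fraction in [0, 1].
func (t *Transfer) Progress() float64 {
	if t.TotalChunks == 0 {
		return 0
	}
	return float64(len(t.Received)) / float64(t.TotalChunks)
}

func main() {
	tr := &Transfer{ID: "t1", TotalChunks: 4, Received: map[int]bool{}}
	tr.MarkReceived(0)
	tr.MarkReceived(1)
	fmt.Println(tr.Progress()) // 0.5
}
```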
WebRTC Transport Adapter (WebRTCTransportAdapter)¶
Bridges the file system with WebRTC components:
- Adapts the file system interfaces to WebRTC
- Provides a consistent API for file operations
- Handles serialization/deserialization of file data
Message Types¶
FuryMesh uses several message types for WebRTC communication:
- FileMetadata: Contains information about a file (size, chunks, etc.)
- FileChunk: Contains a chunk of file data
- ChunkRequest: Requests a specific chunk from a peer
- TransferStatus: Updates on transfer progress
- ErrorMessage: Communicates errors between peers
Transfer Process Flow¶
The detailed flow of a file transfer in FuryMesh:
1. Request Initiation:
    - Requesting peer initiates a file request
2. Metadata Exchange:
    - Requesting peer sends metadata request
    - Source peer responds with file metadata
    - Requesting peer creates transfer record
3. Chunk Request and Transfer:
    - Requesting peer determines which chunks to request
    - Chunks are requested based on the selection strategy
    - Source peer sends requested chunks
    - Requesting peer saves received chunks
4. Transfer Completion:
    - All chunks received and verified
    - Transfer marked as completed
    - File available for reassembly
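The flow above can be walked through end to end with an in-memory peer. This is a simplified sketch (sequential strategy, no network, hypothetical names), not the production code path.

```go
package main

import "fmt"

// sourcePeer is a hypothetical in-memory peer used to walk through
// the transfer flow without a network.
type sourcePeer struct {
	chunks map[int][]byte
}

// download runs the four stages against a source peer and returns
// the reassembled file.
func download(src *sourcePeer) []byte {
	// 1.-2. Request initiation and metadata exchange: learn chunk count.
	total := len(src.chunks)
	received := make(map[int][]byte, total)

	// 3. Chunk request and transfer (sequential strategy for brevity).
	for idx := 0; idx < total; idx++ {
		received[idx] = src.chunks[idx]
	}

	// 4. Transfer completion: all chunks present, reassemble in order.
	var file []byte
	for idx := 0; idx < total; idx++ {
		file = append(file, received[idx]...)
	}
	return file
}

func main() {
	src := &sourcePeer{chunks: map[int][]byte{0: []byte("he"), 1: []byte("llo")}}
	fmt.Println(string(download(src))) // hello
}
```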
Error Handling¶
FuryMesh implements comprehensive error handling:
Timeout Handling¶
```go
// Wait for chunk to be received or timeout
timeout := time.After(w.config.IdleTimeout)
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()

chunkWaitLoop:
	for {
		select {
		case <-ctx.Done():
			return
		case <-timeout:
			// Timeout waiting for chunk
			w.logger.Warn("Timeout waiting for chunk",
				zap.String("transferID", transfer.ID),
				zap.String("fileID", metadata.FileID),
				zap.Int("chunkIndex", chunkIndex))
			break chunkWaitLoop
		case <-ticker.C:
			// Check if chunk has been received
			_, err := w.fileManager.GetStorageManager().GetChunk(metadata.FileID, chunkIndex)
			if err == nil {
				// Chunk received
				return
			}
		}
	}
```
Retry Mechanism¶
FuryMesh implements automatic retries for failed chunk transfers:
```go
// Retry logic
for retry := 0; retry < w.config.MaxRetries; retry++ {
	// Request chunk
	err := w.requestFileChunk(peerID, fileID, chunkIndex)
	if err != nil {
		// Handle error
		continue
	}
	// Wait for chunk...
}
```
Performance Considerations¶
Chunk Selection Strategies¶
FuryMesh implements different strategies for selecting which chunks to request:
- Round Robin: Distributes chunk requests evenly among peers
- Rarest First: Requests the rarest chunks first
- Sequential: Requests chunks in sequential order
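Rarest-first can be expressed as a stable sort over an availability map. The function name and the `availability` shape (chunk index to peer count) are assumptions for this sketch.

```go
package main

import (
	"fmt"
	"sort"
)

// rarestFirst orders the needed chunks by how many peers hold each one,
// fewest holders first; ties keep their original order thanks to the
// stable sort.
func rarestFirst(needed []int, availability map[int]int) []int {
	out := append([]int(nil), needed...)
	sort.SliceStable(out, func(i, j int) bool {
		return availability[out[i]] < availability[out[j]]
	})
	return out
}

func main() {
	needed := []int{0, 1, 2, 3}
	// Chunk 1 and 3 are held by only one peer each, so they go first.
	availability := map[int]int{0: 5, 1: 1, 2: 3, 3: 1}
	fmt.Println(rarestFirst(needed, availability)) // [1 3 2 0]
}
```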
Concurrent Transfers¶
FuryMesh optimizes transfers by requesting multiple chunks simultaneously:
```go
// Create a semaphore to limit concurrent requests
sem := make(chan struct{}, w.config.MaxConcurrentChunks)

// Request chunks concurrently
for _, chunkIndex := range chunksToRequest {
	sem <- struct{}{} // Acquire semaphore
	go func(idx int) {
		defer func() { <-sem }() // Release semaphore
		// Request and process chunk
	}(chunkIndex)
}
```
Integration with File System¶
The WebRTC components integrate with the file system through the FileManager:

```go
// FileManager integrates file operations with the node
type FileManager struct {
	logger          *zap.Logger
	chunker         *file.Chunker
	storageManager  *file.StorageManager
	transferManager *file.TransferManager
	webrtcTransport *file.WebRTCTransportAdapter
	// Other fields...
}
```
Security Implementation¶
Data Channel Encryption¶
WebRTC data channels are automatically encrypted using DTLS:
```go
// WebRTC configuration
config := webrtc.Configuration{
	ICEServers:         iceServers,
	ICETransportPolicy: webrtc.ICETransportPolicyAll,
	BundlePolicy:       webrtc.BundlePolicyBalanced,
	RTCPMuxPolicy:      webrtc.RTCPMuxPolicyRequire,
}
```
Content Encryption¶
Optional content encryption is implemented using the crypto package:

```go
// Encrypt chunk data if encryption is enabled
if encryptionMgr != nil {
	encryptedData, err := encryptionMgr.EncryptData(data)
	if err != nil {
		return err
	}
	data = encryptedData
}
```
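A generic way to implement such an `EncryptData` using the Go standard library is AES-GCM with the nonce prepended to the ciphertext. This is a sketch of that common pattern, not necessarily what FuryMesh's crypto package does.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encryptChunk seals chunk data with AES-GCM and prepends the nonce
// so the receiver can decrypt without extra bookkeeping.
func encryptChunk(key, data []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, data, nil), nil
}

// decryptChunk reverses encryptChunk: split off the nonce, then open.
func decryptChunk(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	n := gcm.NonceSize()
	return gcm.Open(nil, sealed[:n], sealed[n:], nil)
}

func main() {
	key := make([]byte, 32) // demo key; use a real key in practice
	sealed, _ := encryptChunk(key, []byte("chunk data"))
	plain, _ := decryptChunk(key, sealed)
	fmt.Println(string(plain)) // chunk data
}
```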
Debugging and Monitoring¶
FuryMesh provides extensive logging and metrics for debugging:
```go
// Log transfer completion
w.logger.Info("Transfer completed",
	zap.String("file_id", transfer.FileID),
	zap.String("peer_id", transfer.PeerID),
	zap.Int("total_chunks", transfer.TotalChunks),
	zap.Int64("bytes", transfer.BytesTransferred),
	zap.Float64("rate_bps", transfer.TransferRate))
```
Configuration Reference¶
Complete configuration reference for WebRTC in FuryMesh:
```yaml
webrtc:
  # STUN servers for NAT traversal
  stun_servers:
    - "stun:stun.l.google.com:19302"
    - "stun:stun1.l.google.com:19302"
  # TURN servers for fallback relay
  turn_servers: []
  username: ""
  credential: ""
  # ICE timeout in seconds
  ice_timeout: 30
  # Maximum connection retries
  max_retries: 3

transfer:
  # Chunk size in bytes (1 MB default)
  chunk_size: 1048576
  # Maximum concurrent chunk transfers
  max_concurrent_chunks: 5
  # Retry interval for failed transfers
  retry_interval: 5s
  # Maximum retry attempts
  max_retries: 3
  # Idle timeout for stalled transfers
  idle_timeout: 30s
  # Buffer size for prefetching chunks
  buffer_size: 10
```