AES67

AES67 is a technical standard for audio over IP and audio over Ethernet (AoE) interoperability. The standard was developed by the Audio Engineering Society and first published in September 2013. It is a layer 3 protocol suite based on existing standards and is designed to allow interoperability between various IP-based audio networking systems such as RAVENNA, Livewire, Q-LAN and Dante.

AES67
Manufacturer info
- Developer: Audio Engineering Society
- First published: September 2013[1]
Network compatibility
- Switchable: Yes
- Routable: Yes
- Ethernet data rates: Fast Ethernet, Gigabit Ethernet, 5GBASE-T, 10 Gigabit Ethernet
Audio specifications
- Minimum latency: 125 μs to 4 ms
- Maximum channels per link: 120
- Sampling rates: 44.1, 48, or 96 kHz[1]
- Bit depths: 16 or 24 bits[1]

AES67 promises interoperability between previously competing networked audio systems[2] and long-term network interoperation between systems.[3] It also provides interoperability with layer 2 technologies, like Audio Video Bridging (AVB).[4][5][6] Since its publication, AES67 has been implemented independently by several manufacturers and adopted by many others.

Overview

AES67 defines requirements for synchronizing clocks, setting QoS priorities for media traffic, and initiating media streams with standard protocols from the Internet protocol suite. AES67 also defines audio sample format and sample rate, supported number of channels, as well as IP data packet size and latency/buffering requirements.

The standard calls out several protocol options for device discovery but does not require any to be implemented. Session Initiation Protocol is used for unicast connection management. No connection management protocol is defined for multicast connections.

Synchronization

AES67 uses IEEE 1588-2008 Precision Time Protocol (PTPv2) for clock synchronisation. For standard networking equipment, AES67 defines configuration parameters for a "PTP profile for media applications", based on IEEE 1588 delay request-response sync and, optionally, peer-to-peer sync (IEEE 1588 Annexes J.3 and J.4); event messages are encapsulated in IPv4 packets over UDP transport (IEEE 1588 Annex D). Some of the default parameters are adjusted: specifically, logSyncInterval and logMinDelayReqInterval are reduced to improve accuracy and startup time. Clock Grade 2, as defined in AES11 Digital Audio Reference Signal (DARS), is signaled with clockClass.

Network equipment conforming to IEEE 1588-2008 uses default PTP profiles; for video streams, SMPTE 2059-2 PTP profile can be used.

In AVB/TSN networks, synchronization is achieved with IEEE 802.1AS profile for Time-Sensitive Applications.

The media clock is based on synchronized network time with an IEEE 1588 epoch (1 January 1970 00:00:00 TAI). Clock rates are fixed at the audio sampling frequencies of 44.1 kHz, 48 kHz and 96 kHz. RTP transport works with a fixed time offset to the network clock.
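As a sketch of this media-clock relationship, an RTP timestamp can be derived from synchronized PTP time by counting samples since the epoch and adding the stream's fixed offset, modulo the 32-bit timestamp width (the function name is illustrative, not from the standard):

```python
# Sketch: deriving an RTP timestamp from synchronized PTP network time,
# assuming the stream's fixed offset to the network clock is known.

def rtp_timestamp(ptp_time_seconds: float, sample_rate: int, offset: int = 0) -> int:
    """The media clock counts samples since the PTP epoch (1 Jan 1970 TAI);
    the RTP timestamp is that count plus a fixed offset, modulo 2**32."""
    samples_since_epoch = int(ptp_time_seconds * sample_rate)
    return (samples_since_epoch + offset) % 2**32

# Example: one second after the epoch at 48 kHz -> sample count 48000
print(rtp_timestamp(1.0, 48000))  # 48000
```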

Transport

Media data is transported in IPv4 packets, sized to avoid IP fragmentation.

Real-time Transport Protocol with the RTP Profile for Audio and Video (L24 and L16 formats) is used over UDP transport. RTP payload is limited to 1460 bytes to prevent fragmentation with the default Ethernet MTU of 1500 bytes, after subtracting the IP/UDP/RTP overhead of 20 + 8 + 12 = 40 bytes.[7] Contributing source (CSRC) identifiers and TLS encryption are not supported.
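The payload budget follows from simple arithmetic over the header sizes given above:

```python
# Sketch of the payload budget: the default Ethernet MTU minus the IPv4,
# UDP and RTP headers leaves 1460 bytes for audio samples.
MTU = 1500
IP_HEADER = 20   # IPv4 header without options
UDP_HEADER = 8
RTP_HEADER = 12  # fixed RTP header, no CSRC entries (CSRC is unsupported)

max_payload = MTU - (IP_HEADER + UDP_HEADER + RTP_HEADER)
print(max_payload)  # 1460
```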

Time synchronization, media stream delivery, and discovery protocols may use IP multicasting with IGMPv2 (optionally IGMPv3) negotiation. Each media stream is assigned a unique multicast address (in the range from 239.0.0.0 to 239.255.255.255); only one device can send to this address (many-to-many connections are not supported).

For keepalive monitoring and bandwidth allocation, devices may use RTCP report intervals, SIP session timers and OPTIONS pings, or ICMP echo requests (ping).

AES67 uses DiffServ to set QoS traffic priorities in the Differentiated Services Code Point (DSCP) field of the IP packet. Three classes should be supported at a minimum:

QoS classes and DiffServ associations

| Class name | Traffic type | Default DiffServ class (DSCP decimal value) |
|---|---|---|
| Clock | IEEE 1588-2008 time events* | EF (46) |
| Media | RTP/RTCP media streams | AF41 (34) |
| Best effort | IEEE 1588-2008 signaling, discovery and connection management | DF (0) |

\* Announce, Sync, Follow_Up, Delay_Req, Delay_Resp, Pdelay_Req, Pdelay_Resp, Pdelay_Resp_Follow_Up

A maximum delay of 250 μs may be required for time-critical applications to prevent audio dropouts. To prioritize critical media streams in a large network, applications may use additional values in the Assured Forwarding class 4 with low-drop probability (AF41), typically implemented as a weighted round-robin queue. Clock traffic is assigned to the Expedited Forwarding (EF) class, which typically implements strict-priority per-hop behavior (PHB). All other traffic is handled on a best-effort basis with Default Forwarding.
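As an illustration of the marking scheme, a sender might set these code points on its sockets via the IP TOS byte. This is a minimal sketch (the function name and class keys are illustrative, not from the standard):

```python
import socket

# Sketch: marking AES67 traffic with the DSCP values from the table above.
# The DSCP occupies the upper six bits of the IP TOS byte, so the value
# passed to IP_TOS is the DSCP shifted left by two bits.
DSCP = {"clock": 46, "media": 34, "best_effort": 0}  # EF, AF41, DF

def open_marked_socket(traffic_class: str) -> socket.socket:
    """Open a UDP socket whose outgoing packets carry the class's DSCP mark."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP[traffic_class] << 2)
    return sock

media_sock = open_marked_socket("media")  # packets marked AF41 (TOS byte 136)
```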

RTP Clock Source Signalling procedure is used to specify PTP domain and grandmaster ID for each media stream.

Audio encoding

Sample formats include 16-bit and 24-bit linear PCM at a 48 kHz sampling frequency, with optional 24-bit 96 kHz and 16-bit 44.1 kHz. Other RTP audio/video formats may be supported. Support for multiple sample frequencies is optional, and devices may enforce a global sample frequency setting.

Media packets are scheduled according to "packet time": the real-time duration of the media data carried in one packet. Packet time is negotiated by the stream source for each streaming session. Short packet times provide low latency and high transmission rates, but introduce high overhead and require high-performance equipment and links. Long packet times increase latency and require more buffering. A range from 125 μs to 4 ms is defined, though it is recommended that devices adapt to packet time changes and/or determine packet time by analyzing RTP timestamps.

Packet time, together with the sample rate and format, determines the RTP payload size. Support for a 1 ms packet time is required for all devices. Devices should support at least 1 to 8 channels per stream.[7]

Recommended packet times

| Packet time | Samples per packet (48/44.1 kHz) | Samples per packet (96 kHz) | Notes |
|---|---|---|---|
| 125 μs | 6 | 12 | Compatible with AVB Class A |
| 250 μs | 12 | 24 | High-performance low-latency operation; compatible with AVB Class B, interoperable with AVB Class A |
| 333 1⁄3 μs | 16 | 32 | Efficient low-latency operation |
| 1 ms | 48 | 96 | Required packet time for all devices |
| 4 ms | 192 | 384 | Wide area networks, networks with limited QoS capabilities, or interoperability with EBU 3326 |

  • MTU size restrictions limit a 96 kHz audio stream using 4 ms packet time to a single channel.

Maximum channels per stream

| Audio format | 125 μs | 250 μs | 333 1⁄3 μs | 1 ms | 4 ms |
|---|---|---|---|---|---|
| 16-bit 48 kHz | 120 | 60 | 45 | 15 | 3 |
| 24-bit 48 kHz | 80 | 40 | 30 | 10 | 2 |
| 24-bit 96 kHz | 40 | 20 | 15 | 5 | 1 |
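The channel limits above can be checked against the payload cap: a packet's audio payload is samples per packet × channels × bytes per sample, which must fit within the 1460-byte RTP payload limit. A minimal sketch (the function name is illustrative):

```python
MAX_PAYLOAD = 1460  # RTP payload cap for the default 1500-byte Ethernet MTU

def payload_bytes(sample_rate: int, packet_time_s: float, bit_depth: int,
                  channels: int) -> int:
    """Audio payload size in bytes for one linear-PCM RTP packet."""
    samples_per_packet = round(sample_rate * packet_time_s)
    return samples_per_packet * channels * bit_depth // 8

# 120 channels of 16-bit 48 kHz audio at 125 us packet time fit in one packet:
size = payload_bytes(48000, 0.000125, 16, 120)
print(size, size <= MAX_PAYLOAD)  # 1440 True
```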

Latency

Network latency (link offset) is the time difference between the moment audio enters the source (ingress time), marked by the RTP timestamp in the media packet, and the moment it leaves the destination (egress time). Latency depends on packet time, propagation and queuing delays, packet-processing overhead, and buffering in the destination device. The minimum latency is therefore the shortest packet time plus network forwarding time; forwarding time can be under 1 μs on a point-to-point Gigabit Ethernet link with minimum packet size, but in real-world networks latency can be twice the packet time or more.

Small buffers decrease latency but may result in audio dropouts when media data does not arrive on time. Unexpected changes in network conditions, and jitter from packet encoding and processing, may require longer buffering and therefore higher latency. Destinations are required to support a buffer of at least 3 packet times; a buffer of 20 packet times (or 20 ms, whichever is smaller) is recommended. Sources are required to keep transmission jitter below 17 packet times (or 17 ms, whichever is shorter); 1 packet time (or 1 ms, whichever is shorter) is recommended.
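The buffering rules above reduce to simple formulas; this sketch expresses them in seconds (helper names are illustrative):

```python
# Sketch of AES67 receive-buffer sizing: at least 3 packet times is
# required; 20 packet times, capped at 20 ms, is recommended.

def minimum_buffer_s(packet_time_s: float) -> float:
    """Smallest buffer a destination must support."""
    return 3 * packet_time_s

def recommended_buffer_s(packet_time_s: float) -> float:
    """Recommended buffer: 20 packet times, or 20 ms if that is smaller."""
    return min(20 * packet_time_s, 0.020)

# At a 4 ms packet time, 20 packet times would be 80 ms, so the 20 ms cap applies.
print(recommended_buffer_s(0.004))
```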

Interoperability with AVB

AES67 may transport media streams as IEEE 802.1BA AVB time-sensitive traffic Classes A and B on supported networks, with guaranteed latency of 2 ms and 50 ms respectively. Bandwidth is reserved with the Stream Reservation Protocol (SRP), which specifies the amount of traffic generated per measurement interval (125 μs for Class A, 250 μs for Class B). Multicast IP addresses have to be used, though only with a single source, as AVB networks only support Ethernet multicast destination addressing in the range from 01:00:5e:00:00:00 to 01:00:5e:7f:ff:ff.
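The Ethernet address range above comes from the standard IPv4-to-Ethernet multicast mapping: the fixed 01:00:5e prefix followed by the low 23 bits of the IP address. A minimal sketch (the helper name is illustrative):

```python
import ipaddress

# Sketch: map an IPv4 multicast address to its Ethernet multicast MAC --
# the 01:00:5e prefix plus the low 23 bits of the IP address, which is why
# the usable MAC range ends at 01:00:5e:7f:ff:ff.

def multicast_mac(ip: str) -> str:
    addr = int(ipaddress.IPv4Address(ip))
    low23 = addr & 0x7FFFFF  # only 23 bits of the IP survive the mapping
    octets = [0x01, 0x00, 0x5E,
              (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF]
    return ":".join(f"{o:02x}" for o in octets)

print(multicast_mac("239.69.1.2"))  # 01:00:5e:45:01:02
```

Because the top bit of the second IP octet is discarded, 32 different multicast IP addresses share each MAC address, which is one reason AES67 assigns each stream a unique address with a single sender.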

An SRP talker advertise message shall be mapped as follows:

Talker advertise message
StreamIDA 64-bit globally-unique ID (48-bit Ethernet MAC address of the source and 16-bit unique source stream ID).
Stream destination addressEthernet multicast destination address.
VLAN ID12-bit IEEE 802.1Q VLAN tag. Default VLAN identifier for AVB streams is 2.
MaxFrameSizeThe maximum size of the media stream packets, including the IP header but excluding Ethernet overhead.
MaxIntervalFramesMaximum number of frames a source may transmit in one measurement interval. Since allowed packet times are greater than (or equal to) AVB measurement intervals, this is always 1.
Data Frame Priority3 for Class A, 2 for Class B.
Rank1 for normal traffic, 0 for emergency traffic.

Under both IEEE 1588-2008 and IEEE 802.1AS, a PTP clock can be designated as an ordinary clock (OC), a boundary clock (BC) or a transparent clock (TC), though 802.1AS transparent clocks also have some boundary clock capabilities. A device may implement one or more of these capabilities. An OC may have as few as one port (network connection), while a TC and a BC must have two or more ports. BC and OC ports can work as a master (grandmaster) or a slave. An IEEE 1588 profile is associated with each port, and a TC can belong to multiple clock domains and profiles. These provisions make it possible to synchronize IEEE 802.1AS clocks to the IEEE 1588-2008 clocks used by AES67.

Development history

The standard was developed by the Audio Engineering Society beginning at the end of 2010.[8] It was initially published in September 2013.[9][10][11][12] A second printing, including a patent statement from Audinate, was published in March 2014. An update including clarifications and error corrections was issued in September 2015.[1]

The Media Networking Alliance was formed in October 2014 to promote adoption of AES67.[13]

In October 2014 a plugfest was held to test interoperability achieved with AES67.[14][15] A second plugfest was conducted in November 2015[16] and third in February 2017.[17]

In May 2016, the AES published a report describing synchronization interoperability between AES67 and SMPTE 2059-2.[18]

In June 2016, AES67 audio transport enhanced by AVB/TSN clock synchronisation and bandwidth reservation was demonstrated at InfoComm 2016.[19]

In September 2017, SMPTE published ST 2110, a standard for professional video over IP.[20] ST 2110-30 uses AES67 as the transport for audio accompanying the video.[21]

In December 2017 the Media Networking Alliance merged with the Alliance for IP Media Solutions (AIMS) combining efforts to promote standards-based network transport for audio and video.[22]

In April 2018 AES67-2018 was published. The principal change in this revision is addition of a protocol implementation conformance statement (PICS).[23]

The AES Standards Committee and AES67 editor, Kevin Gross, were recipients of a Technology & Engineering Emmy Award in 2020 for development of synchronized multi-channel uncompressed audio transport over IP networks.[24]

Adoption

The standard has been implemented by Lawo,[25] Axia,[26] AMX (in SVSI devices), Wheatstone,[27][28] Extron Electronics, Riedel,[29] Ross Video,[30][31] ALC NetworX,[32] Audinate,[33][34][35][36][37][38] Archwave,[39] Digigram,[40] Sonifex,[41] Yamaha,[42] QSC,[43] Neutrik, Attero Tech,[44] Merging Technologies,[45][46] Gallery SIENNA,[47] and is supported by RAVENNA-enabled devices under its AES67 Operational Profile.[48]

Shipping products

This table is intended as a resource for integration and compatibility between devices. The discovery methods supported by each device are critical for integration, since the AES67 specification does not stipulate how discovery should be done but instead offers a variety of options. AES67 also specifies both multicast and unicast operation, but many AES67 devices support only multicast.

| Vendor | Product | Description | OS platform | AES67 model | Discovery (send) | Discovery (receive) | Multicast | Unicast | Notes |
|---|---|---|---|---|---|---|---|---|---|
| Merging Technologies | Virtual Audio Device[49] | Ravenna/AES67 drivers | macOS,[50] Linux,[51] Windows | Ravenna AES67 | SAP, mDNS/RTSP | SAP, mDNS/RTSP | Y | Y | Free |
| ALC NetworX | Virtual Sound Card[52] | Ravenna/AES67 WDM driver | Windows | Ravenna AES67 | | | Y | | Free |
| ALC NetworX | RAV2SAP[53] | AES67 discovery tools | Windows | Ravenna AES67 | SAP | mDNS/RTSP | Y | | Free |
| Sienna | AES67 to NDI Gateway[47] | AES67 to NDI gateway | macOS, Linux, Windows | Native AES67 | SAP | SAP | Y | N | |
| Sienna | NDI to AES67[54] | NDI to AES67 sender | macOS, Linux | Native AES67 | SAP | SAP | Y | N | |
| Lawo | VRX4[55] | Audio mixer | Windows | Ravenna AES67 | | | Y | | |
| Hasseb | AoE[56] | Analog and optical AES67 interface | | Native AES67 | mDNS/RTSP | mDNS/RTSP | Y | Y | |
| QSC | DSP, amplifiers[57] | Various | | Q-SYS AES67 | SAP | SAP | Y | | |
| AXIA | Various[58] | Various | | Livewire+ AES67 | | | Y | Y | |
| Yamaha | Mixers[59] | Various | | Dante AES67 | SAP | SAP | Y | N | |
| Attero Tech | Endpoints[60] | Endpoints | | Attero AES67 | SAP | SAP | Y | N | |
| SoundTube Entertainment | Various[61] | Various | | Dante AES67 | | | | | |


This article uses material from the Wikipedia article AES67, which is released under the Creative Commons Attribution-ShareAlike 3.0 Unported License.