Appendix I: Original English Text

Specification of 30-3000 MHz Terrestrial Digital Audio Broadcasting System

The principal method of user access to the service components carried in the multiplex is by selecting a service. Several services may be accessible within one ensemble, and each service contains one or more service components. However, dedicated DAB data terminals may search for and select the User Application(s) they are able to process, either automatically or after user selection. The essential service component of a service is called the primary service component. Normally this would carry the audio (programme service component), but data service components can be primary as well. All other service components are optional and are called secondary service components.

The sub-channel organization defines the position and size of the sub-channels in the CIF and the error protection employed. It is coded in Extensions 1 and 14 of FIG type 0. Up to 64 sub-channels may be addressed in a multiplex using a sub-channel Identifier which takes values 0 to 63. The values are not related to the sub-channel position in the MSC.

The service organization defines the services and service components carried in the ensemble. It is coded in Extensions 2, 3, 4 and 8 of FIG type 0. Each service shall be identified by a Service Identifier which, when used in conjunction with an Extended Country Code, is unique world-wide. Each service component shall be uniquely identified within the ensemble. When a service component is transported in the MSC in Stream mode, the basic service organization information is coded in FIG 0/2. Service components carried in Packet mode require additional signalling of the sub-channel and packet address; Extension 3 is used for this purpose. Also, when service components are scrambled, the Conditional Access Organization field is signalled in Extension 3 for data in Packet mode, and in Extension 4 for data carried in Stream mode or in the FIC. Extension 8 provides information to link the service component description that is valid within the ensemble to a service component description that is valid in other ensembles.

The ensemble information contains SI and control mechanisms which are common to all services contained in the ensemble. It is specifically used to provide an alarm flag and a CIF counter (24 ms increments) for use with the management of a multiplex re-configuration.
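As an illustration of this organization, the following Python sketch models the sub-channel and service entries described above. The class and field names are assumptions chosen for readability; they do not reproduce the normative FIG bit layouts.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SubChannel:          # sub-channel organization, signalled in FIG 0/1 (and FIG 0/14 for additional FEC)
    sub_ch_id: int         # sub-channel Identifier, values 0 to 63
    start_cu: int          # position of the sub-channel in the CIF, in Capacity Units
    size_cu: int           # size of the sub-channel, in Capacity Units
    protection: str        # error protection employed

    def __post_init__(self) -> None:
        if not 0 <= self.sub_ch_id <= 63:
            raise ValueError("sub-channel Identifier must lie in the range 0 to 63")

@dataclass
class ServiceComponent:    # basic service organization, signalled in FIG 0/2 (FIG 0/3 adds the packet address)
    sub_ch_id: int         # the sub-channel carrying this component
    primary: bool          # exactly one primary component per service
    transport: str         # "stream" or "packet"

@dataclass
class Service:
    sid: int               # Service Identifier
    ecc: int               # Extended Country Code; (ECC, SId) is unique world-wide
    components: List[ServiceComponent] = field(default_factory=list)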
The ensemble information provides the required mechanisms for changing the multiplex configuration whilst maintaining continuity of services. Such a multiplex re-configuration is achieved by sending at least the relevant part of the MCI of the future multiplex configuration in advance, as well as the MCI for the current configuration. When the sub-channel organization changes, the relevant part of the MCI is that encoded in FIG 0/1 and, for sub-channels applying additional FEC for packet mode, FIG 0/14. When the service organization changes, the relevant part of the MCI is that encoded in FIG 0/2, FIG 0/3, FIG 0/4 and FIG 0/8. Accordingly, every MCI message includes a C/N flag signalling whether its information applies to the current or to the next multiplex configuration.

Service continuity requires the signalling of the exact instant of time from which a multiplex re-configuration is to be effective. The time boundary between two CIFs is used for this purpose. Every CIF is addressable by the value of the CIF counter. The occurrence change field, which comprises the lower part of the CIF count, is used to signal the instant of the multiplex re-configuration. It permits a multiplex re-configuration to be signalled up to six seconds in advance. A multiplex configuration shall remain stable for at least six seconds (250 CIFs).

NOTE: It is expected that the MCI for a new configuration will be signalled at least three times in the six-second period immediately before the change occurs.
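A short numerical sketch of this timing is given below. The modulo-250 treatment of the lower part of the CIF count is an assumption made here, consistent with the 250 CIF (six second) window quoted in the text.

CIF_DURATION_MS = 24                   # one CIF every 24 ms
WINDOW_CIFS = 250                      # 250 CIFs x 24 ms = 6000 ms = 6 s

def ms_until_reconfiguration(current_cif_lower: int, occurrence_change: int) -> int:
    """Milliseconds from the current CIF to the CIF named by the occurrence change field."""
    delta = (occurrence_change - current_cif_lower) % WINDOW_CIFS
    return delta * CIF_DURATION_MS

# A change signalled 125 CIFs ahead takes effect 3 s later; the window never exceeds 6 s.
assert ms_until_reconfiguration(100, 225) == 3000
assert WINDOW_CIFS * CIF_DURATION_MS == 6000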
A multiplex re-configuration requires careful co-ordination of the factors which affect the definition of the sub-channels. These factors include the source Audio/Data (A/D) bit rate and the convolutional encoding/decoding. The timing of changes made to any of these factors can only be expressed in terms of logical frames. However, the logical frame count is related to the CIF count (see clause 5.3), and this provides the link for co-ordinating these activities. In general, whenever a multiplex re-configuration occurs at a given CIF count n (i.e. the new configuration is valid from this time), each of the actions related to the sub-channels affected by this re-configuration shall be changed at the logical frame with the corresponding logical frame count. There is only one exception to this rule: if the number of CUs allocated to a sub-channel decreases at CIF count n, then all the corresponding changes made in that sub-channel, at the logical frame level, shall occur at CIF count (n - 15), which is fifteen 24 ms bursts in advance. This is a consequence of the time interleaving process.
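This rule can be summarized in a few lines; the helper below is a sketch only, with hypothetical parameter names.

def change_instant(n: int, old_size_cu: int, new_size_cu: int) -> int:
    """CIF count at which a sub-channel applies its change when the new
    configuration becomes valid at CIF count n."""
    if new_size_cu < old_size_cu:
        # The CU allocation decreases: apply the change fifteen 24 ms bursts
        # (15 CIFs) in advance, a consequence of the time interleaving process.
        return n - 15
    return n

assert change_instant(1000, old_size_cu=96, new_size_cu=64) == 985    # shrinking sub-channel
assert change_instant(1000, old_size_cu=64, new_size_cu=96) == 1000   # all other cases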
The coding technique for high quality audio signals uses the properties of human sound perception by exploiting the spectral and temporal masking effects of the ear. This technique allows a bit rate reduction from 768 kbit/s down to about 100 kbit/s per mono channel, while preserving the subjective quality of the digital studio signal for any critical source material (see ITU-R Recommendation BS.1284 [10]).

The input PCM audio samples are fed into the audio encoder. A filter bank creates a filtered and sub-sampled representation of the input audio signal; the filtered samples are called sub-band samples. A psychoacoustic model of the human ear should create a set of data to control the quantizer and coding. These data can differ depending on the actual implementation of the encoder; an estimation of the masking threshold can be used to obtain these quantizer control data. The quantizer and coding block shall create a set of coding symbols from the sub-band samples. The frame packing block shall assemble the actual audio bit stream from the output data of the previous block, and shall add other information, such as header information, CRC words for error detection and Programme Associated Data (PAD), which are intimately related with the coded audio signal.
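The chain of blocks described above can be pictured as the following composition. The stage names and callable signatures are illustrative assumptions, not interfaces defined by the specification.

from typing import Callable, Sequence

FilterBank  = Callable[[Sequence[float]], Sequence[Sequence[float]]]      # PCM -> sub-band samples
PsychoModel = Callable[[Sequence[float]], Sequence[float]]                # PCM -> quantizer control data (e.g. SMR per sub-band)
Allocator   = Callable[[Sequence[Sequence[float]], Sequence[float], int], Sequence[int]]
Quantizer   = Callable[[Sequence[Sequence[float]], Sequence[int]], bytes]
FramePacker = Callable[[bytes], bytes]                                    # adds header, CRC words and PAD

def encode_frame(pcm: Sequence[float], bit_budget: int,
                 filter_bank: FilterBank, psycho_model: PsychoModel,
                 allocate: Allocator, quantize: Quantizer, pack: FramePacker) -> bytes:
    """Compose the encoder blocks described above for a single DAB audio frame."""
    subband_samples = filter_bank(pcm)      # filtered and sub-sampled representation
    control = psycho_model(pcm)             # e.g. derived from an estimate of the masking threshold
    allocation = allocate(subband_samples, control, bit_budget)
    coded = quantize(subband_samples, allocation)
    return pack(coded)                      # header, CRC for error detection and PAD are added here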
For a sampling frequency of 48 kHz, the resulting audio frame corresponds to 24 ms of audio and shall comply with the Layer II format, ISO/IEC 11172-3 [3]. The audio frame shall map on to the logical frame structure in such a way that the first bit of the DAB audio frame corresponds to the first bit of a logical frame. For a sampling frequency of 24 kHz, the resulting audio frame corresponds to 48 ms of audio and shall comply with the Layer II LSF format, ISO/IEC 13818-3 [11]. The audio frame shall map on to the logical frame structure in such a way that the first bit of the DAB audio frame corresponds to the first bit of a logical frame (this may be associated with either an even or an odd logical frame count). The formatting of the DAB audio frame shall be done in such a way that the structure of the DAB audio frame conforms to the audio bit stream syntax described.
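The frame durations quoted above follow directly from the Layer II frame length of 1152 PCM samples per channel (a figure taken from ISO/IEC 11172-3 and ISO/IEC 13818-3, not from this excerpt):

SAMPLES_PER_LAYER_II_FRAME = 1152

def frame_duration_ms(sampling_frequency_hz: int) -> float:
    return 1000.0 * SAMPLES_PER_LAYER_II_FRAME / sampling_frequency_hz

assert frame_duration_ms(48_000) == 24.0   # 48 kHz: one 24 ms logical frame
assert frame_duration_ms(24_000) == 48.0   # 24 kHz LSF: spans two 24 ms logical frames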
The source encoder for the DAB system is the MPEG Audio Layer II (ISO/IEC 11172-3 [3] and ISO/IEC 13818-3 [11]) encoder, with restrictions on some parameters and some additional protection against transmission errors. The ISO/IEC 11172-3 [3] and ISO/IEC 13818-3 [11] International Standards specify only the encoded audio bit stream and the decoder, rather than the encoder. In the subsequent clauses, both normative and informative parts of the encoding technique are described. An example of one complete suitable encoder, with the corresponding flow diagram, is given in the following clauses.

A bit allocation procedure shall be applied. Different strategies for allocating the bits to the sub-band samples of the individual sub-bands are possible. A reference model of the bit allocation procedure is described in clause C.3. The principle used in this allocation procedure is minimization of the total noise-to-mask ratio over the audio frame, with the constraint that the number of bits used does not exceed the number of bits available for that DAB audio frame. The allocation procedure should consider both the output samples from the filter bank and the Signal-to-Mask Ratios from the psychoacoustic model. The procedure should assign a number of bits to each sample (or group of samples) in each sub-band, in order to meet both the bit rate and the masking requirements simultaneously. At low bit rates, when the demand derived from the masking threshold cannot be met, the allocation procedure should attempt to spread bits among the sub-bands in a psychoacoustically inoffensive manner. After determining how many bits should be distributed to each sub-band signal, the resulting number shall be used to code the sub-band samples, the ScFSI and the ScFs.
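As an illustration of this principle, the sketch below greedily gives bits to the sub-band with the worst noise-to-mask ratio until the masking demand is met or the bit budget is exhausted. It is deliberately simplified: it is not the reference model of clause C.3, and the 6 dB-per-bit step, the per-sample cost and the parameter names are assumptions.

from typing import List

def greedy_allocation(smr_db: List[float], samples_per_band: int,
                      bit_budget: int, max_bits: int = 16) -> List[int]:
    """Assign bits per sub-band so that the worst noise-to-mask ratio is driven
    down without exceeding the available bits."""
    bits = [0] * len(smr_db)
    spent = 0
    while True:
        # Noise-to-mask ratio per band, assuming roughly 6 dB of SNR per allocated bit.
        nmr = [s - 6.0 * b for s, b in zip(smr_db, bits)]
        band = max(range(len(bits)),
                   key=lambda i: nmr[i] if bits[i] < max_bits else float("-inf"))
        cost = samples_per_band                  # one extra bit for every sample in that band
        if bits[band] >= max_bits or nmr[band] <= 0.0 or spent + cost > bit_budget:
            break                                # masking met everywhere or budget exhausted
        bits[band] += 1
        spent += cost
    return bits

# Three sub-bands with decreasing signal-to-mask ratios and a small budget.
print(greedy_allocation([30.0, 12.0, -5.0], samples_per_band=36, bit_budget=300))   # [5, 2, 0]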
Only a limited number of quantizations is allowed for each sub-band. In the case of a 48 kHz sampling frequency, tables 13 and 14 indicate, for every sub-band, the number of quantization steps which shall be used to quantize the sub-band samples. Table 13 shall be used for bit rates of 56 kbit/s to 192 kbit/s in single channel mode, as well as for 112 kbit/s to 384 kbit/s in all other audio modes. The number of the lowest sub-band for which no bits are allocated, called sblimit, equals 27, and the total number of bits used for the bit allocation per audio frame is defined by the sum of nbal. If sblimit is equal to 27, the sum of nbal is equal to 88 for single channel mode, whereas the sum of nbal is equal to 176 for dual channel or stereo mode. This number is smaller if the joint stereo mode is used. Table 14 shall be used for bit rates of 32 kbit/s and 48 kbit/s in single channel mode, as well as for 64 kbit/s and 96 kbit/s in all other audio modes. In this case sblimit is equal to 8, and the total number of bits used for the bit allocation per audio frame, i.e. the sum of nbal, is equal to 26 for single channel mode, whereas it is equal to 52 for dual channel or stereo mode. This number is 40 if joint stereo mode with mode extension 00 is used.
In the case of a 24 kHz sampling frequency, table 15 indicates, for every sub-band, the number of quantization steps which shall be used to quantize the sub-band samples. Unlike the 48 kHz case, table 15 shall be used for all bit rates which are specified for MPEG-2 Audio Layer II ISO/IEC 13818-3 [11] low sampling frequency coding, in the range of 8 kbit/s to 160 kbit/s, independent of the audio mode. The number of the lowest sub-band for which no bits are allocated, called sblimit, equals 30, and the total number of bits used for the bit allocation per audio frame is defined by the sum of nbal. The sum of nbal is equal to 75 for single channel mode, whereas it is equal to 150 for dual channel or stereo mode. This number is smaller if the joint stereo mode is used.
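The sum-of-nbal figures quoted above for the three tables can be cross-checked as follows. The per-sub-band widths of the allocation (nbal) fields are not reproduced in this excerpt, so the lists below are assumptions based on the corresponding Layer II allocation tables, as is the bound of four sub-bands for joint stereo with mode extension 00.

NBAL_TABLE_13 = [4] * 11 + [3] * 12 + [2] * 4    # 48 kHz, sblimit = 27
NBAL_TABLE_14 = [4] * 2 + [3] * 6                # 48 kHz, sblimit = 8
NBAL_TABLE_15 = [4] * 4 + [3] * 7 + [2] * 19     # 24 kHz, sblimit = 30

assert sum(NBAL_TABLE_13) == 88    # single channel; 2 x 88 = 176 for dual channel or stereo
assert sum(NBAL_TABLE_14) == 26    # single channel; 2 x 26 = 52 for dual channel or stereo
assert sum(NBAL_TABLE_15) == 75    # single channel; 2 x 75 = 150 for dual channel or stereo

# Joint stereo, mode extension 00, table 14: sub-bands 0..3 are allocated per channel,
# sub-bands 4..7 once for the combined channel.
bound = 4
assert 2 * sum(NBAL_TABLE_14[:bound]) + sum(NBAL_TABLE_14[bound:]) == 40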
Each DAB audio frame contains a number of bytes which may carry Programme Associated Data (PAD). PAD is information which is synchronous to the audio and its contents may be