A controversial bill that would require all new cars to be fitted with AM radios looks set to become law in the near future. Yesterday, Senator Edward Markey (D-Mass.) revealed that the “AM Radio for Every Vehicle Act” now has the support of 60 US Senators, as well as 246 co-sponsors in the House of Representatives, making its passage an almost sure thing. Should that happen, the National Highway Traffic Safety Administration would be required to ensure that all new cars sold in the US have AM radios at no extra cost.

  • tal@lemmy.today (OP) · edited · 7 months ago

    > AM operates and can have audio heard/understood at a much lower frequency than a digital signal.
    >
    > Specifically, digital radio is broadcast at around 175 MHz on the low end, while AM radio is around 1,000 kHz.

    HD Radio uses the same frequencies; that’s a selling point.

    https://en.wikipedia.org/wiki/HD_Radio

    > HD Radio (HDR) is a trademark for an in-band on-channel (IBOC) digital radio broadcast technology. HD Radio generally simulcasts an existing analog radio station in digital format with less noise and with additional text information. HD Radio is used primarily by AM and FM radio stations in the United States, U.S. Virgin Islands, Canada, Mexico and the Philippines, with a few implementations outside North America.
    >
    > The term “on channel” is a misnomer because the system actually broadcasts on the ordinarily unused channels adjacent to an existing radio station’s allocation. This leaves the original analog signal intact, allowing enabled receivers to switch between digital and analog as required.

    There’s no lower limit on the frequency that can carry a digital signal. Submarines use the VLF range for digital communication.

    Any frequency that can carry an analog signal can also carry a digital one.

    The reason you’d want a digital encoding is that it can encode the useful information more efficiently than an analog AM signal does.

    AM radio uses a really simple encoding. It’s the analog analog (heh) of PCM, which is what a simple WAV file on a PC uses.
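
    To make that PCM analogy concrete, here’s a minimal numpy sketch – purely illustrative, not any real broadcast chain, and all names and parameter choices are mine. The point is that the transmitted amplitude just tracks the audio waveform directly, the same way a PCM sample directly stores the waveform’s amplitude:

    ```python
    import numpy as np

    # Illustrative sketch: AM's "encoding" is just the audio waveform
    # riding directly on the carrier's amplitude, the way a PCM sample
    # directly stores the waveform's instantaneous amplitude.

    carrier_freq = 1_000_000                         # ~1,000 kHz, a typical AM carrier
    t = np.arange(0, 0.005, 1 / (8 * carrier_freq))  # time axis, oversampling the carrier

    audio = 0.5 * np.sin(2 * np.pi * 440 * t)        # a 440 Hz tone as the program material
    carrier = np.cos(2 * np.pi * carrier_freq * t)

    am_signal = (1.0 + audio) * carrier              # envelope = 1 + audio: that's the whole scheme

    # A crystal-radio-style receiver recovers the program by rectifying
    # the signal and low-pass filtering back down to the envelope.
    envelope = np.abs(am_signal)
    ```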

    But that encoding isn’t actually optimized for the human hearing system, which is why we don’t usually use it for most audio transmission today.

    PCM is good at encoding whether one sample is suddenly high or low. But…we can’t perceive that. Our hearing system doesn’t pick that up well.

    A lot of the lossy compression we use – JPEG, MP3, and it looks like HD Radio and the other two – represents data in the frequency domain using a discrete cosine transform (DCT) or a close variant, because a bit of information in the frequency domain buys more perceptible information than a bit of PCM does.
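
    Here’s a toy version of that frequency-domain idea in Python, using scipy’s DCT – not any real codec, just the core trick; real codecs layer psychoacoustic models, overlapping MDCT windows, quantization, and entropy coding on top:

    ```python
    import numpy as np
    from scipy.fft import dct, idct

    # Toy frequency-domain coding: transform a block of PCM samples,
    # keep only the largest coefficients, and reconstruct from those.

    rate = 8_000
    t = np.arange(1024) / rate
    block = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)

    coeffs = dct(block, norm="ortho")         # frequency-domain view of the block

    keep = 64                                 # keep the 64 strongest of 1024 coefficients
    threshold = np.sort(np.abs(coeffs))[-keep]
    coeffs[np.abs(coeffs) < threshold] = 0.0  # discard the perceptually cheap ones

    approx = idct(coeffs, norm="ortho")       # reconstruct from the survivors
    print("max reconstruction error:", np.max(np.abs(block - approx)))
    ```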

    > There’s also a hard line on digital signal interpretation. Once there’s a threshold hit for not picking up enough signal to fully interpret it, you drop straight to getting nothing interpreted.

    That’s not a requirement of a digital encoding. You can create encodings that deal poorly with interference, but you can also create encodings that deal very well with interference.

    If I LZMA-compress my data, loss of a single bit in transmission may result in multiple bits not being recoverable.

    Raw PCM is pretty resilient to a single bit error. A lost bit probably isn’t even perceptible to a human, precisely because PCM isn’t a very efficient encoding for human hearing.
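
    You can see the contrast with a quick Python sketch (the payload and flip position are arbitrary): one flipped bit typically kills an LZMA stream outright, while the same flip in raw samples changes a single value:

    ```python
    import lzma

    payload = bytes(range(256)) * 64          # stand-in for raw PCM samples

    compressed = bytearray(lzma.compress(payload))
    compressed[len(compressed) // 2] ^= 0x01  # flip one bit mid-stream

    try:
        lzma.decompress(bytes(compressed))
    except lzma.LZMAError as e:
        # A single flipped bit typically corrupts everything from that
        # point on (or trips the integrity check), so it fails outright.
        print("decompress failed:", e)

    raw = bytearray(payload)
    raw[len(raw) // 2] ^= 0x01                # the same flip in the "raw PCM"
    # Here exactly one sample is off by one step -- likely inaudible.
    print("samples changed:", sum(a != b for a, b in zip(payload, raw)))
    ```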

    But I can make an encoding arbitrarily resilient to interference by using forward error correction to provide redundant data that allows reconstruction of the original data if there is an error in transmission.

    Now, that requires more bandwidth, which may not be the tradeoff that I want to make…but using an encoding other than PCM for the audio frees up bandwidth. So if I want to, I can use the bandwidth made available to pack in more FEC data. I can come out ahead on interference resilience if the encoding is an efficient representation.
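
    The simplest possible FEC illustrates the tradeoff – a (3,1) repetition code triples the bandwidth but votes away isolated bit flips. Real systems use far more efficient codes (Reed-Solomon, convolutional, LDPC), but the principle is the same:

    ```python
    # Minimal forward error correction: a (3,1) repetition code
    # with majority voting over each group of repeats.

    def encode(bits):
        # Send every bit three times.
        return [b for bit in bits for b in (bit, bit, bit)]

    def decode(bits):
        # Majority vote over each group of three repeats.
        return [1 if sum(bits[i:i + 3]) >= 2 else 0
                for i in range(0, len(bits), 3)]

    message = [1, 0, 1, 1, 0, 0, 1, 0]
    sent = encode(message)

    sent[4] ^= 1          # interference flips one transmitted bit
    sent[13] ^= 1         # ...and another, in a different group

    assert decode(sent) == message   # both errors are voted away
    print("recovered:", decode(sent))
    ```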

    Single-bit errors aren’t the only type of error out there, and if you know something about the type of interference, you can maybe improve on that. Maybe you’re more likely to see a run of errors, so you don’t store error-correction data near the data that it corrects for. Or maybe I can spend more of my redundant data on the more-perceptually-important bits; that route permits gradual degradation. As the error rate climbs past some threshold X, you first become less likely to be able to reconstruct the less-perceptually-important bits, while the important ones survive longer.
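
    Here’s a sketch of that interleaving trick (the block dimensions are arbitrary illustrative choices): by transmitting the data in a shuffled order, a burst of errors on the wire lands on widely separated positions after de-interleaving, where per-row error correction can mop it up:

    ```python
    DEPTH, WIDTH = 4, 6   # write row-by-row, transmit column-by-column

    def interleave(symbols):
        rows = [symbols[r * WIDTH:(r + 1) * WIDTH] for r in range(DEPTH)]
        return [rows[r][c] for c in range(WIDTH) for r in range(DEPTH)]

    def deinterleave(symbols):
        cols = [symbols[c * DEPTH:(c + 1) * DEPTH] for c in range(WIDTH)]
        return [cols[c][r] for r in range(DEPTH) for c in range(WIDTH)]

    data = list(range(DEPTH * WIDTH))
    wire = interleave(data)

    # A burst wipes out 4 consecutive symbols on the wire...
    for i in range(8, 12):
        wire[i] = None

    received = deinterleave(wire)
    # ...but after de-interleaving, each damaged symbol sits in a
    # different row, so a code that corrects one erasure per row
    # recovers everything.
    print(received)
    ```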

    > The TV was able to interpret whatever signal it did manage to get, even if it wasn’t all of the signal.

    ATSC will also show a degraded TV feed – it’s not a flat threshold at which it just cuts out. You’ll start getting visual and audio artifacts, discolored squares and such.

    You definitely can make a digital protocol that’s less resilient to interference than a given analog protocol. But that’s just a matter of the tradeoffs you want to make: how much frequency spectrum you want to allocate to it; what’s important to get correct in the reproduced audio (if you want to reproduce audio for a modem rather than a human listener, encodings designed for the human are going to do poorly); what characteristics you expect the interference to have; and what kind of fidelity you want in a low-interference environment.

    Now, that doesn’t mean that a given digital protocol has made the tradeoffs that you want for a particular scenario. But it can.