The only justification I can think of is the so-called ‘last metre’ effect, where only the last few devices in the chain play the decisive role.
Also, regarding Linux.
I remember one thread where a guy reported his experiments: the ALSA period size and samples per buffer played the leading role in sound quality.
The most pleasant sound came from something like 3 samples per buffer and 2 periods. The lower you can get, the better the sound.
To achieve that, he was using real-time kernel patches, patched ALSA itself, and optimized the hell out of Linux, because a regular installation cannot handle such low values out of the box.
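For reference, the two knobs he tuned (period size and number of periods) can be requested in a plain `~/.asoundrc`. This is only a sketch: the card number and values are placeholders, and values anywhere near 3 samples will be rejected by a stock kernel and driver, so expect to raise them until the device accepts the configuration.

```
# ~/.asoundrc -- illustrative sketch, not a known-working config.
# period_size = frames per hardware interrupt; periods = how many such
# chunks make up the ring buffer (buffer_size = period_size * periods).
pcm.lowlat {
    type hw
    card 0           # placeholder: pick your actual card
}

pcm.!default {
    type plug
    slave {
        pcm "lowlat"
        period_size 64   # far above the "3 samples" from the thread;
        periods 2        # 2 periods as reported
    }
}
```

You can check what the hardware actually granted with `aplay -v somefile.wav`, which prints the negotiated period and buffer sizes.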
May as well give this a try too.