Wringing the audio engine's neck:
- 212 replies
- 40 participants
- 29,036 views
- 56 followers
Anonymous
For that, there is another thread:
Wringing the audio engine's neck: polemics and critiques
Anonymous
Quote: [signal summing] depends on the code and the skill/knowledge of the coder
An inspired reply from GOL, the coder of Fruity:
Quote: I think it all depends on the lack of coding knowledge of the user. Because if you had any idea how to add 2 waveforms together, you wouldn't post that kind of bs.
Another "attack":
Quote: So you are saying that sample rate conversions and digital volume scaling have no audible effect on sound quality?
The reply:
Quote: volume scaling doesn't
samplerate conversion surely does, but normally you avoid playing with samples that don't match your project's samplerate (even just for the sake of CPU usage). If you don't, then you'll be surprised that the worst resampling is usually in the most pro apps.
And:
Quote: you know what, you could simply prove it. Post 2 32bit wave files, and show the world how that 3.99999999999996 sample instead of 4.0 is audible. Once done, you can keep arguing about calculators, etc, for your whole life because computers aren't gonna handle 'infinite precision' so soon.
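GOL's point about precision is easy to demonstrate. A minimal Python sketch (my illustration, not from the thread) showing just how small floating-point rounding error actually is:

```python
# Binary floating point cannot represent most decimal fractions exactly,
# so repeated additions drift by a tiny amount. Ten additions of 0.1:
total = sum([0.1] * 10)
print(total)              # 0.9999999999999999 rather than exactly 1.0
error = abs(total - 1.0)
print(error)              # about 1e-16, i.e. roughly -320 dBFS: far below audibility
```

An error on the order of 1e-16 is hundreds of dB below the noise floor of any converter, which is exactly the poster's point.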
In passing, a little dig at Pro Tools:
Quote: If you really feel the need to nag about that kind of thing, then choose pro-tools. There's just NO audio app today that still mixes in integer except pro-tools (granted, it's 24bit, but whatever it is, as long as it's integer you'll have to check for clipping).
All of this can be found at https://www.kvraudio.com/forum/viewtopic.php?t=139306&postdays=0&postorder=asc&start=0
GOL posts there under the pseudonym Tony Tony Chopper.
Also worth reading are the posts by Arke, who is a coder as well; his replies are sometimes very funny, but always on the mark.
Anonymous
Post subject: sound quality Dear Live
Quote:
"...the whole "audio engine" thread is a myth. I know that you are not going to believe this, but maybe then you should read some basic books about computer music. It will not only help understanding digital audio but also give tons of ideas about what to do with all these great tools !!!!!"
"
....2. Digital A versus Digital B....
In case 1 it is obvious that there are huge differences. An analog mixer contains some hundred transistors and each of them has a nonlinear transfer curve. The result is very complex distortion. On a good mixer some engineer did a great job adjusting the circuits in a way that this nonlinear behaviour sounds great. Also each D/A converter has an analog side and the same rules apply for it. Playing back a mix using one stereo converter will sound different from playing back each track with its own converter and then adding the resulting signal in a mixer.
We do not need to discuss here that there is a difference since this is obvious.
2. A summing bus in software is
A * a + B * b + ...
and if this is done with 32 bit or more the potential error is very low. Every piece of software using 32-bit floating-point math sounds the same in this regard. Filters are a completely different issue. There are lots of concepts and they all sound different. The same goes for other DSP processing algorithms like timestretch, sample rate conversion, etc. But the whole "audio engine" thread is a myth. I know that you are not going to believe this, but maybe then you should read some basic books about computer music. It will not only help understanding digital audio but also give tons of ideas about what to do with all these great tools !!!!! "
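Henke's claim about the summing bus can be checked numerically. A minimal sketch (my illustration, not Ableton's code) comparing a 32-bit float mix against a 64-bit reference, using NumPy:

```python
import numpy as np

# Two "tracks" of one second of noise at 44.1 kHz, as 32-bit floats.
rng = np.random.default_rng(0)
a = rng.uniform(-1.0, 1.0, 44100).astype(np.float32)
b = rng.uniform(-1.0, 1.0, 44100).astype(np.float32)

# The summing bus A*a + B*b, once in 32-bit and once in 64-bit precision.
A, B = np.float32(0.5), np.float32(0.5)
mix32 = A * a + B * b
mix64 = np.float64(A) * a.astype(np.float64) + np.float64(B) * b.astype(np.float64)

# Worst-case deviation of the 32-bit bus from the 64-bit reference.
err = float(np.max(np.abs(mix32.astype(np.float64) - mix64)))
print(err)  # around 1e-7 at most: roughly -140 dBFS, far below anything audible
```

Any two hosts doing this same weighted sum in 32-bit floats will agree to within that error, which is Henke's point.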
"A few statements about the sound quality of Live:
1. the timestretch changes the sound. this is true for every timestretch.
2. playing back a 44.1 kHz sample at any other sampling rate than 44.1 needs interpolation. this changes the sound. the HighQuality button allows for using a state-of-the-art algorithm for this task if desired. the same is true for transposing a sample.
3. playing back an unwarped 44.1 kHz sample at 44.1 kHz with no transposition and no gain change and no FX will result in an unchanged signal passed to the soundcard. this will sound 100% the same in each audio application.
4. adding two or more sources in a digital system can result in slight differences depending on whether the system uses floating point or integers. most software uses floats and i personally do not believe that anyone can actually hear the difference. Live's busses sound like any buss which does not contain EQ or compression.
5. Live's delays use the simplest possible algorithm. if you think they sound fine - cool, but they will not sound different in Reaktor or MAX/MSP or Protools.
6. Live's EQs in most cases are standard ones, nothing special but also not bad. you may or may not like the sound, it may or may not be sufficient for your work, but that's why there are VST plugins giving you every kind of EQ you want. Some EQs in Live, like the Autofilter, use more sophisticated algorithms - more CPU, but more analog-like.
conclusion: filters and compression in particular do sound very different in different DAWs, and everything else does sound the same.
Regards, Robert Henke / Ableton."
Anonymous
Quote: I'll spare you the code, but all Tunafish does for summing is simply adding a bunch of floats to each other. If you do anything else, you're not summing.
Of course, there's a lot of other stuff going on as well, both before and after the summing takes place, but none of that should color the audio in any significant way - unless the master signal starts clipping. I can imagine that different hosts handle this situation in different ways, which *could* affect the sound. But that's beside the point you're discussing here.
The situation mentioned at the end is clipping.
Anonymous
Quote: The Art of Recording:
How Are Things Panning Out? It's not just a good idea — it's the law.
By Craig Anderton | April 2005
Can you possibly imagine a more boring topic for an Art of Recording article than panning? I mean, what’s the big deal — you twist the friggin’ knob, real or virtual, and put the sound somewhere in the stereo field. Done. In fact, I’m sure some of you are thinking: “Jeez, how stupid do you think I am?”
But ignorance of the law is no excuse. The panning law, that is. Panning laws have nothing to do with laws laid down by the music arm of the Fashion Police; instead, they govern exactly what happens when a monaural sound moves from left to right in the stereo field.
That may seem cut and dried, but it’s not — especially in the world of DAWs. As a matter of fact, not knowing about panning laws can create some real issues — significant issues — if you need to move a project from one host to another. Panning laws may even account for some of the online foolishness where people argue about one host sounding “punchier” or “wimpier” than another when they loaded the same project into different hosts. It’s the same project, right? So it should sound the same, right?
Ha. Read on.
HOW IT ALL STARTED
Panning laws came about because back in the days of analog mixers, if there was a linear gain increase in one channel and a linear gain decrease in the other channel to change the stereo position, at the center position the sum of the two channels sounded louder than if the signal was panned full left or full right.
Well, that didn’t seem right, so it became common to use a logarithmic gain-change taper to drop the signal by -3dB RMS at the center. You could do this by using dual pots for panning with log/antilog tapers, but as those could be hard to find, you could do pretty much the same thing by adding tapering resistors to standard linear potentiometers. Thus, even though signals were being added together from the left and right channels, the apparent level was the same when centered because they had equal power.
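The equal-power law described above fits in a few lines of code; a minimal sketch (my own, assuming a pan position running from 0 = hard left to 1 = hard right):

```python
import math

def equal_power_pan(pos):
    """Equal-power (-3 dB center) pan law.
    pos: 0.0 = hard left, 0.5 = center, 1.0 = hard right.
    Returns (left_gain, right_gain) with gL^2 + gR^2 == 1 everywhere."""
    theta = pos * math.pi / 2.0
    return math.cos(theta), math.sin(theta)

gl, gr = equal_power_pan(0.5)
print(20 * math.log10(gl))  # about -3.01 dB in each channel at center
print(gl * gl + gr * gr)    # ~1.0: power stays constant across the pan range
```

At the center both gains are 1/sqrt(2) = 0.707, which is exactly the -3 dB drop the tapering resistors approximated in the analog world.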
But it turned out this “law” was not a standard. Some engineers preferred to drop the center level a bit more, either because they liked the signal to seem louder as it moved out of the main center zone, or because signals that “clumped up” around the center tended to “monoize” the signal. So, dropping the centered level a little further emphasized the stereo effect somewhat. Some of the people using analog consoles had their own little secret tweaks to change the panning characteristics, which became part of their “secret sauce.”
ENTER THE DAW
With virtual mixers we don’t have to worry about dual-ganged panpots, and can create any panning characteristic we want. That’s a good thing. Well, I think it’s a good thing, but it’s also added a degree of chaos that we really didn’t need.
For example, Cubase SX3 has four panning laws in the Project Setup dialog (Figure 1); you get there by going Project > Project Setup. Setting the value to 0dB eliminates constant-power panning, and gives you the old school, center-channel-louder effect. Since we tried so hard to get away from that, it’s not surprising that Cubase defaults to using the “drop the center by -3dB” classic equal power setting. But you can also choose to drop the center by -4.5dB or -6dB if you want to hype up/widen the stereo field somewhat, and make the center a bit more demure. Fair enough, it’s nice to have options.
Adobe Audition has two panning options in multitrack mode, accessed by going View > Advanced Session Properties (Figure 2). L/R Cut Logarithmic is the default, and pans to the left by reducing the right channel volume, and conversely, pans to the right by reducing the left channel volume. As the panning gets closer to hard left or right, the channel being panned to doesn’t increase past what its volume would be when centered. The Equal Power Sinusoidal option maintains constant power by amplifying hard pans to left or right by +3dB, which is conceptually similar to dropping the two channels by -3dB when the signal is centered.
Sonar takes the whole process further with six different panning options (Figure 3), which you can find by going Options > Audio. In the descriptions below, “taper” refers to the curve of the gain and doesn’t have too radical an effect on the sound. The six options are:
* 0dB center, sin/cos taper, constant power. The signal level stays at 0dB when centered, and increases by +3dB when panned left or right. Although this is the default, I don’t recommend it because of the possibility of clipping if you pan a full-level signal off center.
* 0dB center, square root taper, constant power. This is similar, but the gain change taper is different.
* -3dB center, sin/cos taper, constant power. The signal level stays at 0dB when panned right or left, but drops by -3dB in each channel when centered. This is the same as the Cubase SX default panning law.
* -3dB center, square root taper, constant power. This is similar, but the gain change taper is different.
* -6dB center, linear taper. The signal level stays at 0dB when panned left or right, but drops by -6dB when centered. This is for those who like to hype up the sides a bit at the expense of the center.
* 0dB center, balance control. The signal level stays constant whether the signal is in the left channel, right channel, or set to the middle.
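The main families in that list differ only in a scale factor and a taper; a sketch comparing three of them (the law names here are my own shorthand, not Sonar's actual option identifiers):

```python
import math

def pan_gains(pos, law):
    """pos: 0.0 = hard left, 0.5 = center, 1.0 = hard right.
    'law' names are illustrative shorthand, not Sonar's identifiers."""
    theta = pos * math.pi / 2.0
    if law == "0dB-center, sin/cos":   # constant power; center 0 dB, sides +3 dB
        return math.sqrt(2) * math.cos(theta), math.sqrt(2) * math.sin(theta)
    if law == "-3dB-center, sin/cos":  # constant power; sides 0 dB, center -3 dB
        return math.cos(theta), math.sin(theta)
    if law == "-6dB-center, linear":   # linear taper; sides 0 dB, center -6 dB
        return 1.0 - pos, pos
    raise ValueError(law)

def db(g):
    return 20.0 * math.log10(g)

for law in ("0dB-center, sin/cos", "-3dB-center, sin/cos", "-6dB-center, linear"):
    center_l, _ = pan_gains(0.5, law)
    _, hard_r = pan_gains(1.0, law)
    print(f"{law}: center {db(center_l):+.2f} dB, hard right {db(hard_r):+.2f} dB")
```

The first law boosts hard pans by +3 dB (hence the clipping risk mentioned above), the second drops the center by -3 dB instead, and the linear taper drops the center by -6 dB.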
You can actually see the results of choosing different pan options. Figure 4 shows meter settings in Sonar for two different settings. (Note that pan law is a global setting and can’t be set individually for each track; this illustration was created by grabbing two screen shots and combining them.)
The top meter shows a centered mono signal, while the second meter down shows the same signal panned full right, using the 0dB center, balance control option. The RMS level is the same when the signal is centered as when it’s panned full right.
The third meter down shows the same signal centered, but subjected to the -6dB center, linear taper law. Note how it’s playing back at -9dB (the meters are set to show RMS readings over a 24dB range), while the fourth meter down shows what happens when the same signal is panned full right: It registers exactly 6dB higher (-3dB).
SO WHICH ONE DO YOU CHOOSE?
Well, as we’ve noted, this particular law is pretty unspecific. Note that if you compare the three programs mentioned above, they all default to a different law! But here’s the rub: When you move a project from one host sequencer to another, unless the selected panning laws match, look out. I often wonder if when some people say a particular host sounds “punchier” than another, the “punchy” one boosts the level when signals are panned hard left or right, while the “unpunchy” one uses the law that drops the level of the center instead.
For example, suppose you move a Sonar project to Cubase SX. It will likely sound softer, because Cubase drops the center channel to compensate, while Sonar raises the left and right channels to compensate. Conversely, if you move a Cubase SX project to Sonar, you might have to deal with distortion issues and reduce a few levels here and there, because signals panned hard left and hard right will now be louder.
But where these laws really come into play is with surround, because here you’re talking about spatial changes between more than just two speakers. Bottom line: Be consistent in the panning law you use, and document it with the file if a project needs to be moved from one platform to another.
Personally, I go for the tried-and-true “-3dB down in the center” option. I designed analog mixers to have that response, and so I’m more than happy to continue that tradition within the virtual world of sequencer hosts. Also, this is one option that just about every host provides, whereas some of the more esoteric ones may not be supported by other hosts.
SO WHAT DOES IT ALL MEAN?
There, now aren’t you glad you read this article after all? But we can’t sign off without mentioning one more thing: The pan law you choose isn’t just a matter of convenience or compatibility, although I’ve stressed the importance of being compatible if you want a “transportable” file. The law you choose can also make a difference in the overall sound of a mix.
This is less of an issue if you use mostly stereo tracks, as panning in that case is really more of a balance control. But for many of us, “multitrack” still means recording at least some mono tracks. I tend to record a mono source (voice, guitar, bass) in mono, unless it’s important to capture the room ambience — and even then, I’m more likely to capture the main sound in mono, and use a stereo pair of room mics that go to their own tracks. And if you pan that mono track, you’re going to have to deal with the panning laws. It’s up to you to decide which law sounds best for the project you’re doing.
In any event, you now know enough about those laws to make sure you don’t get cited in contempt of recording court. Happy panning!
fritesgrec
Pov Gabou
Henke's article was also cited later on.
Fockwulf
The difference between theory and practice is that, in theory, there is no difference...
zograme