vick1000 wrote:
When I say I don't need proof, I am stating it should be obvious. Do I need to prove to you the 1+1=2? If you do indeed have a background in physics and audio engineering, which it seems everyone on the internets does in these situations, then it should be clear that once an analog signal is converted to binary, it is no longer the same signal.

Yes, I really do have a background in it.
Seriously, with 40-plus years of theoretical and practical converter designs under our belts, and the VAST MAJORITY of audio signals on Earth going through the digital domain, I have to assert that it is not obvious that the signal is lost in conversion. In fact, every textbook I am aware of states exactly the opposite, and gives the mathematical and physical reasons why. As I see it, you are claiming that 1+1=11 and calling it obvious.
The act of sampling, when done within well-understood mathematical boundaries such as the Nyquist criterion, only adds noise to the signal. When sampling with an oversampling delta-sigma system, the quantization noise can be shaped into frequency bands well above the audio band, where it is easily removed by a filter that doesn't affect the audio signal.
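To put a number on that, here is a minimal NumPy sketch (the tone frequency, sample rate, and sample count are all just illustrative): sample a 1 kHz sine at 48 kHz, then reconstruct it *between* the sample points with the Whittaker-Shannon sinc sum. The signal between the samples is fully recoverable; the tiny residual error comes from truncating the sum, not from sampling itself.

```python
import numpy as np

# Sample a 1 kHz sine at 48 kHz, well above the Nyquist rate.
fs = 48_000                        # sample rate, Hz (illustrative)
f0 = 1_000                         # test tone, Hz
n = np.arange(4096)                # sample indices
samples = np.sin(2 * np.pi * f0 * n / fs)

def reconstruct(t):
    """Whittaker-Shannon reconstruction at time t (seconds)."""
    return float(np.sum(samples * np.sinc(fs * t - n)))

# Evaluate exactly halfway between two samples, far from the edges.
t = 2000.5 / fs
error = abs(reconstruct(t) - np.sin(2 * np.pi * f0 * t))
print(error)  # small; set by truncating the sinc sum, not by sampling
```

At the sample instants themselves the reconstruction is exact by construction (sinc hits 1 at zero and 0 at every other integer); in between, the error shrinks as you keep more terms of the sum.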
Further, with a 20-bit sample, the quantization noise (it's noise, not signal loss, because it is easy enough to treat statistically) is about 1 part per million, or -120 dB. If you look at the analog specs of most audio systems, the analog noise floor is on the order of 80-90 dB below full scale. So the "digital noise", if you will, is far below the analog noise.
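That figure is easy to check numerically. This is a sketch, with an ideal quantizer and an illustrative test tone, measuring the quantization noise of a full-scale sine rounded to 20 bits and comparing it to the standard 6.02N + 1.76 dB rule of thumb:

```python
import numpy as np

bits = 20
levels = 2 ** bits
t = np.arange(1_000_000)
signal = np.sin(2 * np.pi * 997 * t / 1_000_000)  # ~full-scale sine

# Ideal mid-tread quantizer: round to the nearest of 2^20 levels.
scale = levels / 2 - 1
quantized = np.round(signal * scale) / scale
noise = quantized - signal

snr_db = 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))
print(snr_db)               # measured: about 122 dB
print(6.02 * bits + 1.76)   # rule of thumb: about 122.2 dB
```

The measured signal-to-quantization-noise ratio lands right on the textbook value, which is exactly the point: the error behaves like a well-characterized noise floor, not like a missing chunk of signal.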
I understand that it is quite difficult to intuitively grasp the idea of sampling in a way that doesn't disconnect it from the "real world". Nonetheless, it works.
vick1000 wrote:
To anyone that has played extensively in live and studio environments, it is painfully obvious. The connection between your hands and the speaker is...severed, for lack of a better term, when there is a digital conversion in the chain. And I don't mean just with clipping, which are a seperate issue from tube dynamics and compression.

In my experience this is usually due to poor amplifier loop design and poor system design (impedance mismatch, level setting, etc.). When tests are done with an A/B/X box, mostly it's shown that people can't tell the difference. YMMV.
vick1000 wrote:
When I said "tone" is subjective, I mean that one persons idea of a good "tone" is usually completely different from someone else. While you may find the result of a digital device in your chain to be the same quality, someone else will notice something in your "tone" that they cannot live with in theirs. Your play style and hearing are most certainly different from everyone elses. So how can I describe to you my idea of "tone"? It would be like trying to describe the color blue to a blind man.

Tone is gain and phase over frequency. Dynamics is linearity over signal size plus noise, distortion, aliasing, etc., and can include the effects of previous and future signals for systems that are not time-invariant. Not "did your favorite tone come out?" More along the lines of "what, if anything, was changed by the system?"
I know the definition in the audio world of the word "tone", which is a specific output of a single frequency, such as a tone generator, a touch-tone phone, etc.
vick1000 wrote:
Let me ask you this. If there is no difference in digital tonal conversion to all analog, why have they not been able to reproduce the clipping? The answer is simple, digital emulation can not reproduce a smooth sine wave. It may look smooth when looking at the large bandwidth scale, but zoom in and you see the steps. Thos steps are always there in ever emulated signal, even a pass through. So you take something that was once a smooth signal wave, chop it up, and put it back together in pieces with a little missing in between each piece. That's is the flaw of digital anything.

This is simply a terrible misunderstanding, still based on a false example (the staircase-shaped sine wave). Sampling actually works incredibly well for sine waves. A pure signal like that does little to tax the nonlinearities in the system and can give better dynamic-range results in testing than are actually possible with a real signal source. Here is some info about reconstruction filters and sampling.
http://www.jiscdigitalmedia.ac.uk/guide ... ital-audio
As for making an amp simulator work as well as a real tube amp, it has exactly 0% to do with chopping up a sine wave. It has to do with dynamics and technology. It takes a pretty hard-core mathematical model to cover the nonlinear behavior of a clipping system with as much going on as a multi-stage guitar amp. To reproduce that with low enough latency that you can't tell it's delayed, you need a VERY fast processor. Processors are getting faster and faster. Compare Space Invaders to Call of Duty. Did physics change? No, the technology got better. As technology improves, make no mistake, there will be a tube amp emulator that cannot be discerned from the real thing. The world's best chess player is a computer. Virtual reality is going to be a thing.
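For a flavor of one ingredient of such a model, here is a toy sketch: a memoryless tanh waveshaper standing in for a single tube stage's soft clipping. The `drive` parameter is illustrative, and a real simulator layers filtering, sag, and inter-stage coupling on top of many stages like this.

```python
import numpy as np

def soft_clip(x, drive=4.0):
    """Toy soft-clipping stage: smooth, saturating, no hard corners."""
    return np.tanh(drive * x) / np.tanh(drive)

x = np.linspace(-1.0, 1.0, 5)
print(soft_clip(x))  # gently compresses toward +/-1 as drive increases
```

The point of the sketch: the "warmth" of tube clipping is a question of what nonlinear curve (and memory) the model implements, not of whether the waveform was sampled.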
Inside the processor, it's just math. If I want to chop up a signal 93 ways and I can come up with a matrix that puts it back together, I'll do it and nobody will know the difference. We have multi-core computers that chop up software processing already, and nobody can tell the difference except that it's faster.
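A toy sketch of that point (the 93 just echoes the number above): chop a signal into 93 blocks, hand each block off separately, then reassemble. The result is bit-identical to the input; nothing goes missing between the pieces.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(93 * 1024)    # stand-in for any audio buffer

blocks = np.array_split(signal, 93)        # chop it up 93 ways
rebuilt = np.concatenate(blocks)           # put it back together

print(np.array_equal(signal, rebuilt))     # bit-exact reassembly
```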
vick1000 wrote:
As far as latency, it's irrelevant when we are discussing a signal from point A to point B, A bieng my hands, B bieng my ears. If a latent part of the system effects the whole, it's slower than the rest of the system. The principle applies in the same way as with any processor, instructions per cyle and cycle speed combine with latency for the overall outptu capacity. It does not matter how fast a CPU is, if yu are waiting for it to perform (or it is waiting for data), the end result is the same, slower or delayed output, resulting from a bottleneck.

It's odd that you discount latency when that is the actual disconnect: time delay. Not a bottleneck. Not a slowdown. A constant delay. Try to do too much (say, a perfect amp simulation) and you can feel and hear the delay; get it below about 10 ms and you mostly can't. I wrote earlier that sampling only adds noise. In truth, it adds noise and latency.
I would totally understand if you told me that you can hear a 5-millisecond delay and it makes you feel disconnected. But this stuff about chopped-up sine waves is a non-starter.
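The arithmetic behind that delay is simple: buffered samples divided by sample rate, plus a small fixed converter delay. A quick sketch with illustrative buffer sizes (not tied to any particular interface) shows why a few hundred samples of buffering at 48 kHz sits right around that ~10 ms borderline:

```python
# Round-trip buffering latency: samples buffered / sample rate.
sample_rate = 48_000  # Hz (illustrative)

for buffer_samples in (64, 128, 256, 512):
    latency_ms = 1000 * buffer_samples / sample_rate
    print(f"{buffer_samples:4d} samples -> {latency_ms:.2f} ms")
```

So the fix for the "disconnected" feeling is smaller buffers and faster processing, not abandoning sampling.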
As a bit of fun, this is what an audio converter actually does to a signal...
Not exactly a staircase.