choppy sound when used as an audio interface
- 13 réponses
- 8 participants
- 6 371 vues
- 8 followers
orosen
Here's my problem: the beast works flawlessly in "stand alone" mode, but as soon as I try to use it as an audio interface, nothing doing, the sound is "choppy". More precisely - and this happens with any software (from Windoze Media Player through VLC, Cubase or Reaper) - playback cuts out for a fraction of a second (the software "freezes" for a brief moment) before resuming.
Has anyone run into this phenomenon before? Better still, has anyone solved it? Does anyone have a lead?
HELP!
I should point out that I have, of course, reinstalled the latest driver.
My gear:
DELL PRECISION M2400
Processor: Intel Core 2 Duo P8600 @ 2.40 GHz
Windows Vista PRO
Thanks for your replies
[ Last edited on 30/01/2010 at 10:52:17 ]
jovinellim
Thanks again!!
Musically,
Mike
ElderKruegger
"Hyper-Threading
For an overview on how Hyper-Threading actually works, please see this Wikipedia article (external link)
So how does Hyper-Threading affect the performance in our applications?
We have found that - depending on the system environment - having Hyper-Threading (HT) enabled can lead to performance issues and spikes in the VST performance meter. At very low latencies, even dropouts may occur.
You can either download the latest version of Xcode or use the version that is included with any of the current Mac OS X installation DVDs.
Here's the procedure to access the needed setting:
1. Install Xcode.
2. Go to Developer -> Extras -> PreferencePanes on your HDD.
3. Double-click Processor.prefPane to install the "Processor" preference pane.
4. Launch System Preferences and open "Processor".
5. Uncheck "Hyper-Threading".
Some additional background information
So now, with all these terms like multithreading, multiprocessing, symmetrical multiprocessing, Hyper-Threading, ... we thought it would be a good idea to give a broad overview of the history behind them.
A few years ago, PC computing was simple: a desktop system had one processor that ran one program at a time. This simple model was the easy-to-understand world of MS-DOS* and Windows* 3.x. A user ran a program, which meant the operating system loaded it into memory and the processor was entirely given over to its execution until the program completed. If the operating system needed the processor, it had to issue an interrupt. In response to the interrupt, the program would save its current state, suspend operations, and give control to the operating system.
Over time, other ways of handling interrupts were introduced, including the jury-rigging necessary to run two programs simultaneously. On MS-DOS, this required one program to run, finish executing, and then wait semi-dormant in memory while another program executed. It would wait for a specific interrupt to bring it to life and suspend the other program. Such software was called terminate-and-stay-resident (TSR). TSR was a nightmare to support because it was trying to get the operating system to do something it was not built to do: switch between two running programs.
Multithreading
When a program was waiting for a diskette drive to be ready or a user to type some information, programmers began to wonder if the processor could be doing other work. Under MS-DOS, the answer was unequivocally no.
Instructions were executed sequentially, and if there was a pause in the thread of instructions, all downstream instructions had to wait for the pause to terminate.
To come up with a solution, software architects began writing operating systems that supported running pieces of programs, called threads. These multithreading operating systems made it possible for one thread to run while another was waiting for something to happen. Today's operating systems, such as Windows XP/Vista and Mac OS X, support multithreading. In fact, the operating systems themselves are multithreaded. Portions of them can run while other portions are stalled.
To benefit from multithreading, programs also need to be multithreaded themselves. That is, rather than being developed as a single long sequence of instructions, they are broken up into logical units whose execution is controlled by the mainline of the program. This allows, for example, Microsoft Word* to repaginate a document while the user is typing. On single processor systems, these threads are executed sequentially, not concurrently. The processor switches back and forth between the threads quickly enough that all of them appear to run simultaneously.
By switching between threads, operating systems that support multithreaded programs can improve performance, even if they are running on a uniprocessor system.
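The idea of one thread making progress while another waits can be sketched in a few lines of Python (a hypothetical illustration, not part of the quoted article; `slow_io` simply stands in for any blocking operation):

```python
import threading
import time

results = {}

def io_thread():
    # Simulates a blocking operation such as waiting for a disk
    # or for user input; the thread sleeps instead of computing.
    time.sleep(0.2)
    results["io"] = "data ready"

def worker_thread():
    # Keeps doing useful work while the I/O thread is blocked.
    results["sum"] = sum(range(100_000))

t1 = threading.Thread(target=io_thread)
t2 = threading.Thread(target=worker_thread)
t1.start()
t2.start()
t1.join()
t2.join()

print(results["io"])   # data ready
print(results["sum"])  # 4999950000
```

Even on a single core, the worker finishes long before the sleeping "I/O" thread wakes up: the scheduler simply hands the processor to whichever thread is runnable.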
In the real world, large programs that use multithreading often run many more than two threads. Programs like database engines create a new processing thread for every record request they receive. In this way, no single I/O operation blocks new requests from executing. On some servers, this approach can mean hundreds, if not thousands, of threads are running concurrently on the same machine.
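A minimal thread-per-request sketch in Python (the `handle_request` function and its stand-in record lookup are assumptions for illustration, not a real database API):

```python
import threading

def handle_request(record_id, results, lock):
    # Each request runs in its own thread, so a slow lookup
    # never blocks the requests that arrive after it.
    value = f"record-{record_id}"  # stand-in for a real database read
    with lock:
        results[record_id] = value

results = {}
lock = threading.Lock()
threads = [threading.Thread(target=handle_request, args=(i, results, lock))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 8
```

Real servers cap the thread count with a pool, since thousands of live threads carry their own scheduling cost.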
Multiprocessing
Multiprocessing systems have multiple processors running at the same time. Traditional multiprocessing systems have anywhere from 2 to about 128 processors. Beyond that number (and this upper limit keeps rising) multiprocessing systems become parallel processors. Multiprocessing systems allow different threads to run on different processors. This capability considerably accelerates program performance. Now two threads can run more or less independently of each other without requiring thread switches to get at the resources of the processor. Multiprocessor operating systems are themselves multithreaded and they too generate threads that can run on the separate processors to best advantage.
Asymmetrical and symmetrical multiprocessing
On asymmetrical systems, one or more processors are exclusively dedicated to specific tasks, such as running the operating system. The remaining processors are available for all other tasks, generally the user applications. This configuration is not optimal. The operating system processors may be running at 100% capacity, while the user-assigned processors are doing nothing.
Symmetrical multiprocessing (SMP) is an architecture that balances the processing load better: the "symmetry" refers to the fact that any thread - be it from the operating system or a user application - can run on any processor. In this way, the total computing load is spread evenly across all computing resources. Today, symmetrical multiprocessing systems are the norm, and asymmetrical designs have nearly completely disappeared.
Thread interaction has two components: how threads handle competition for the same resources, and how threads communicate among themselves.
When two threads both want access to the same resource, one of them has to wait. The resource can be a disk drive, a record in a database that another thread is writing to, or any of a myriad other features of the system. Minimizing this delay is a central design issue for hardware installations and the software they run. It is generally the largest obstacle to perfect performance scaling on multiprocessing systems, because running threads that never contend for the same resource is effectively impossible.
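Resource contention can be made concrete with a small Python sketch: four threads compete for one lock, and every acquisition that finds the lock busy is exactly the waiting described above (the names here are illustrative, not from the article):

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        # Only one thread may hold the lock at a time; the other
        # three wait here, serialized on the shared resource.
        with lock:
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000
```

Without the lock, the four read-modify-write sequences would interleave and updates could be lost; with it, correctness is bought at the price of the contention delay.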
A second factor is thread synchronization. When a program is designed in threads, there are many occasions where the threads need to interact, and the interaction points require delicate handling. For example, if one thread is preparing data for another thread to process, delays can occur when the first thread does not have data ready when the processing thread needs it. More compelling examples occur when two threads need to share a common area of memory. If both threads can write to the same area in memory, then the thread that wrote first has to check that what it wrote has not been overwritten, or it must lock out other threads until it has finished with the data. This synchronization and inter-thread management is clearly an aspect that does not benefit from having more available processing resources.
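The producer/consumer hand-off described above can be sketched with Python's thread-safe `queue.Queue`, which handles the locking and the "data not ready yet" waiting internally (a minimal illustration under that assumption, not the article's code):

```python
import queue
import threading

q = queue.Queue(maxsize=4)  # the shared hand-off area
consumed = []

def producer():
    for i in range(10):
        q.put(i)   # blocks if the consumer has fallen behind
    q.put(None)    # sentinel: no more data is coming

def consumer():
    while True:
        item = q.get()  # blocks until the producer has data ready
        if item is None:
            break
        consumed.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start()
t2.start()
t1.join()
t2.join()

print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The bounded queue replaces the hand-rolled overwrite checks and lock-outs the paragraph describes: neither thread ever reads data the other has not finished writing.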
System overhead is the thread management done by the operating system. The more processors are running, the more the operating system has to coordinate. As a result, each new processor adds incrementally to the system management work of the operating system. This means that each new processor will contribute less and less to the overall system performance."
There you go, I hope this sorts out your problems too
ElderKruegger
jovinellim
Musically,
Mike