Fall 2003 IDF


The Future of Virtualization

While it is unlikely that Intel will release a VMM for public consumption, they are working with key partners to create the tools and resources needed to implement successful virtualization solutions. At IDF, Microsoft demonstrated some of the virtualization features of Longhorn, likely the fruit of its acquisition of Connectix earlier this year. One area of interest to Intel is “hardware assisted” virtualization. While it is unclear precisely what this means, it is almost certain that one aspect will be minimizing the performance penalty associated with virtualization [9].

VMMs introduce three new types of performance penalties: additional overhead, communication or sharing problems, and resource management issues. The overhead is made up of the additional memory required to virtualize the system and the extra processing needed for system calls and exception handling. There is also the issue of sharing and communication between hosted instances; two distinct instances are unlikely to be able to share data, which could cause contention similar to what is seen in cache-coherent multiprocessor systems today. Lastly, the VMM is responsible for managing the system resources and must schedule effectively across all instances. It is easy to imagine that a VMM using a naïve resource allocation algorithm could end up dedicating many cycles of computational time to an instance that is waiting for data from the disk subsystem, as the sketch below illustrates. All of these issues will need to be addressed effectively in order to maximize the performance of virtualized systems. Luckily, by the time Intel is incorporating virtualization into shipping products, there should be quite a bit of die space available to deal with these issues, along with software expertise to complement hardware solutions.
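To make the scheduling point concrete, here is a minimal sketch in C of the difference between a naïve allocator and one that accounts for I/O state. The structures and function names are invented for illustration and do not correspond to any real VMM's interface.

/* Hypothetical sketch of the scheduling issue described above: a VMM
 * picking the next guest instance to run.  All names are invented. */
#include <stddef.h>

enum vm_state { VM_RUNNABLE, VM_BLOCKED_ON_IO };

struct vm_instance {
    int           id;
    enum vm_state state;
};

/* Naive round-robin: hands the CPU to the next instance regardless of
 * whether it can make progress, wasting cycles on guests that are
 * stalled waiting for the disk subsystem. */
struct vm_instance *pick_next_naive(struct vm_instance *vms, size_t n,
                                    size_t *cursor)
{
    *cursor = (*cursor + 1) % n;
    return &vms[*cursor];
}

/* I/O-aware variant: skip instances blocked on disk I/O so that
 * runnable guests receive the cycles instead. */
struct vm_instance *pick_next_io_aware(struct vm_instance *vms, size_t n,
                                       size_t *cursor)
{
    for (size_t tried = 0; tried < n; tried++) {
        *cursor = (*cursor + 1) % n;
        if (vms[*cursor].state == VM_RUNNABLE)
            return &vms[*cursor];
    }
    return NULL;  /* every instance is waiting on I/O; idle the CPU */
}

A real VMM scheduler would of course also weigh fairness, priorities, and cache affinity, but even this simple distinction shows why resource management inside the VMM matters for performance.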

Another open issue is the degree of virtualization supported by Intel’s different architectures. Since virtualization is mainly needed for servers, it seems like it would be a natural differentiator between IA64 and IA32. If it turns out that hardware-assisted virtualization consumes quite a bit of die space, then it would be even more likely that the best implementations would be restricted to Intel’s high-end architecture. The technical question is: what makes an easily virtualizable environment? It seems logical that the more RISC-like an environment is, the easier it would be to virtualize; but is IA64 a simpler, more streamlined environment than IA32? Based on its descent from PA-WW, a RISC architecture, IA64 seems like it would be the easier platform to deal with, but it is not without its own difficulties, such as the RSE (Register Stack Engine). The other question is whether virtualization will be implemented through a transparent modification to the microarchitecture, or whether it will require modifications to the ISA. While x86 is easily modified, IA64 developers might not be so eager to embrace modifications to an already complicated ISA, and they are less tied to the underlying architecture than developers who target the desktop user population. Ultimately, there are many questions regarding Vanderpool (Intel’s virtualization technology), and very few are likely to be resolved before the release of Tanglewood and Nehalem.
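As a rough illustration of what "easily virtualizable" means in practice, the sketch below shows the classic trap-and-emulate model: a privileged operation in a guest traps to the VMM, which emulates it against that instance's virtual copy of the state. This is a conceptual sketch, not a description of Vanderpool; the enum values and structures are invented. An ISA is easy to virtualize in this sense when every instruction touching privileged state reliably causes such a trap, which is exactly where classic x86 falls short (instructions such as POPF silently behave differently outside ring 0 rather than trapping).

/* Conceptual trap-and-emulate dispatch, for illustration only. */
#include <stdint.h>

enum trap_reason { TRAP_READ_CTRL_REG, TRAP_WRITE_CTRL_REG, TRAP_IO_PORT };

struct trap_frame {
    enum trap_reason reason;
    uint64_t         value;     /* operand supplied by the guest */
};

struct vcpu {
    uint64_t virt_ctrl_reg;     /* per-instance copy of privileged state */
};

/* Called by the VMM whenever a guest's privileged instruction traps. */
void vmm_handle_trap(struct vcpu *v, struct trap_frame *tf)
{
    switch (tf->reason) {
    case TRAP_READ_CTRL_REG:
        tf->value = v->virt_ctrl_reg;   /* return the virtual copy */
        break;
    case TRAP_WRITE_CTRL_REG:
        v->virt_ctrl_reg = tf->value;   /* update virtual state only */
        break;
    case TRAP_IO_PORT:
        /* forward to the VMM's device model (omitted) */
        break;
    }
}

Whether Intel chooses to make such traps a transparent microarchitectural mechanism or an explicit ISA extension is precisely the open question raised above.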


