After reading a post from my good friend Alex Weeks at vi411.org about his opinion on the Microsoft and Novell agreement, I was reminded that I wanted to post my own take after doing some research. Like many others, I have come to the conclusion that this was a big win for Microsoft because they now gain access to the “Pike patent”. For those of you unfamiliar with this patent, it is important to note that Microsoft has arguably been in loose violation of it since the dawn of the Windows OS; now any potential problems have evaporated. In short, the Pike patent deals with overlapping windows and transparency. Originally granted to AT&T and later acquired by Novell, it is now accessible to Microsoft. They also gain access to some interesting SMP and clustering patents. My guess is this makes sense for Novell because of the huge influx of cash and the “potential” access to Microsoft technology like AD, CIFS, etc. Only time will tell, but I think that Microsoft actually gained much more from this deal; somehow they always do. Novell, the modern-day Gary Kildall?
Heavy focus on the evolution toward the virtual infrastructure. Today x86 virtualization is leading the charge, but we are moving toward a virtual infrastructure. Does this remind anyone of the “Utility Computing” discussion that the emergence of the SAN (Storage Area Network) drove? Having just returned from VMworld 2006, where VMware talked about virtualization as a revolution, and now sitting at the Gartner Data Center Conference 2006 listening to the first keynote frame much of the conference around virtualization, the VMware folks would be very proud. Another major point discussed was the driving forces behind what Gartner calls Real-Time Infrastructure (to be defined in a later post; ironically, I defined something I called JiT (Just-in-Time) Infrastructure two years ago, for whatever it’s worth). Gartner defines three driving forces behind the need to move to a Real-Time Infrastructure:
- QoS (Quality of Service)
I will explain each of these in a follow-up post, but it is not hard to see how virtualization is front and center with drivers like these.
Lastly, the Gartner analyst talked about how in the future the hypervisor will become free, and the cost will reside in resource management and orchestration. I am interested in your thoughts on this comment specifically: if, when and why?
This week I will be attending the 25th annual Gartner Data Center Summit in Las Vegas. Today I built my agenda and there are a few sessions that hold some promise. I will attempt to publish my observations and feedback as quickly as possible. I look forward to your opinions on what the Gartner analysts have to say.
I believe it is fair to say that disk I/O performance characteristics have not been the focus of VMware in the past, but it seems VMware has taken some strides to address this in VI3. The ADC0135 “Choosing and Architecting Storage for Your Environment” session was a bit basic for someone who understands disk technology, but I think that over the past few years so much emphasis has been put on server consolidation that much of the VMware community has ignored the disk I/O discussion. I don’t think this was intentional; the value prop around consolidation and test/dev was so impressive that I/O was not a primary concern, and the target audience has often been server engineering teams rather than storage engineering teams.
During the session the presenters reviewed some rudimentary topics such as SAN, NAS, iSCSI and DAS and where each is applicable, as well as technological differentiators between technologies such as Fibre Channel (FC) and ATA (Advanced Technology Attachment) (e.g., Tagged Command Queuing). As a proof point, the moderator polled the audience, about 300 strong, asking if anyone had ever heard of HIPPI (High Performance Parallel Interface), and about 3 people raised their hands. This is understandable, as the target audience for VMware has traditionally been the server engineering team and/or developers rather than the storage engineers, hence the probable lack of a detailed understanding of storage interconnects.
With VMware looking for greater adoption rates in the corporate production IT environment by leveraging new value propositions focused on business continuity, disaster recovery and a host of others, virtualized servers will demand high I/O performance characteristics from both a transaction and a bandwidth perspective. Storage farms will grow and become more sophisticated, and more attention will be paid to integrating VMware technology with complex storage technologies such as platform-based replication (e.g., EMC SRDF), snapshot technology (e.g., EMC TimeFinder) and emerging technologies like CDP (continuous data protection).
A practical example of what I believe has been a lack of education around storage and storage best practice is that many VMware users are unaware of partition offset alignment. Offset alignment is a best practice that absolutely should be followed. It is not a function or responsibility of VMware, but it is often overlooked. Engineers who grew up in the UNIX world, familiar with command strings like “sync;sync;sync”, typically align partition offsets, but I find admins who grew up in the Windows world often overlook offset alignment unless they are very savvy Exchange or SQL performance gurus. Windows users have become accustomed to partitioning using Disk Manager, from which it is not possible to align offsets; diskpar must be used to partition and align offsets.
I would be interested in some feedback: how many VMware / Windows users did not do this during their VMware configuration or Windows VM install? Be honest! If you are not using diskpar to create partitions and align offsets, it means that we need to do a better job educating.
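For readers who want to sanity-check their own layouts, here is a minimal sketch of the arithmetic behind offset alignment. The 64 KB stripe element and the sector-63 Windows default are illustrative assumptions; check your own array's stripe size.

```python
# Check whether a partition's starting offset lands on a stripe-element
# boundary. A misaligned partition can cause a single guest I/O to span
# two stripe elements, doubling the back-end work for that I/O.

SECTOR_BYTES = 512  # classic sector size

def is_aligned(start_sector: int, stripe_bytes: int = 64 * 1024) -> bool:
    """True if the partition start falls on a stripe-element boundary."""
    return (start_sector * SECTOR_BYTES) % stripe_bytes == 0

# The classic Windows MBR default starts the first partition at sector 63
# (31.5 KB in), which is not a multiple of a 64 KB stripe element:
print(is_aligned(63))    # → False (misaligned)
print(is_aligned(128))   # → True  (64 KB boundary, what diskpar can give you)
```

This is exactly why Disk Manager's default is a problem and why a tool that lets you set the starting offset explicitly matters.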
Other notable points from the session:
- In ESX 2.5.x and earlier, FC-AL was not supported by VMware; VI3 supports FC-AL.
- VI3 supports 3 outstanding tag command queues per VMDK vs. the single command tag queue per VMFS available in ESX 2.5.x and earlier. If someone else can verify this it would be great, because I have a question mark next to my notes, which means I may not have heard it correctly.
Mendel is showing a VMware Workstation prototype that supports something he called VMware log replay. Using Microsoft Paint to create a drawing, he depicts a situation where the application may crash. Next he begins a log replay, which replays the exact session as it was created live. This has far-reaching implications for development and debugging, forensics, software support, etc.
With the emergence of technologies such as this and virtual appliances, reduced deployment times, increased reliability, reduced support costs, etc. are now becoming a reality by leveraging VMware. Virtual appliances hold the promise of becoming a de facto preferred methodology of application distribution. Users who choose to deploy a virtual appliance as an alternative to traditional physical application deployment could benefit from significantly reduced deployment times, increased reliability via hardened environments optimized by the application vendor, reduced support costs, and the list goes on and on. Virtual appliances and VMware truly have the ability to revolutionize the software distribution paradigm we have all become accustomed to.
Today started with a sausage, egg and cheese breakfast burrito. Yum
Missed most of the sessions yesterday because I was tied up in meetings all day. I am thrilled to be able to put my propeller hat back on today.
Looking forward to a couple of sessions in particular:
- ADC0135- Choosing and Architecting Storage for Your Environment
- TAC9668 – VMmark: A Scalable Benchmark for Virtualized Systems
- Have not seen this yet but I am excited to see what it will provide
- TAC9745 – Virtualization Management APIs: VMware, DMTF and Xen
Enjoy the day!
Wow!!!! A representative from PG&E (Pacific Gas and Electric) just walked out on stage during the general session at VMworld 2006 to announce a program where PG&E customers will see a 300 to 600 dollar energy credit for every physical server they remove from their compute environment by leveraging virtualization. This is truly incredible! VMware is now capturing the attention of the energy community; all I can say is WOW!! Not only is virtualization a revolution on its own, but VMware is acting as a catalyst for the energy revolution. The purpose of the program is to continue PG&E’s charter of reducing global warming and our dependence on oil. This is a revolution.
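To put the announcement in perspective, here is a quick back-of-envelope sketch. The $300–$600 per-server range is from the announcement; the 50-server consolidation project is a hypothetical example.

```python
# Rough estimate of the announced PG&E energy credit for a consolidation
# project. Per-server credit range ($300-$600) is from the announcement;
# the project size below is hypothetical.

def credit_range(servers_removed: int, low: int = 300, high: int = 600):
    """Return the (minimum, maximum) total credit in dollars."""
    return servers_removed * low, servers_removed * high

lo, hi = credit_range(50)  # hypothetical: consolidate 50 physical servers
print(f"${lo:,} to ${hi:,}")  # → $15,000 to $30,000
```

And that is just the one-time credit, before counting the ongoing power and cooling savings.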
Wow! Attendance at VMworld 2006 is up from VMworld 2005. In 2005, attendance was approximately 3,600; this year there are approximately 7,000 attendees! If attendance at VMworld is indicative of the growth and adoption rate of the industry, I think the VMware pundits are correct. This is a revolution.
Just left a session entitled PAR110 “VMware Infrastructure 3 Operational Readiness: People and Process Considerations”. Maybe it is just me, but I was a bit confused during this session. Following the morning general session, where the advertised VMware ASP (average sale price) was 50K, I’m not sure how much of a need there is for ITIL (Information Technology Infrastructure Library) level processes and procedures. With that ASP, how large can the addressable market be for these services? Will finding opportunities be like searching for a needle in a haystack? I sensed similar confusion from others in the room. Who is the target audience for this? I am sure it will play very well in the Fortune 100.