Well, back on April 14th I attended my second London VMUG meeting. I really feel part of the community now and even had a shout-out for CloudCred, so if anyone wants to join the team please get in touch. This blog has taken rather too long to put together, and I found myself writing so much it’s more like a novel now. But a blog not posted is a waste of a blog, so here is my review of the London VMUG meeting, April 2016.
The day started with two general sessions for all.
“Veeam Backup and Replication: Worst Practices” – Luca Dell’Oca, EMEA Evangelist
The first session came from Gold sponsor Veeam: Luca Dell’Oca (@dellock6) kicked off the day with a light-hearted session subtitled “A look at what works and what doesn’t work when designing & implementing a data protection solution”.
I haven’t used Veeam since my last job, but I took away several of Luca’s points, some more obvious than others:
- Don’t just install Veeam with the defaults, next, next, next (hey, we have all done it!), create a backup job and never test a restore.
- Don’t ignore the infrastructure: use the Veeam assessment tools before you install Veeam, namely the VM Configuration Assessment and VM Change Rate Estimation tools.
- After installation you should monitor capacity.
- Luca recommended the Restore Point Simulator at rps.dewin.me.
- vCenter performance is critical, so avoid any impacting processes while Veeam is running: check your SQL maintenance plans (if you’re running a Windows vCenter) and don’t run them at the same time as your backups.
- Check Windows patching schedules and don’t patch while you’re trying to back up.
- All VMs are not equal, so don’t create one job per VM: this can overload vCenter, increase memory usage on the Veeam server and cause the Veeam database to grow. By the same token, don’t create one huge job either, as that causes huge backup files, which is also bad practice for deduplication.
- The Veeam scheduler is not optimised for hundreds of jobs so don’t give it hundreds of jobs.
- Don’t chain jobs manually; let the automatic scheduler control the load, it knows best. A hung job will stop the chain, so all your subsequent backups will fail.
- If you use old, slow storage with no cache it will perform badly, so use reasonable hardware with more RAM and large caches.
- While on the subject of poor performance, don’t install on a domain controller or an Exchange server. It sounds obvious, but if Luca mentioned it, somebody must have done it!
- Use a dedicated VM for Veeam; hell, it will even run on Windows 7 / 8 / 10 if you’re worried about a server licence.
- Don’t change advanced options without fully understanding the implications (again, sounds obvious), as some advanced settings can have a negative impact. Don’t change what you don’t understand; the defaults are correct for 99% of installations.
- Luca informed us that new VMs cause baby seal deaths. Now, I’ve looked this up on the internet and I think Luca is mistaken 🙂
Next up was the plenary keynote from Simon Richardson (@SimonRichards0n), VMware Lead Solution Architect on SDDC/VSAN. Simon went through several new features and products, including:
VMware vRealize Automation 7
- Which now includes a unified blueprint that integrates with NSX.
VMware vRealize Operations 6.1 and Log Insight 3.0
- A number of improvements in usability.
VMware vRealize Business Standard 7
- This is a single pane of glass for costing and pricing across private and public clouds. I was particularly interested in the role-based showback and reporting, so you can show the business how much things really cost even if you don’t actually charge them.
VMware Site Recovery Manager 6.1
- Now includes integration with NSX (what product from VMware doesn’t now integrate with NSX? If only we all had NSX in our estates).
- Zero-downtime application mobility, so no more outages, even small ones.
VMware vCloud Air
- Now Supporting Amazon Web Services and Microsoft Azure.
- And with Hybrid Cloud Manager you can now stretch Layer 2 over the WAN, so you can vMotion between sites.
VMware Integrated OpenStack (VIO)
- So you can now run a production-grade OpenStack on VMware.
- Free if you are a vSphere Enterprise Plus customer.
“AppVolumes Beyond the Limits of Physics” – Simon Gallagher
Then it was time for #LonVMUG star Simon Gallagher (@vinf_net), who gave us an overview of App Volumes, although I think a better title would have been “this is not the business case you’re looking for”.
Simon is quite clearly working in a complex and secure environment, something I know only too well, and is building a VDI environment for several thousand users. As someone who uses Horizon 6 and ThinApps, I was quite interested in his use of App Volumes, AppStacks and Writable Volumes.
- An AppStack is an extra VMDK plus a filter driver.
- Up to 15 stacks per VM.
- The more stacks, the more redirections, so the slower it gets.
- The last stack wins, so beware version conflicts.
- Managing file versions in an AppStack can be an issue, so Simon recommends using a third-party tool like Xvolumes.
- Middleware is a great use case, e.g. Java: as an AppStack is part of the native OS, it’s easier to work with than App-V packages. As someone who regularly ThinApps Java apps with a specific version of Java, I can see this use case being very useful.
- Grouping / clustering of applications can be useful to create “packs”. A useful tip from Simon is to create a debug pack containing all the support tools you might need to investigate issues, such as Wireshark. Another quote of the day: “Debugging the rabbit hole of redirection”, and here I have been there and got the T-shirt as well.
Design challenges / Issues
- AppVolumes can replicate AppStacks but not writable Volumes.
- AppVolumes only supports a single Active Directory server, so production requires an IP load balancer.
- No understanding of AD sites (really, no AD site support? It is 2016, isn’t it?).
- Don’t back up App Volumes data; treat it like a PC, and if the data is lost you rebuild it.
- Check out Postman, a tool to repeat app configuration based on a master copy.
- Watch out for what happens when users change, i.e. logon location (in my case even changing OU can cause some issues).
- Make sure Everything Is in sync.
- Make sure all domain controllers are working.
- If you offer a writable persistent volume, everyone wants it (again, as someone who offers a persistent / non-persistent environment, I know which one I prefer 🙂).
- No Role-based access control (RBAC) for Admins.
- You can get a blue screen of death (BSOD) with McAfee filter driver.
- It’s a complex environment, and it can be very easy to confuse level 1 / 2 staff without training.
“Eliminate the Guesswork with ‘Analytics Driven Storage’” - James Smith
After lunch we had a choice to see either Nimble Storage or PernixData, so I opted for the presentation from James Smith (@james55smith), Systems Engineer at PernixData. Having been to the vendor stand, I was quite intrigued by the concept of their software; as they say, they are leading a new era of analytics-driven storage:
- Traditional SANs weren’t designed for virtualisation.
- Storage bottlenecks cause 70% of VMware performance issues.
- Virtual Machine workloads are dynamic.
- You can’t fix what you don’t understand.
- Lack of analytics makes design difficult.
They offer two products:
Pernixdata FVP (acceleration)
FVP is a software solution that brings flash performance to the rescue across servers, hyper-convergence and storage. It sits on the VMware host and can effectively turn a SAN into a flash array, making a VM perform up to 10x faster than on storage alone.
PernixData Architect (analytics)
Architect is a tool that provides system data and analytics; it will analyse your VM and storage requirements and help optimise your environment based on the results. It’s 100% hardware agnostic and will help optimise storage for applications.
One of the biggest discussions was around the importance of block sizes. This was not a shock to me, as I learnt about block sizes 20 years ago, when you needed to take block size into consideration when building NetWare servers to calculate the amount of RAM and disk you needed. So the important question is: what is a block?
- A block is a chunk of data: a single unit in a data stream.
- This unit is a read or a write from a single I/O operation.
- Block size refers to the payload size of a unit.
- So a 256K block has 64 times the payload of a 4K block.
A block size can be changed by:
- Changing Read / Write activity of a VM.
- Changes in demand of applications.
- Upgrading the operating system of a VM.
- Upgrading applications running on a VM.
- Enabling features or functions on an application.
- Changing virtual resources.
Why does it matter? Well, any of the above changes may have unintended consequences for storage performance. Large blocks require more effort, resources and time to pass across the storage, impacting the performance of VMs and infrastructure.
Flash doesn’t always solve the problem; in fact it can make it worse, as flash tends to struggle with large blocks because it’s optimised for 8K blocks.
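To put that payload difference in perspective, here is a quick back-of-the-envelope sketch (my own illustration with made-up numbers, not anything from the talk) of how much data the same number of I/Os has to move as the block size grows:

```python
# Back-of-the-envelope: payload moved per second at a fixed IOPS rate
# for different block sizes. The IOPS figure is purely hypothetical.
iops = 10_000  # assumed sustained I/O operations per second

for block_kb in (4, 8, 64, 256):
    mb_per_sec = iops * block_kb / 1024  # KB per second -> MB per second
    print(f"{block_kb:>4}K blocks: {mb_per_sec:,.0f} MB/s of payload")

# A 256K block carries 64 times the payload of a 4K block (256 / 4 = 64),
# so the same IOPS number can mean vastly more data pushed through the
# storage stack, which is where the latency impact comes from.
```

At 10,000 IOPS, moving from 4K to 256K blocks takes the payload from roughly 39 MB/s to 2,500 MB/s, which makes it obvious why a handful of large-block I/Os can drag down otherwise well-behaved small-block workloads.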
Using PernixData Architect it’s possible to draw a number of conclusions:
- Architect has shown that write I/Os are generally larger than read I/Os, affecting the performance of VMs.
- The correlation between block size and latency becomes crystal clear.
- Workloads have an ever-changing distribution of block sizes.
- Changes in one workload will impact other workloads on the same shared storage.
- A minimal amount of large-block I/O can have a negative impact on well-performing small-block I/O.
- Block sizes can have a profound impact on storage performance and consistency.
- Metrics regarding block sizes are largely invisible to Administrators.
- PernixData Architect is the only solution that allows you to easily understand block sizes and their impact on the applications and the infrastructure.
I was certainly impressed by the product offerings and may well download the trials from their website to install in my lab. There is even a free, slimmed-down version of FVP called FVP Freedom that I will also add to the list for installation at some point. In fact, if it improves the performance of my home lab, that may be a feature for Alex’s new project #OpenHomeLab (see below).
Next up we had a session from LonVMUG’s own Alex Galbraith (@alexgalbraith) titled “Home Lab Geek-Out”. This was more of a session about what kinds of labs people run:
- Labs at home
- Labs in the cloud
- Labs on real kit vs home-friendly kit
We discussed a number of issues around home labs, focusing in particular on power consumption.
However, the real purpose of Alex’s session was the kick-off of a new venture he is working on, entitled the #OpenHomeLab Project.
For more information you can follow the new Twitter account @OpenHomelab, and I would make sure you are following Alex and keeping an eye on his blog. I hope to be able to contribute to the project in some way in the future as I build up my home lab.
“Daily Challenges of vSphere Projects” – Graeme Vermeulen
To finish the day off we had a session from Graeme Vermeulen @VermeulenGraeme entitled Daily Challenges of vSphere Projects. Graeme gave us the benefit of his wisdom and experience from working on a number of VMware projects.
His key points were:
- Change the default network name.
- It’s best practice to use VLANs where possible.
- Always check network port speeds; don’t get caught out with a 100Mb NIC!
- Do things the VMware way where possible.
- Follow best practice (allowing for any budget constraints)
- Always check the compatibility matrix.
- Understand how the infrastructure connects together.
- Understand the storage.
- FC can be seen as the dinosaur of the storage world in a hyper-converged environment, so iSCSI and NFS are now more common, maybe best applied to greenfield sites.
- As long as you can justify a design decision it’s OK, but don’t blag, as that can be dangerous.
vBeers – sponsored by 10ZiG
Then it was off to the pub for a bit more socialising. I can only thank 10ZiG again for an excellent evening at the Old Bank of England pub before I headed back to Waterloo for the train home.
The dates are now out for the next five meetings, so make a date in the diary for:
- 23rd June 2016 (Election day)
- 17th November 2016 (UK VMUG The National Motorcycle Museum Birmingham)
- 19th January 2017
- 6th April 2017
- 22nd June 2017