
Wednesday, April 1, 2020

Central AHV Backup Proxy Appliance Deployment Using Veeam

With v10, Veeam includes several enhancements for Nutanix AHV data protection. Some of the enhancements in Veeam Backup & Replication are:
1. AHV cluster registration
2. Nutanix snapshot awareness
3. Linux FLR
4. License management
5. Instant VM recovery to VMware
6. VeeamZip
7. Centralized AHV appliance deployment

The AHV Backup Proxy v2 provides enhancements such as:
a) Native deduplication appliance support (DD Boost, Catalyst)
b) Snapshot-only jobs
c) Restore from Protection Domain and user-created snapshots (VM, file-level, disk)
d) Multi-user VAN UI
e) Email job status notification
f) Drive exclusion
g) Scheduled active full backups (no synthetic full yet)
h) Community Edition support
i) Restore any backup to AHV
j) Proxy snapshot protection job

Check out the steps below on how to add a Nutanix cluster, the Linux FLR appliance, and the AHV Proxy v2.

Step 1: Add the Nutanix cluster


Step 2:- Enter Nutanix Cluster name ( not ip address)


Step 3: Provide credentials that can connect to the Nutanix cluster
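
Before adding the cluster, you can optionally confirm that the account can reach Prism, which listens on port 9440. Below is a minimal Python sketch, assuming the requests package is installed; the cluster name and account are placeholders for your own values:

# Sketch: verify the credentials against the Prism v2 REST API.
# The cluster FQDN and account below are hypothetical examples.
import requests

CLUSTER = "ntnx-cluster.lab.local"   # replace with your cluster's DNS name
URL = f"https://{CLUSTER}:9440/PrismGateway/services/rest/v2.0/cluster"

# verify=False because a fresh cluster typically has a self-signed certificate
resp = requests.get(URL, auth=("admin", "your-password"), verify=False)
resp.raise_for_status()              # raises if authentication failed
print("Connected to cluster:", resp.json().get("name"))

If this prints the cluster name, the same credentials should work in the wizard.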


Step 4: Provide the Linux FLR appliance IP address. It is used for Linux file-level recovery


Step 5: Apply the configuration.



After you have finished adding the Nutanix cluster, you can proceed to add an AHV proxy now or later.
Step 6: If you decide to create a new proxy, select the "Deploy New Proxy" option


Step 7: Define the proxy name and size. You can define vCPU, memory, and the number of concurrent tasks.





Step 8: Provide DNS settings for the AHV proxy and define the IP address (either static or DHCP)


Step 9: Provide the AHV proxy credentials. Remember these credentials; you will need them to access the proxy from the web console.


Step 10: Specify which repositories the proxy is allowed to access


Step 11: Apply the settings. Veeam will push the installer, create a new VM, and apply the configuration. It is crucial that the entire configuration uses DNS names.


If there are no issues, Veeam will push the configuration to integrate the Nutanix cluster with the Veeam backup server.

If you encounter an issue, please refer here.

Open the AHV web console to verify that the proxy was deployed with the correct IP address that you mapped in the DNS record.








Configuration is complete. You can now click "Web Console", and the system will open a web browser to access the AHV proxy.

Hope this guide helps.

Good Luck with your deployment.

Tuesday, March 31, 2020

Veeam How to Series: v10

Previous How to Series videos:
In this post, we will share the Veeam v10 How to Series enhancements:

Video 1: v10 Overview



Video 2: Instant Recovery Engine Enhancements




Video 3: NAS



Video 4: Cloud Tier Enhancement



Video 5: Data Integration API



Remember to bookmark this page. We will share more videos as they become available.

Monday, March 30, 2020

Failed to Deploy AHV Proxy V2 From VBR Console

With v10, you can now deploy an AHV proxy from the VBR console.

Before you jump to deployment, remember to follow these prerequisites:

1. Veeam Backup & Replication v10
2. Download the AHV plug-in and install it on the Veeam backup server
3. DNS server
- configure A and PTR records for the Veeam backup server, AHV proxy, and Nutanix cluster (you can verify them with the sketch after this list)
4. Optional: set static entries in the hosts file
5. Add the Nutanix cluster using its DNS name rather than its IP address. DO NOT USE AN IP ADDRESS. [MUST]
6. Add the AHV proxy using its DNS name rather than its IP address. DO NOT USE AN IP ADDRESS. [MUST]
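
To verify prerequisite 3, here is a minimal Python sketch that checks forward (A) and reverse (PTR) resolution for each component; the host names are placeholders for your own records:

# Sketch: confirm A and PTR records agree for each component.
import socket

HOSTS = ["veeam-vbr.lab.local",      # Veeam backup server (placeholder names)
         "ahv-proxy.lab.local",      # AHV proxy
         "ntnx-cluster.lab.local"]   # Nutanix cluster

for name in HOSTS:
    ip = socket.gethostbyname(name)       # A record lookup
    ptr = socket.gethostbyaddr(ip)[0]     # PTR record lookup
    status = "OK" if ptr.lower() == name.lower() else "MISMATCH"
    print(f"{name} -> {ip} -> {ptr} [{status}]")

Any MISMATCH here is worth fixing before you start the deployment wizard.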

The most likely errors encountered when deploying an AHV proxy:

Error 1: AHV proxy service not started.
Issue: Wrong name resolution mapping.
Resolution: Try using a dynamic IP when deploying the AHV proxy. After the proxy deploys, take note of its IP address in the Nutanix web console and make the corresponding changes on the DNS server and in the hosts files.
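
For example, the hosts file entries on the Veeam backup server might look like this (the names and addresses are hypothetical; use the values from your environment):

192.168.1.50    veeam-vbr.lab.local     veeam-vbr
192.168.1.60    ahv-proxy.lab.local     ahv-proxy
192.168.1.70    ntnx-cluster.lab.local  ntnx-cluster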



Error 2: Backup REST error.
Issue: Wrong name resolution mapping for the Nutanix cluster and proxy.
Resolution: Re-check the hosts files and DNS records, or check the certificate in the Nutanix web console and make sure it is within its validity period.



The certificate's validity start date must not be after the current date. This can happen when you have just set up your first Nutanix cluster: by default, the CVM uses the UTC timezone, so the certificate start date can be a day ahead. If that's the case, change the timezone and regenerate a new certificate, or wait a day before deploying the AHV proxy from the VBR console.
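
To check the certificate dates without opening the web console, here is a minimal Python sketch; it assumes the cryptography package is installed, and the cluster name is a placeholder:

# Sketch: print the validity window of the Prism certificate on port 9440.
import ssl
from cryptography import x509

HOST = "ntnx-cluster.lab.local"   # placeholder cluster FQDN
pem = ssl.get_server_certificate((HOST, 9440))   # no validation, so self-signed is fine
cert = x509.load_pem_x509_certificate(pem.encode())
print("Not valid before:", cert.not_valid_before)   # must not be in the future
print("Not valid after :", cert.not_valid_after)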


Error 3: Timeout was reached.
Issue: Incorrect use of IP addresses.
Resolution: A duplicate IP can occur when a static address conflicts with the Nutanix network's DHCP or internal DNS, leaving the AHV proxy with multiple IP addresses. Remove the conflict so the proxy has a single, correct address.



The key point:

DO NOT USE IP ADDRESSES when adding the Nutanix cluster and AHV proxy.



Sunday, March 29, 2020

Unable to Boot Nutanix AHV VM When Running Nutanix CE 2019.11.22

During testing, we encountered an issue whereby a new VM created on Nutanix CE 2019.11.22 was unable to boot. From the console, we found that it was stuck at the BIOS.

Here is how it looks when you console into the VM:



After troubleshooting, we found that it is a bug that occurs when running on nested virtualization.

If you encounter the same issue, please follow these steps, which I found on the Nutanix forum. Here is the copy/paste workaround.


a) Add the pmu state in the svm template by following these steps:
  • Boot the CE VM
  • Log in with root / nutanix/4u
  • Navigate to /home/install/phx_iso/phoenix/svm_template/kvm

Note: If this does not appear, you need to create a dummy VM first.

  • Edit default.xml and add the pmu state value. The features section of default.xml should look like this:

  <apic/>
  <pmu state='off'/>
</features>
b)
  • Navigate to /var/cache/libvirt/qemu/capabilities/
  • There should be one XML file; in my case it's 3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml

Note: Modify both XML files if you have multiple. The second XML file will appear after you create a dummy VM.

Edit the file; at the bottom you have different machine types defined.

Remove the line:

<machine name='pc-i440fx-rhel7.3.0' alias='pc' hotplugCpus='yes' maxCpus='240'/>

Edit the line:

<machine name='pc-i440fx-rhel7.2.0' hotplugCpus='yes' maxCpus='240'/>

To the following:

<machine name='pc-i440fx-rhel7.2.0' alias='pc' hotplugCpus='yes' maxCpus='240'/>

The final config should look similar to this:

<machine name='pc-i440fx-rhel7.2.0' alias='pc' hotplugCpus='yes' maxCpus='240'/>
<machine name='rhel6.3.0' hotplugCpus='yes' maxCpus='240'/>
<machine name='rhel6.4.0' hotplugCpus='yes' maxCpus='240'/>
<machine name='rhel6.0.0' hotplugCpus='yes' maxCpus='240'/>
<machine name='pc-i440fx-rhel7.1.0' hotplugCpus='yes' maxCpus='240'/>
<machine name='pc-q35-rhel7.3.0' alias='q35' hotplugCpus='yes' maxCpus='240'/>
<machine name='rhel6.5.0' hotplugCpus='yes' maxCpus='240'/>
<machine name='rhel6.6.0' hotplugCpus='yes' maxCpus='240'/>
<machine name='rhel6.1.0' hotplugCpus='yes' maxCpus='240'/>
<machine name='rhel6.2.0' hotplugCpus='yes' maxCpus='240'/>
</qemuCaps>
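
To sanity-check the edits, a short Python sketch like this can list the machine-type lines in the cached capabilities files (the path comes from step b; this is just a convenience, not part of the official workaround):

# Sketch: list the <machine .../> definitions from the cached QEMU capabilities.
import glob, re

for path in glob.glob("/var/cache/libvirt/qemu/capabilities/*.xml"):
    print(path)
    with open(path) as f:
        for line in f:
            if re.search(r"<machine\b", line):
                print("  " + line.strip())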

Save the file and reboot the VM. After this, you can log in with install and proceed as normal.


Thanks to Matmassez, who provided this workaround. Information taken from here.