aii::opennebula::schema

Types

  • structure_aii_opennebula
    • structure_aii_opennebula/module
      • Required
      • Type: string
    • structure_aii_opennebula/image
      • Description: Force creating the image from scratch, which also stops/deletes the VM. VM images are not updated; if you want to resize or modify an existing image from scratch, use the remove hook first.
      • Required
      • Type: boolean
      • Default value: false
    • structure_aii_opennebula/template
      • Description: Force (re)creating the template, which also stops/deletes the VM.
      • Required
      • Type: boolean
      • Default value: false
    • structure_aii_opennebula/vm
      • Description: Instantiate the template (i.e. create the VM).
      • Required
      • Type: boolean
      • Default value: false
    • structure_aii_opennebula/onhold
      • Description: The VM is placed on hold; if false, the VM execution is scheduled as soon as possible.
      • Required
      • Type: boolean
      • Default value: true
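As a sketch, the structure_aii_opennebula fields above are typically set from the AII hooks of a node profile. The hook paths and names below (install/remove/post_reboot under /system/aii/hooks) follow common Quattor AII conventions and should be treated as illustrative:

```pan
# Illustrative Quattor profile fragment; paths and hook names are assumptions.
# OPENNEBULA_AII_MODULE_NAME is the variable documented in the Variables section.
prefix "/system/aii/hooks";

# (Re)create image and template, then instantiate the VM and run it immediately
"install/0" = dict(
    "module", OPENNEBULA_AII_MODULE_NAME,
    "image", true,     # force image creation from scratch (stops/deletes the VM)
    "template", true,  # force template (re)creation
    "vm", true,        # instantiate the template
    "onhold", false    # schedule VM execution asap instead of placing it on hold
);
```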
  • opennebula_vmtemplate_vnet
  • opennebula_rdm_disk
  • opennebula_vmtemplate_datastore
  • valid_interface_ignoremac
    • Description: Type that checks whether the network interface is available in the Quattor tree
  • opennebula_ignoremac
    • Description: Type that sets which network interfaces/MACs will not have MAC values included in ONE templates
    • opennebula_ignoremac/macaddr
      • Optional
      • Type: type_hwaddr
    • opennebula_ignoremac/interface
      • Optional
      • Type: valid_interface_ignoremac
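A minimal sketch of how opennebula_ignoremac might be filled in a profile. Whether macaddr and interface take single values or lists depends on the actual schema definition, so the shapes below are assumptions:

```pan
# Hypothetical example: omit MAC values for eth1 in the generated ONE templates
prefix "/system/opennebula/ignoremac";
"interface" = "eth1";              # valid_interface_ignoremac
"macaddr" = "aa:bb:cc:dd:ee:ff";   # type_hwaddr
```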
  • opennebula_permissions
    • Description: Type that changes the resources' owner/group permissions. By default opennebula-aii generates all the resources with oneadmin as owner/group.
      • owner: OpenNebula user id or user name
      • group: OpenNebula group id or group name
      • mode: octal notation, e.g. 0600
    • opennebula_permissions/owner
      • Optional
      • Type: string
    • opennebula_permissions/group
      • Optional
      • Type: string
    • opennebula_permissions/mode
      • Optional
      • Type: long
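The permissions type above could be used in a profile roughly as follows (the /system/opennebula path and the owner/group names are illustrative):

```pan
# Hypothetical example: hand the generated resources to a non-default owner/group
prefix "/system/opennebula/permissions";
"owner" = "oneadmin";   # OpenNebula user id or user name
"group" = "users";      # OpenNebula group id or group name
"mode" = 0600;          # octal notation (pan accepts leading-zero octal literals)
```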
  • opennebula_vmtemplate_pci
    • Description: It is possible to discover PCI devices on the hosts and assign them to Virtual Machines on a KVM host. I/O MMU and SR-IOV must be supported and enabled by the host OS and BIOS. More than one PCI option can be added to attach more than one PCI device to the VM. The device can also be specified without all the type values. PCI values must be hexadecimal (0xhex). If the PCI values are not found on any host, the VM is queued waiting for the required resources. The “onehost show <host>” command lists the PCI devices; the “vendor”, “device” and “class” values appear in the TYPE column of the PCI DEVICES section. Example row (VM ADDR TYPE NAME): “ 06:00.1 15b3:1002:0c06 MT25400 Family [ConnectX-2 Virtual Function]”, where:
      • VM: the ID of the VM using that specific device; empty if no VM is using it
      • ADDR: PCI address
      • TYPE: values describing the device, as VENDOR:DEVICE:CLASS; these values are used when selecting a PCI device to do passthrough
      • NAME: name of the PCI device
      In this case, to request this IB device we should set vendor: 0x15b3, device: 0x1002 and class: 0x0c06. For more info: http://docs.opennebula.org/5.0/deployment/open_cloud_host_setup/pci_passthrough.html
    • opennebula_vmtemplate_pci/vendor
      • Description: first value from onehost TYPE section
      • Optional
      • Type: long
    • opennebula_vmtemplate_pci/device
      • Description: second value from onehost TYPE section
      • Optional
      • Type: long
    • opennebula_vmtemplate_pci/class
      • Description: third value from onehost TYPE section
      • Optional
      • Type: long
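The IB example from the description above could be expressed in a profile roughly as follows (the /system/opennebula/pci path is an assumption; pan accepts hexadecimal long literals directly):

```pan
# Request the MT25400 ConnectX-2 Virtual Function from the example above
prefix "/system/opennebula/pci/0";
"vendor" = 0x15b3;   # first value of the onehost TYPE column
"device" = 0x1002;   # second value
"class" = 0x0c06;    # third value
```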
  • opennebula_vmtemplate_vmgroup
    • Description: Type that sets VM Groups and Roles for a specific VM. VMGroups are placed by dynamically generating the requirements (SCHED_REQUIREMENTS) of each VM and re-evaluating these expressions. Moreover, the following is also considered:
      • The scheduler will look for a host with enough capacity for an affined set of VMs. If there is no such host, all the affined VMs will remain pending.
      • If new VMs are added to an affined role, it will pick one of the hosts where the VMs are running. By default, all should be running on the same host, but if you manually migrate a VM to another host it will be considered feasible for the role.
      • The scheduler does not have any synchronization point with the state of the VM group; it will start scheduling pending VMs as soon as they show up.
      • Re-scheduling of VMs works as for any other VM: it will look for a different host considering the placement constraints.
      For more info: https://docs.opennebula.org/5.8/advanced_components/application_flow_and_auto-scaling/vmgroups.html
    • opennebula_vmtemplate_vmgroup/vmgroup_name
      • Required
      • Type: string
    • opennebula_vmtemplate_vmgroup/role
      • Required
      • Type: string
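A minimal sketch of attaching a VM to an existing VM Group; the /system/opennebula path and the group/role names are illustrative:

```pan
# Hypothetical example: join this VM to an existing VM Group under a given role
prefix "/system/opennebula/vmgroup";
"vmgroup_name" = "web-tier";
"role" = "frontend";
```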
  • opennebula_placements
    • Description: Type that sets placement constraints and preferences for the VM, valid for all hosts. More info: http://docs.opennebula.org/5.0/operation/references/template.html#placement-section
    • opennebula_placements/sched_requirements
      • Description: Boolean expression that rules out provisioning hosts from list of machines suitable to run this VM.
      • Optional
      • Type: string
    • opennebula_placements/sched_rank
      • Description: This field sets which attribute will be used to sort the suitable hosts for this VM. Basically, it defines which hosts are more suitable than others.
      • Optional
      • Type: string
    • opennebula_placements/sched_ds_requirements
      • Description: Boolean expression that rules out entries from the pool of datastores suitable to run this VM.
      • Optional
      • Type: string
    • opennebula_placements/sched_ds_rank
      • Description: States which attribute will be used to sort the suitable datastores for this VM. Basically, it defines which datastores are more suitable than others.
      • Optional
      • Type: string
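The placement fields above map to ONE scheduler expressions. A sketch under the assumption that they live below /system/opennebula/placements; the expressions themselves follow the OpenNebula template placement syntax:

```pan
# Hypothetical example: pin the VM to a cluster and prefer hosts/datastores
# with the most free resources
prefix "/system/opennebula/placements";
"sched_requirements" = 'CLUSTER_ID="100"';  # rule out hosts outside cluster 100
"sched_rank" = "FREE_CPU";                  # prefer hosts with more free CPU
"sched_ds_rank" = "FREE_MB";                # prefer datastores with more free space
```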
  • opennebula_vmtemplate
    • opennebula_vmtemplate/vnet
      • Description: Set the VNETs opennebula/vnet (bridges) required by each VM network interface
      • Required
      • Type: opennebula_vmtemplate_vnet
    • opennebula_vmtemplate/datastore
      • Description: Set the OpenNebula opennebula/datastore name for each vdX disk
      • Required
      • Type: opennebula_vmtemplate_datastore
    • opennebula_vmtemplate/diskrdmpath
    • opennebula_vmtemplate/ignoremac
      • Description: Set the ignoremac tree to avoid including MAC values within AR/VM templates
      • Optional
      • Type: opennebula_ignoremac
    • opennebula_vmtemplate/virtio_queues
    • opennebula_vmtemplate/graphics
      • Description: Set graphics to export VM graphical display (VNC is used by default)
      • Required
      • Type: string
      • Default value: VNC
    • opennebula_vmtemplate/diskcache
      • Description: Select the cache mechanism for your disks (set to none by default)
      • Optional
      • Type: string
    • opennebula_vmtemplate/diskdriver
      • Description: Specific image mapping driver. qcow2 is not supported by Ceph storage backends.
      • Optional
      • Type: string
    • opennebula_vmtemplate/permissions
      • Optional
      • Type: opennebula_permissions
    • opennebula_vmtemplate/pci
      • Description: Set pci list values to enable PCI Passthrough. PCI passthrough section is also generated based on /hardware/cards/<card_type>/<interface>/pci values.
      • Optional
      • Type: opennebula_vmtemplate_pci
    • opennebula_vmtemplate/labels
      • Description: labels is a list of strings used to group the VMs under a given name and filter them in the admin and cloud views. It is also possible to include sub-labels in the list using a slash: list(“Name”, “Name/SubName”). This feature is available since OpenNebula 5.x; below this version the change has no effect.
      • Optional
      • Type: string
    • opennebula_vmtemplate/placements
      • Optional
      • Type: opennebula_placements
    • opennebula_vmtemplate/memorybacking
      • Description: The optional memoryBacking element may contain several elements that influence how virtual memory pages are backed by host pages:
        • hugepages: tells the hypervisor that the guest should have its memory allocated using hugepages instead of the normal native page size.
        • nosharepages: instructs the hypervisor to disable shared pages (memory merge, KSM) for this domain.
        • locked: when set and supported by the hypervisor, memory pages belonging to the domain will be locked in the host's memory and the host will not be allowed to swap them out, which might be required for some workloads such as real-time. For QEMU/KVM guests, the memory used by the QEMU process itself will be locked too; unlike guest memory, this is an amount libvirt has no way of figuring out in advance, so it has to remove the limit on locked memory altogether. Enabling this option therefore opens up a potential security risk: the host will be unable to reclaim the locked memory back from the guest when it is running out of memory, which means a malicious guest allocating large amounts of locked memory could cause a denial-of-service attack on the host.
      • Optional
      • Type: string
    • opennebula_vmtemplate/vmgroup
      • Description: Request an existing VM Group and role. A VM Group defines a set of related VMs and associated placement constraints for the VMs in the group. A VM Group allows you to place certain VMs (or VM classes, roles) together or separately. VMGroups help you optimize the performance (e.g. not placing all the CPU-bound VMs on the same host) or improve the fault tolerance (e.g. not placing all your front-ends on the same host) of your multi-VM applications.
      • Optional
      • Type: opennebula_vmtemplate_vmgroup
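Putting several opennebula_vmtemplate fields together, a node profile might look roughly like this. The /system/opennebula path and the VNET/datastore names are made up for illustration:

```pan
# Illustrative opennebula_vmtemplate usage; resource names are hypothetical
prefix "/system/opennebula";
"vnet/eth0" = "altaria.os";          # VNET (bridge) backing the first NIC
"datastore/vda" = "ceph.altaria";    # datastore name for the vda disk
"diskcache" = "writeback";           # optional disk cache mechanism
"labels" = list("quattor", "quattor/custom");  # grouping labels (ONE 5.x+)
```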

Variables

  • OPENNEBULA_AII_MODULE_NAME

Functions

  • validate_aii_opennebula_hooks
    • Description: Function to validate all aii_opennebula hooks
  • is_consistent_memorybacking