Skip to main content

Monitorización de eventos

Esta funcionalidad realiza una copia de los eventos presentes en el vCenter de VMware® a la lista de eventos de Pandora FMS.

Estos eventos pasan a formar parte del flujo de eventos normales de Pandora FMS y quedan asociados de forma automática al Agente que representa el vCenter del que provienen (si el Agente existe en el momento de creación del evento).

image-1684314855677.png

En el proceso de volcado de eventos se respeta la información y severidad que VMware® indica en la creación del evento, de tal forma que los eventos con un nivel de severidad crítico, advertencia o informativo conservarán estos niveles en Pandora FMS. La siguiente imagen muestra un ejemplo de la información detallada de un evento volcado de VMware a Pandora FMS.

image-1684314893073.png

Con todos los eventos presentes en Pandora FMS podrá realizar todas las acciones disponibles para la gestión de eventos, como por ejemplo: creación de alertas, configuración de filtros, apertura de incidencias, etc.

Tabla de eventos

EventoSeveridadTipo de eventoGrupo
An account was created on host {host.name}InformationalSystemAll
Account {account} was removed on host {host.name}InformationalSystemAll
An account was updated on host {host.name}InformationalSystemAll
The default password for the root user on the host {host.name} has not been changedInformationalSystemAll
Alarm '{alarm.name}' on {entity.name} triggered an actionInformationalSystemAll
Created alarm '{alarm.name}' on {entity.name}InformationalSystemAll
Alarm '{alarm.name}' on {entity.name} sent email to {to}InformationalSystemAll
Alarm '{alarm.name}' on {entity.name} cannot send email to {to}CriticalSystemAll
Reconfigured alarm '{alarm.name}' on {entity.name}InformationalSystemAll
Removed alarm '{alarm.name}' on {entity.name}InformationalSystemAll
Alarm '{alarm.name}' on {entity.name} ran script {script}InformationalSystemAll
Alarm '{alarm.name}' on {entity.name} did not complete script: {reason.msg}CriticalSystemAll
Alarm '{alarm.name}': an SNMP trap for entity {entity.name} was sentInformationalSystemAll
Alarm '{alarm.name}' on entity {entity.name} did not send SNMP trap: {reason.msg}CriticalSystemAll
Alarm '{alarm.name}' on {entity.name} changed from {[email protected]} to {[email protected]}InformationalSystemAll
All running virtual machines are licensedInformationalSystemAll
User cannot logon since the user is already logged onInformationalSystemAll
Cannot login {userName}@{ipAddress}CriticalSystemAll
The operation performed on host {host.name} in {datacenter.name} was canceledInformationalSystemAll
Changed ownership of file name {filename} from {oldOwner} to {newOwner} on {host.name} in {datacenter.name}.InformationalSystemAll
Cannot change ownership of file name {filename} from {owner} to {attemptedOwner} on {host.name} in {datacenter.name}.CriticalSystemAll
Checked cluster for complianceInformationalSystemAll
Created cluster {computeResource.name} in {datacenter.name}InformationalSystemAll
Removed cluster {computeResource.name} in datacenter {datacenter.name}InformationalSystemAll
Insufficient capacity in cluster {computeResource.name} to satisfy resource configuration in {datacenter.name}CriticalSystemAll
Reconfigured cluster {computeResource.name} in datacenter {datacenter.name}InformationalSystemAll
Configuration status on cluster {computeResource.name} changed from {[email protected]} to {[email protected]} in {datacenter.name}InformationalSystemAll
Created new custom field definition {name}InformationalSystemAll
Removed field definition {name}InformationalSystemAll
Renamed field definition from {name} to {newName}InformationalSystemAll
Changed custom field {name} on {entity.name} in {datacenter.name} to {value}InformationalSystemAll
Cannot complete customization of VM {vm.name}. See customization log at {logLocation} on the guest OS for details.InformationalSystemAll
An error occurred while setting up Linux identity. See log file '{logLocation}' on guest OS for details.CriticalSystemAll
An error occurred while setting up network properties of the guest OS. See the log file {logLocation} in the guest OS for details.CriticalSystemAll
Started customization of VM {vm.name}. Customization log located at {logLocation} in the guest OS.InformationalSystemAll
Customization of VM {vm.name} succeeded. Customization log located at {logLocation} in the guest OS.InformationalSystemAll
The version of Sysprep {sysprepVersion} provided for customizing VM {vm.name} does not match the version of guest OS {systemVersion}. See the log file {logLocation} in the guest OS for more information.CriticalSystemAll
An error occurred while customizing VM {vm.name}. For details reference the log file {logLocation} in the guest OS.CriticalSystemAll
dvPort group {net.name} in {datacenter.name} was added to switch {dvs.name}.InformationalSystemAll
dvPort group {net.name} in {datacenter.name} was deleted.InformationalSystemAll
 InformationalSystemAll
dvPort group {net.name} in {datacenter.name} was reconfigured.InformationalSystemAll
dvPort group {oldName} in {datacenter.name} was renamed to {newName}InformationalSystemAll
HA admission control disabled on cluster {computeResource.name} in {datacenter.name}InformationalSystemAll
HA admission control enabled on cluster {computeResource.name} in {datacenter.name}InformationalSystemAll
Re-established contact with a primary host in this HA clusterInformationalSystemAll
Unable to contact a primary HA agent in cluster {computeResource.name} in {datacenter.name}CriticalSystemAll
All hosts in the HA cluster {computeResource.name} in {datacenter.name} were isolated from the network. Check the network configuration for proper network redundancy in the management network.CriticalSystemAll
HA disabled on cluster {computeResource.name} in {datacenter.name}InformationalSystemAll
HA enabled on cluster {computeResource.name} in {datacenter.name}InformationalSystemAll
A possible host failure has been detected by HA on {failedHost.name} in cluster {computeResource.name} in {datacenter.name}CriticalSystemAll
Host {isolatedHost.name} has been isolated from cluster {computeResource.name} in {datacenter.name}WarningSystemAll
Created datacenter {datacenter.name} in folder {parent.name}InformationalSystemAll
Renamed datacenter from {oldName} to {newName}InformationalSystemAll
Datastore {datastore.name} increased in capacity from {oldCapacity} bytes to {newCapacity} bytes in {datacenter.name}InformationalSystemAll
Removed unconfigured datastore {datastore.name}InformationalSystemAll
Discovered datastore {datastore.name} on {host.name} in {datacenter.name}InformationalSystemAll
Multiple datastores named {datastore} detected on host {host.name} in {datacenter.name}CriticalSystemAll
<internal>InformationalSystemAll
File or directory {sourceFile} copied from {sourceDatastore.name} to {datastore.name} as {targetFile}InformationalSystemAll
File or directory {targetFile} deleted from {datastore.name}InformationalSystemAll
File or directory {sourceFile} moved from {sourceDatastore.name} to {datastore.name} as {targetFile}InformationalSystemAll
Reconfigured Storage I/O Control on datastore {datastore.name}InformationalSystemAll
Configured datastore principal {datastorePrincipal} on host {host.name} in {datacenter.name}InformationalSystemAll
Removed datastore {datastore.name} from {host.name} in {datacenter.name}InformationalSystemAll
Renamed datastore from {oldName} to {newName} in {datacenter.name}InformationalSystemAll
Renamed datastore from {oldName} to {newName} in {datacenter.name}InformationalSystemAll
Disabled DRS on cluster {computeResource.name} in datacenter {datacenter.name}InformationalSystemAll
Enabled DRS on {computeResource.name} with automation level {behavior} in {datacenter.name}InformationalSystemAll
DRS put {host.name} into standby modeInformationalSystemAll
DRS is putting {host.name} into standby modeInformationalSystemAll
DRS cannot move {host.name} out of standby modeCriticalSystemAll
DRS moved {host.name} out of standby modeInformationalSystemAll
DRS is moving {host.name} out of standby modeInformationalSystemAll
DRS invocation not completedCriticalSystemAll
DRS has recovered from the failureInformationalSystemAll
Unable to apply DRS resource settings on host {host.name} in {datacenter.name}. {reason.msg}. This can significantly reduce the effectiveness of DRS.CriticalSystemAll
Resource configuration specification returns to synchronization from previous failure on host '{host.name}' in {datacenter.name}InformationalSystemAll
{vm.name} on {host.name} in {datacenter.name} is now compliant with DRS VM-Host affinity rulesInformationalSystemAll
{vm.name} on {host.name} in {datacenter.name} is violating a DRS VM-Host affinity ruleInformationalSystemAll
DRS migrated {vm.name} from {sourceHost.name} to {host.name} in cluster {computeResource.name} in {datacenter.name}InformationalSystemAll
DRS powered On {vm.name} on {host.name} in {datacenter.name}InformationalSystemAll
Virtual machine {macAddress} on host {host.name} has a duplicate IP {duplicateIP}InformationalSystemAll
A vNetwork Distributed Switch {dvs.name} was created in {datacenter.name}.InformationalSystemAll
vNetwork Distributed Switch {dvs.name} in {datacenter.name} was deleted.InformationalSystemAll
vNetwork Distributed Switch eventInformationalSystemAll
The vNetwork Distributed Switch {dvs.name} configuration on the host was synchronized with that of the vCenter Server.InformationalSystemAll
The host {hostJoined.name} joined the vNetwork Distributed Switch {dvs.name} in {datacenter.name}.InformationalSystemAll
The host {hostLeft.name} left the vNetwork Distributed Switch {dvs.name} in {datacenter.name}.InformationalSystemAll
The host {hostMember.name} changed status on the vNetwork Distributed Switch {dvs.name} in {datacenter.name}.InformationalSystemAll
The vNetwork Distributed Switch {dvs.name} configuration on the host differed from that of the vCenter Server.WarningSystemAll
vNetwork Distributed Switch {srcDvs.name} was merged into {dstDvs.name} in {datacenter.name}.InformationalSystemAll
dvPort {portKey} was blocked in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}.InformationalSystemAll
The port {portKey} was connected in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}InformationalSystemAll
New ports were created in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}.InformationalSystemAll
Deleted ports in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}.InformationalSystemAll
The dvPort {portKey} was disconnected in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}.InformationalSystemAll
dvPort {portKey} entered passthrough mode in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}.InformationalSystemAll
dvPort {portKey} exited passthrough mode in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}.InformationalSystemAll
dvPort {portKey} was moved into the dvPort group {portgroupName} in {datacenter.name}.InformationalSystemAll
dvPort {portKey} was moved out of the dvPort group {portgroupName} in {datacenter.name}.InformationalSystemAll
The port {portKey} link was down in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}InformationalSystemAll
The port {portKey} link was up in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}InformationalSystemAll
Reconfigured ports in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}.InformationalSystemAll
dvPort {portKey} was unblocked in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}.InformationalSystemAll
The vNetwork Distributed Switch {dvs.name} in {datacenter.name} was reconfigured.InformationalSystemAll
The vNetwork Distributed Switch {oldName} in {datacenter.name} was renamed to {newName}.InformationalSystemAll
An upgrade for the vNetwork Distributed Switch {dvs.name} in datacenter {datacenter.name} is available.InformationalSystemAll
An upgrade for the vNetwork Distributed Switch {dvs.name} in datacenter {datacenter.name} is in progress.InformationalSystemAll
Cannot complete an upgrade for the vNetwork Distributed Switch {dvs.name} in datacenter {datacenter.name}InformationalSystemAll
vNetwork Distributed Switch {dvs.name} in datacenter {datacenter.name} was upgraded.InformationalSystemAll
Host {host.name} in {datacenter.name} has entered maintenance modeInformationalSystemAll
The host {host.name} is in standby modeInformationalSystemAll
Host {host.name} in {datacenter.name} has started to enter maintenance modeInformationalSystemAll
The host {host.name} is entering standby modeInformationalSystemAll
{message}CriticalSystemAll
Host {host.name} in {datacenter.name} has exited maintenance modeInformationalSystemAll
The host {host.name} could not exit standby modeCriticalSystemAll
The host {host.name} is no longer in standby modeInformationalSystemAll
The host {host.name} is exiting standby modeInformationalSystemAll
Sufficient resources are available to satisfy HA failover level in cluster {computeResource.name} in {datacenter.name}InformationalSystemAll
General event: {message}InformationalSystemAll
Error detected on {host.name} in {datacenter.name}: {message}CriticalSystemAll
Issue detected on {host.name} in {datacenter.name}: {message}InformationalSystemAll
Issue detected on {host.name} in {datacenter.name}: {message}WarningSystemAll
User logged event: {message}InformationalSystemAll
Error detected for {vm.name} on {host.name} in {datacenter.name}: {message}CriticalSystemAll
Issue detected for {vm.name} on {host.name} in {datacenter.name}: {message}InformationalSystemAll
Issue detected for {vm.name} on {host.name} in {datacenter.name}: {message}WarningSystemAll
The vNetwork Distributed Switch corresponding to the proxy switches {switchUuid} on the host {host.name} does not exist in vCenter Server or does not contain this host.InformationalSystemAll
A ghost proxy switch {switchUuid} on the host {host.name} was resolved.InformationalSystemAll
The message changed: {message}InformationalSystemAll
{componentName} status changed from {oldStatus} to {newStatus}InformationalSystemAll
Cannot add host {hostname} to datacenter {datacenter.name}CriticalSystemAll
Added host {host.name} to datacenter {datacenter.name}InformationalSystemAll
Administrator access to the host {host.name} is disabledWarningSystemAll
Administrator access to the host {host.name} has been restoredWarningSystemAll
Cannot connect {host.name} in {datacenter.name}: cannot configure management accountCriticalSystemAll
Cannot connect {host.name} in {datacenter.name}: already managed by {serverName}CriticalSystemAll
Cannot connect host {host.name} in {datacenter.name} : server agent is not respondingCriticalSystemAll
Cannot connect {host.name} in {datacenter.name}: incorrect user name or passwordCriticalSystemAll
Cannot connect {host.name} in {datacenter.name}: incompatible versionCriticalSystemAll
Cannot connect host {host.name} in {datacenter.name}. Did not install or upgrade vCenter agent service.CriticalSystemAll
Cannot connect {host.name} in {datacenter.name}: error connecting to hostCriticalSystemAll
Cannot connect {host.name} in {datacenter.name}: network errorCriticalSystemAll
Cannot connect host {host.name} in {datacenter.name}: account has insufficient privilegesCriticalSystemAll
Cannot connect host {host.name} in {datacenter.name}CriticalSystemAll
Cannot connect {host.name} in {datacenter.name}: not enough CPU licensesCriticalSystemAll
Cannot connect {host.name} in {datacenter.name}: incorrect host nameCriticalSystemAll
Cannot connect {host.name} in {datacenter.name}: time-out waiting for host responseCriticalSystemAll
Host {host.name} checked for compliance.InformationalSystemAll
Host {host.name} is in compliance with the attached profileInformationalSystemAll
Host configuration changes applied.InformationalSystemAll
Connected to {host.name} in {datacenter.name}InformationalSystemAll
Host {host.name} in {datacenter.name} is not respondingCriticalSystemAll
dvPort connected to host {host.name} in {datacenter.name} changed statusInformationalSystemAll
HA agent disabled on {host.name} in cluster {computeResource.name} in {datacenter.name}InformationalSystemAll
HA is being disabled on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}InformationalSystemAll
HA agent enabled on {host.name} in cluster {computeResource.name} in {datacenter.name}InformationalSystemAll
Enabling HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name}WarningSystemAll
HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error {message}: {[email protected]}CriticalSystemAll
HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} is configured correctlyInformationalSystemAll
Disconnected from {host.name} in {datacenter.name}. Reason: {[email protected]}InformationalSystemAll
Cannot restore some administrator permissions to the host {host.name}CriticalSystemAll
Host {host.name} has the following extra networks not used by other hosts for HA communication:{ips}. Consider using HA advanced option das.allowNetwork to control network usageCriticalSystemAll
Cannot complete command 'hostname -s' on host {host.name} or returned incorrect name formatCriticalSystemAll
Maximum ({capacity}) number of hosts allowed for this edition of vCenter Server has been reachedCriticalSystemAll
The virtual machine inventory file on host {host.name} is damaged or unreadable.InformationalSystemAll
IP address of the host {host.name} changed from {oldIP} to {newIP}InformationalSystemAll
Configuration of host IP address is inconsistent on host {host.name}: address resolved to {ipAddress} and {ipAddress2}CriticalSystemAll
Cannot resolve IP address to short name on host {host.name}CriticalSystemAll
Host {host.name} could not reach isolation address: {isolationIp}CriticalSystemAll
A host license for {host.name} has expiredCriticalSystemAll
Host {host.name} does not have the following networks used by other hosts for HA communication:{ips}. Consider using HA advanced option das.allowNetwork to control network usageCriticalSystemAll
Host monitoring state in {computeResource.name} in {datacenter.name} changed to {[email protected]}InformationalSystemAll
Host {host.name} currently has no available networks for HA Communication. The following networks are currently used by HA: {ips}CriticalSystemAll
Host {host.name} has no port groups enabled for HA communication.CriticalSystemAll
Host {host.name} currently has no management network redundancyCriticalSystemAll
Host {host.name} is not in compliance with the attached profileCriticalSystemAll
Host {host.name} is not a cluster member in {datacenter.name}CriticalSystemAll
Insufficient capacity in host {computeResource.name} to satisfy resource configuration in {datacenter.name}CriticalSystemAll
Primary agent {primaryAgent} was not specified as a short name to host {host.name}CriticalSystemAll
Profile is applied on the host {host.name}InformationalSystemAll
Cannot reconnect to {host.name} in {datacenter.name}CriticalSystemAll
Removed host {host.name} in {datacenter.name}InformationalSystemAll
Host names {shortName} and {shortName2} both resolved to the same IP address. Check the host's network configuration and DNS entriesCriticalSystemAll
Cannot resolve short name {shortName} to IP address on host {host.name}CriticalSystemAll
Shut down of {host.name} in {datacenter.name}: {reason}InformationalSystemAll
Configuration status on host {computeResource.name} changed from {[email protected]} to {[email protected]} in {datacenter.name}InformationalSystemAll
Cannot synchronize host {host.name}. {reason.msg}CriticalSystemAll
Cannot install or upgrade vCenter agent service on {host.name} in {datacenter.name}CriticalSystemAll
The userworld swap is not enabled on the host {host.name}WarningSystemAll
Host {host.name} vNIC {vnic.vnic} was reconfigured to use dvPort {vnic.port.portKey} with port level configuration, which might be different from the dvPort group.InformationalSystemAll
WWNs are changed for {host.name}WarningSystemAll
The WWN ({wwn}) of {host.name} conflicts with the currently registered WWNCriticalSystemAll
Host {host.name} did not provide the information needed to acquire the correct set of licensesCriticalSystemAll
{message}InformationalSystemAll
Insufficient resources to satisfy HA failover level on cluster {computeResource.name} in {datacenter.name}CriticalSystemAll
The license edition '{feature}' is invalidCriticalSystemAll
License {feature.featureName} has expiredCriticalSystemAll
License inventory is not compliant. Licenses are overusedCriticalSystemAll
Unable to acquire licenses due to a restriction in the option file on the license server.CriticalSystemAll
License server {licenseServer} is availableInformationalSystemAll
License server {licenseServer} is unavailableCriticalSystemAll
Created local datastore {datastore.name} on {host.name} in {datacenter.name}InformationalSystemAll
The Local Tech Support Mode for the host {host.name} has been enabledInformationalSystemAll
Datastore {datastore} which is configured to back the locker does not existWarningSystemAll
Locker was reconfigured from {oldDatastore} to {newDatastore} datastoreInformationalSystemAll
Unable to migrate {vm.name} from {host.name} in {datacenter.name}: {fault.msg}CriticalSystemAll
Unable to migrate {vm.name} from {host.name} to {dstHost.name} in {datacenter.name}: {fault.msg}CriticalSystemAll
Migration of {vm.name} from {host.name} to {dstHost.name} in {datacenter.name}: {fault.msg}WarningSystemAll
Cannot migrate {vm.name} from {host.name} to {dstHost.name} and resource pool {dstPool.name} in {datacenter.name}: {fault.msg}CriticalSystemAll
Migration of {vm.name} from {host.name} to {dstHost.name} and resource pool {dstPool.name} in {datacenter.name}: {fault.msg}WarningSystemAll
Migration of {vm.name} from {host.name} in {datacenter.name}: {fault.msg}WarningSystemAll
Created NAS datastore {datastore.name} on {host.name} in {datacenter.name}InformationalSystemAll
Cannot login user {userName}@{ipAddress}: no permissionCriticalSystemAll
No datastores have been configured on the host {host.name}InformationalSystemAll
A required license {feature.featureName} is not reservedCriticalSystemAll
Unable to automatically migrate {vm.name} from {host.name}InformationalSystemAll
Non-VI workload detected on datastore {datastore.name}CriticalSystemAll
Not enough resources to failover {vm.name} in {computeResource.name} in {datacenter.name}InformationalSystemAll
The vNetwork Distributed Switch configuration on some hosts differed from that of the vCenter Server.WarningSystemAll
Permission created for {principal} on {entity.name}, role is {role.name}, propagation is {[email protected]}InformationalSystemAll
Permission rule removed for {principal} on {entity.name}InformationalSystemAll
Permission changed for {principal} on {entity.name}, role is {role.name}, propagation is {[email protected]}InformationalSystemAll
Profile {profile.name} attached.InformationalSystemAll
Profile {profile.name} was changed.InformationalSystemAll
Profile is created.InformationalSystemAll
Profile {profile.name} detached.InformationalSystemAll
Profile {profile.name} reference host changed.InformationalSystemAll
Profile was removed.InformationalSystemAll
Remote Tech Support Mode (SSH) for the host {host.name} has been enabledInformationalSystemAll
Created resource pool {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name}InformationalSystemAll
Removed resource pool {resourcePool.name} on {computeResource.name} in {datacenter.name}InformationalSystemAll
Moved resource pool {resourcePool.name} from {oldParent.name} to {newParent.name} on {computeResource.name} in {datacenter.name}InformationalSystemAll
Updated configuration for {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name}InformationalSystemAll
Resource usage exceeds configuration for resource pool {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name}CriticalSystemAll
New role {role.name} createdInformationalSystemAll
Role {role.name} removedInformationalSystemAll
Modifed role {role.name}InformationalSystemAll
Task {scheduledTask.name} on {entity.name} in {datacenter.name} completed successfullyInformationalSystemAll
Created task {scheduledTask.name} on {entity.name} in {datacenter.name}InformationalSystemAll
Task {scheduledTask.name} on {entity.name} in {datacenter.name} sent email to {to}InformationalSystemAll
Task {scheduledTask.name} on {entity.name} in {datacenter.name} cannot send email to {to}: {reason.msg}CriticalSystemAll
Task {scheduledTask.name} on {entity.name} in {datacenter.name} cannot be completed: {reason.msg}CriticalSystemAll
Reconfigured task {scheduledTask.name} on {entity.name} in {datacenter.name}InformationalSystemAll
Removed task {scheduledTask.name} on {entity.name} in {datacenter.name}InformationalSystemAll
Running task {scheduledTask.name} on {entity.name} in {datacenter.name}InformationalSystemAll
A vCenter Server license has expiredCriticalSystemAll
vCenter startedInformationalSystemAll
A session for user '{terminatedUsername}' has stoppedInformationalSystemAll
Task: {info.descriptionId}InformationalSystemAll
Task: {info.descriptionId} time-outInformationalSystemAll
Upgrading template {legacyTemplate}InformationalSystemAll
Cannot upgrade template {legacyTemplate} due to: {reason.msg}InformationalSystemAll
Template {legacyTemplate} upgrade completedInformationalSystemAll
The operation performed on {host.name} in {datacenter.name} timed outWarningSystemAll
There are {unlicensed} unlicensed virtual machines on host {host} - there are only {available} licenses availableInformationalSystemAll
{unlicensed} unlicensed virtual machines found on host {host}InformationalSystemAll
The agent on host {host.name} is updated and will soon restartInformationalSystemAll
User {userLogin} was added to group {group}InformationalSystemAll
User {userName}@{ipAddress} logged inInformationalSystemAll
User {userName} logged outInformationalSystemAll
Password was changed for account {userLogin} on host {host.name}InformationalSystemAll
User {userLogin} removed from group {group}InformationalSystemAll
{message}InformationalSystemAll
Created VMFS datastore {datastore.name} on {host.name} in {datacenter.name}InformationalSystemAll
Expanded VMFS datastore {datastore.name} on {host.name} in {datacenter.name}InformationalSystemAll
Extended VMFS datastore {datastore.name} on {host.name} in {datacenter.name}InformationalSystemAll
A vMotion license for {host.name} has expiredCriticalSystemAll
Cannot uninstall vCenter agent from {host.name} in {datacenter.name}. {[email protected]}CriticalSystemAll
vCenter agent has been uninstalled from {host.name} in {datacenter.name}InformationalSystemAll
Cannot upgrade vCenter agent on {host.name} in {datacenter.name}. {[email protected]}CriticalSystemAll
vCenter agent has been upgraded on {host.name} in {datacenter.name}InformationalSystemAll
VIM account password was changed on host {host.name}InformationalSystemAll
Remote console to {vm.name} on {host.name} in {datacenter.name} has been openedInformationalSystemAll
A ticket for {vm.name} of type {ticketType} on {host.name} in {datacenter.name} has been acquiredInformationalSystemAll
Invalid name for {vm.name} on {host.name} in {datacenter.name}. Renamed from {oldName} to {newName}InformationalSystemAll
Cloning {vm.name} on host {host.name} in {datacenter.name} to {destName} on host {destHost.name}InformationalSystemAll
Cloning {vm.name} on host {host.name} in {datacenter.name} to {destName} on host {destHost.name}InformationalSystemAll
Creating {vm.name} on host {host.name} in {datacenter.name}InformationalSystemAll
Deploying {vm.name} on host {host.name} in {datacenter.name} from template {srcTemplate.name}InformationalSystemAll
Migrating {vm.name} from {host.name} to {destHost.name} in {datacenter.name}InformationalSystemAll
Relocating {vm.name} from {host.name} to {destHost.name} in {datacenter.name}InformationalSystemAll
Relocating {vm.name} in {datacenter.name} from {host.name} to {destHost.name}InformationalSystemAll
Cannot clone {vm.name}: {reason.msg}CriticalSystemAll
Clone of {sourceVm.name} completedInformationalSystemAll
Configuration file for {vm.name} on {host.name} in {datacenter.name} cannot be foundInformationalSystemAll
Virtual machine {vm.name} is connectedInformationalSystemAll
Created virtual machine {vm.name} on {host.name} in {datacenter.name}InformationalSystemAll
dvPort connected to VM {vm.name} on {host.name} in {datacenter.name} changed statusInformationalSystemAll
{vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} reset by HA. Reason: {[email protected]}InformationalSystemAll
{vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} reset by HA. Reason: {[email protected]}. A screenshot is saved at {screenshotFilePath}.InformationalSystemAll
Cannot reset {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}WarningSystemAll
Unable to update HA agents given the state of {vm.name}CriticalSystemAll
HA agents have been updated with the current state of the virtual machineInformationalSystemAll
Disconnecting all hosts as the date of virtual machine {vm.name} has been rolled backCriticalSystemAll
Cannot deploy template: {reason.msg}CriticalSystemAll
Template {srcTemplate.name} deployed on host {host.name}InformationalSystemAll
{vm.name} on host {host.name} in {datacenter.name} is disconnectedInformationalSystemAll
Discovered {vm.name} on {host.name} in {datacenter.name}InformationalSystemAll
Cannot create virtual disk {disk}CriticalSystemAll
Migrating {vm.name} off host {host.name} in {datacenter.name}InformationalSystemAll
End a recording session on {vm.name}InformationalSystemAll
End a replay session on {vm.name}InformationalSystemAll
Cannot migrate {vm.name} from {host.name} to {destHost.name} in {datacenter.name}CriticalSystemAll
Cannot complete relayout {vm.name} on {host.name} in {datacenter.name}: {reason.msg}CriticalSystemAll
Cannot complete relayout for virtual machine {vm.name} which has disks on a VMFS2 volume.CriticalSystemAll
vCenter cannot start the Secondary VM {vm.name}. Reason: {[email protected]}CriticalSystemAll
Cannot power Off {vm.name} on {host.name} in {datacenter.name}: {reason.msg}CriticalSystemAll
Cannot power On {vm.name} on {host.name} in {datacenter.name}. {reason.msg}CriticalSystemAll
Cannot reboot the guest OS for {vm.name} on {host.name} in {datacenter.name}. {reason.msg}CriticalSystemAll
Cannot suspend {vm.name} on {host.name} in {datacenter.name}: {reason.msg}CriticalSystemAll
{vm.name} cannot shut down the guest OS on {host.name} in {datacenter.name}: {reason.msg}CriticalSystemAll
{vm.name} cannot standby the guest OS on {host.name} in {datacenter.name}: {reason.msg}CriticalSystemAll
Cannot suspend {vm.name} on {host.name} in {datacenter.name}: {reason.msg}CriticalSystemAll
vCenter cannot update the Secondary VM {vm.name} configurationCriticalSystemAll
Failover unsuccessful for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}. Reason: {reason.msg}WarningSystemAll
Fault Tolerance state on {vm.name} changed from {[email protected]} to {[email protected]}InformationalSystemAll
Fault Tolerance protection has been turned off for {vm.name}InformationalSystemAll
The Fault Tolerance VM ({vm.name}) has been terminated. {[email protected]}InformationalSystemAll
Guest OS reboot for {vm.name} on {host.name} in {datacenter.name}InformationalSystemAll
Guest OS shut down for {vm.name} on {host.name} in {datacenter.name}InformationalSystemAll
Guest OS standby for {vm.name} on {host.name} in {datacenter.name}InformationalSystemAll
VM monitoring state in {computeResource.name} in {datacenter.name} changed to {[email protected]}InformationalSystemAll
Assign a new instance UUID ({instanceUuid}) to {vm.name}InformationalSystemAll
The instance UUID of {vm.name} has been changed from ({oldInstanceUuid}) to ({newInstanceUuid})InformationalSystemAll
The instance UUID ({instanceUuid}) of {vm.name} conflicts with the instance UUID assigned to {conflictedVm.name}CriticalSystemAll
New MAC address ({mac}) assigned to adapter {adapter} for {vm.name}InformationalSystemAll
Changed MAC address from {oldMac} to {newMac} for adapter {adapter} for {vm.name}WarningSystemAll
The MAC address ({mac}) of {vm.name} conflicts with MAC assigned to {conflictedVm.name}CriticalSystemAll
Reached maximum Secondary VM (with FT turned On) restart count for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}.WarningSystemAll
Reached maximum VM restart count for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}.WarningSystemAll
Error message on {vm.name} on {host.name} in {datacenter.name}: {message}CriticalSystemAll
Message on {vm.name} on {host.name} in {datacenter.name}: {message}InformationalSystemAll
Warning message on {vm.name} on {host.name} in {datacenter.name}: {message}WarningSystemAll
Migration of virtual machine {vm.name} from {sourceHost.name} to {host.name} completedInformationalSystemAll
No compatible host for the Secondary VM {vm.name}CriticalSystemAll
Not all networks for {vm.name} are accessible by {destHost.name}WarningSystemAll
{vm.name} does not exist on {host.name} in {datacenter.name}WarningSystemAll
{vm.name} was powered Off on the isolated host {isolatedHost.name} in cluster {computeResource.name} in {datacenter.name}InformationalSystemAll
{vm.name} on {host.name} in {datacenter.name} is powered offInformationalSystemAll
{vm.name} on {host.name} in {datacenter.name} is powered onInformationalSystemAll
Virtual machine {vm.name} powered On with vNICs connected to dvPorts that have a port level configuration, which might be different from the dvPort group configuration.InformationalSystemAll
VM ({vm.name}) failed over to {host.name}. {[email protected]}CriticalSystemAll
Reconfigured {vm.name} on {host.name} in {datacenter.name}InformationalSystemAll
Registered {vm.name} on {host.name} in {datacenter.name}InformationalSystemAll
Relayout of {vm.name} on {host.name} in {datacenter.name} completedInformationalSystemAll
{vm.name} on {host.name} in {datacenter.name} is in the correct format and relayout is not necessaryInformationalSystemAll
{vm.name} on {host.name} reloaded from new configuration {configPath}.InformationalSystemAll
{vm.name} on {host.name} could not be reloaded from {configPath}.CriticalSystemAll
Cannot relocate virtual machine '{vm.name}' in {datacenter.name}CriticalSystemAll
Completed the relocation of the virtual machineInformationalSystemAll
Remote console connected to {vm.name} on host {host.name}InformationalSystemAll
Remote console disconnected from {vm.name} on host {host.name}InformationalSystemAll
Removed {vm.name} on {host.name} from {datacenter.name}InformationalSystemAll
Renamed {vm.name} from {oldName} to {newName} in {datacenter.name}WarningSystemAll
{vm.name} on {host.name} in {datacenter.name} is resetInformationalSystemAll
Moved {vm.name} from resource pool {oldParent.name} to {newParent.name} in {datacenter.name}InformationalSystemAll
Changed resource allocation for {vm.name}InformationalSystemAll
Virtual machine {vm.name} was restarted on {host.name} since {sourceHost.name} failedInformationalSystemAll
{vm.name} on {host.name} in {datacenter.name} is resumedInformationalSystemAll
A Secondary VM has been added for {vm.name}InformationalSystemAll
vCenter disabled Fault Tolerance on VM '{vm.name}' because the Secondary VM could not be powered On.CriticalSystemAll
Disabled Secondary VM for {vm.name}InformationalSystemAll
Enabled Secondary VM for {vm.name}InformationalSystemAll
Started Secondary VM for {vm.name}InformationalSystemAll
{vm.name} was shut down on the isolated host {isolatedHost.name} in cluster {computeResource.name} in {datacenter.name}: {[email protected]}InformationalSystemAll
Start a recording session on {vm.name}InformationalSystemAll
Start a replay session on {vm.name}InformationalSystemAll
{vm.name} on host {host.name} in {datacenter.name} is startingInformationalSystemAll
Starting Secondary VM for {vm.name}InformationalSystemAll
The static MAC address ({mac}) of {vm.name} conflicts with MAC assigned to {conflictedVm.name}CriticalSystemAll
{vm.name} on {host.name} in {datacenter.name} is stoppingInformationalSystemAll
{vm.name} on {host.name} in {datacenter.name} is suspendedInformationalSystemAll
{vm.name} on {host.name} in {datacenter.name} is being suspendedInformationalSystemAll
Starting the Secondary VM {vm.name} timed out within {timeout} msCriticalSystemAll
Unsupported guest OS {guestId} for {vm.name} on {host.name} in {datacenter.name}WarningSystemAll
Virtual hardware upgraded to version {version}InformationalSystemAll
Cannot upgrade virtual hardwareCriticalSystemAll
Upgrading virtual hardware on {vm.name} in {datacenter.name} to version {version}InformationalSystemAll
Assigned new BIOS UUID ({uuid}) to {vm.name} on {host.name} in {datacenter.name}InformationalSystemAll
Changed BIOS UUID from {oldUuid} to {newUuid} for {vm.name} on {host.name} in {datacenter.name}WarningSystemAll
BIOS ID ({uuid}) of {vm.name} conflicts with that of {conflictedVm.name}CriticalSystemAll
New WWNs assigned to {vm.name}InformationalSystemAll
WWNs are changed for {vm.name}WarningSystemAll
The WWN ({wwn}) of {vm.name} conflicts with the currently registered WWNCriticalSystemAll
{message}WarningSystemAll
Booting from iSCSI failed with an error. See the VMware Knowledge Base for information on configuring iBFT networking.WarningSystemAll
com.vmware.license.AddLicenseEventLicense {licenseKey} added to VirtualCenterInformationalSystem
com.vmware.license.AssignLicenseEventLicense {licenseKey} assigned to asset {entityName} with id {entityId}InformationalSystem
com.vmware.license.DLFDownloadFailedEventFailed to download license information from the host {hostname} due to {errorReason.@enum.com.vmware.license.DLFDownloadFailedEvent.DLFDownloadFailedReason}WarningSystem
com.vmware.license.LicenseAssignFailedEventLicense assignment on the host fails. Reasons: {[email protected]}.InformationalSystem
com.vmware.license.LicenseExpiryEventYour host license will expire in {remainingDays} days. The host will be disconnected from VC when its license expires.WarningSystem
com.vmware.license.LicenseUserThresholdExceededEventCurrent license usage ({currentUsage} {costUnitText}) for {edition} exceeded the user-defined threshold ({threshold} {costUnitText})WarningSystem
com.vmware.license.RemoveLicenseEventLicense {licenseKey} removed from VirtualCenterInformationalSystem
com.vmware.license.UnassignLicenseEventLicense unassigned from asset {entityName} with id {entityId}InformationalSystem
com.vmware.vc.HA.ClusterFailoverActionCompletedEventHA completed a failover action in cluster {computeResource.name} in datacenter {datacenter.name}InformationalSystem
com.vmware.vc.HA.ClusterFailoverActionInitiatedEventHA initiated a failover action in cluster {computeResource.name} in datacenter {datacenter.name}WarningSystem
com.vmware.vc.HA.DasAgentRunningEventHA Agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is runningInformationalSystem
com.vmware.vc.HA.DasFailoverHostFailedEventHA failover host {host.name} in cluster {computeResource.name} in {datacenter.name} has failedCriticalSystem
com.vmware.vc.HA.DasHostCompleteDatastoreFailureEventAll shared datastores failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name}CriticalSystem
com.vmware.vc.HA.DasHostCompleteNetworkFailureEventAll VM networks failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name}CriticalSystem
com.vmware.vc.HA.DasHostFailedEventA possible host failure has been detected by HA on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}CriticalSystem
com.vmware.vc.HA.DasHostMonitoringDisabledEventNo virtual machine failover will occur until Host Monitoring is enabled in cluster {computeResource.name} in {datacenter.name}WarningSystem
com.vmware.vc.HA.DasTotalClusterFailureEventHA recovered from a total cluster failure in cluster {computeResource.name} in datacenter {datacenter.name}WarningSystem
com.vmware.vc.HA.HostDasAgentHealthyEventHA Agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is healthyInformationalSystem
com.vmware.vc.HA.HostDasErrorEventHA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error: {[email protected]}CriticalSystem
com.vmware.vc.VCHealthStateChangedEventvCenter Service overall health changed from '{oldState}' to '{newState}'InformationalSystem
com.vmware.vc.cim.CIMGroupHealthStateChangedHealth of [data.group] changed from [data.oldState] to [data.newState].InformationalSystem
com.vmware.vc.datastore.UpdateVmFilesFailedEventFailed to update VM files on datastore {ds.name} using host {hostName}CriticalSystem
com.vmware.vc.datastore.UpdatedVmFilesEventUpdated VM files on datastore {ds.name} using host {hostName}InformationalSystem
com.vmware.vc.datastore.UpdatingVmFilesEventUpdating VM files on datastore {ds.name} using host {hostName}InformationalSystem
com.vmware.vc.ft.VmAffectedByDasDisabledEventVMware HA has been disabled in cluster {computeResource.name} of datacenter {datacenter.name}. HA will not restart VM {vm.name} or its Secondary VM after a failure.WarningSystem
com.vmware.vc.npt.VmAdapterEnteredPassthroughEventNetwork passthrough is active on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name}InformationalSystem
com.vmware.vc.npt.VmAdapterExitedPassthroughEventNetwork passthrough is inactive on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name}InformationalSystem
com.vmware.vc.vcp.FtDisabledVmTreatAsNonFtEventHA VM Component Protection protects virtual machine {vm.name} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because the FT state is disabledInformationalSystem
com.vmware.vc.vcp.FtFailoverEventFT Primary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is going to fail over to Secondary VM due to component failureInformationalSystem
com.vmware.vc.vcp.FtFailoverFailedEventFT virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to failover to secondaryCriticalSystem
com.vmware.vc.vcp.FtSecondaryRestartEventHA VM Component Protection is restarting FT secondary virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to component failureInformationalSystem
com.vmware.vc.vcp.FtSecondaryRestartFailedEventFT Secondary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restartCriticalSystem
com.vmware.vc.vcp.NeedSecondaryFtVmTreatAsNonFtEventHA VM Component Protection protects virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because it has been in the needSecondary state too longInformationalSystem
com.vmware.vc.vcp.TestEndEventVM Component Protection test ends on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}InformationalSystem
com.vmware.vc.vcp.TestStartEventVM Component Protection test starts on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}InformationalSystem
com.vmware.vc.vcp.VcpNoActionEventHA VM Component Protection did not take action on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to the feature configuration settingInformationalSystem
com.vmware.vc.vcp.VmDatastoreFailedEventVirtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {datastore}CriticalSystem
com.vmware.vc.vcp.VmNetworkFailedEventVirtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {network}CriticalSystem
com.vmware.vc.vcp.VmPowerOffHangEventHA VM Component Protection could not power off virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} successfully after trying {numTimes} times and will keep tryingCriticalSystem
com.vmware.vc.vcp.VmRestartEventHA VM Component Protection is restarting virtual machine {vm.name} due to component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name}InformationalSystem
com.vmware.vc.vcp.VmRestartFailedEventVirtual machine {vm.name} affected by component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restartCriticalSystem
com.vmware.vc.vcp.VmWaitForCandidateHostEventHA VM Component Protection could not find a destination host for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} after waiting {numSecWait} seconds and will keep tryingCriticalSystem
com.vmware.vc.vmam.AppMonitoringNotSupportedApplication monitoring is not supported on {host.name} in cluster {computeResource.name} in {datacenter.name}WarningSystem
com.vmware.vc.vmam.VmAppHealthMonitoringStateChangedEventApplication heartbeat status changed to {status} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}WarningSystem
com.vmware.vc.vmam.VmDasAppHeartbeatFailedEventApplication heartbeat failed for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}WarningSystem
esx.clear.net.connectivity.restoredNetwork connectivity restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up.InformationalSystem
esx.clear.net.dvport.connectivity.restoredNetwork connectivity restored on DVPorts: {1}. Physical NIC {2} is up.InformationalSystem
esx.clear.net.dvport.redundancy.restoredUplink redundancy restored on DVPorts: {1}. Physical NIC {2} is up.InformationalSystem
esx.clear.net.redundancy.restoredUplink redundancy restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up.InformationalSystem
esx.clear.net.vmnic.linkstate.upPhysical NIC {1} linkstate is up.InformationalSystem
esx.clear.storage.connectivity.restoredConnectivity to storage device {1} (Datastores: {2}) restored. Path {3} is active again.InformationalSystem
esx.clear.storage.redundancy.restoredPath redundancy to storage device {1} (Datastores: {2}) restored. Path {3} is active again.InformationalSystem
esx.problem.apei.bert.memory.error.correctedA corrected memory error occurred in last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10}CriticalSystem
esx.problem.apei.bert.memory.error.fatalA fatal memory error occurred in the last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10}CriticalSystem
esx.problem.apei.bert.memory.error.recoverableA recoverable memory error occurred in last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10}CriticalSystem
esx.problem.apei.bert.pcie.error.correctedA corrected PCIe error occurred in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}.CriticalSystem
esx.problem.apei.bert.pcie.error.fatalPlatform encounterd a fatal PCIe error in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}.CriticalSystem
esx.problem.apei.bert.pcie.error.recoverableA recoverable PCIe error occurred in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}.CriticalSystem
esx.problem.iorm.nonviworkloadAn external I/O activity is detected on datastore {1}, this is an unsupported configuration. Consult the Resource Management Guide or follow the Ask VMware link for more information.InformationalSystem
esx.problem.net.connectivity.lostLost network connectivity on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.CriticalSystem
esx.problem.net.dvport.connectivity.lostLost network connectivity on DVPorts: {1}. Physical NIC {2} is down.CriticalSystem
esx.problem.net.dvport.redundancy.degradedUplink redundancy degraded on DVPorts: {1}. Physical NIC {2} is down.WarningSystem
esx.problem.net.dvport.redundancy.lostLost uplink redundancy on DVPorts: {1}. Physical NIC {2} is down.WarningSystem
esx.problem.net.e1000.tso6.notsupportedGuest-initiated IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually disable TSO inside the guest operating system in virtual machine {1}, or use a different virtual adapter.CriticalSystem
esx.problem.net.migrate.bindtovmkThe ESX advanced configuration option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Update the configuration option with a valid vmknic. Alternatively, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank.WarningSystem
esx.problem.net.proxyswitch.port.unavailableVirtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch.WarningSystem
esx.problem.net.redundancy.degradedUplink redundancy degraded on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.WarningSystem
esx.problem.net.redundancy.lostLost uplink redundancy on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.WarningSystem
esx.problem.net.uplink.mtu.failedVMkernel failed to set the MTU value {1} on the uplink {2}.WarningSystem
esx.problem.net.vmknic.ip.duplicateA duplicate IP address was detected for {1} on the interface {2}. The current owner is {3}.WarningSystem
esx.problem.net.vmnic.linkstate.downPhysical NIC {1} linkstate is down.InformationalSystem
esx.problem.net.vmnic.watchdog.resetUplink {1} has recovered from a transient failure due to watchdog timeoutInformationalSystem
esx.problem.scsi.device.limitreachedThe maximum number of supported devices of {1} has been reached. A device from plugin {2} could not be created.CriticalSystem
esx.problem.scsi.device.thinprov.atquotaSpace utilization on thin-provisioned device {1} exceeded configured threshold. Affected datastores (if any): {2}.WarningSystem
esx.problem.scsi.scsipath.limitreachedThe maximum number of supported paths of {1} has been reached. Path {2} could not be added.CriticalSystem
esx.problem.storage.connectivity.deviceporFrequent PowerOn Reset Unit Attentions are occurring on device {1}. This might indicate a storage problem. Affected datastores: {2}WarningSystem
esx.problem.storage.connectivity.lostLost connectivity to storage device {1}. Path {2} is down. Affected datastores: {3}.CriticalSystem
esx.problem.storage.connectivity.pathporFrequent PowerOn Reset Unit Attentions are occurring on path {1}. This might indicate a storage problem. Affected device: {2}. Affected datastores: {3}WarningSystem
esx.problem.storage.connectivity.pathstatechangesFrequent path state changes are occurring for path {1}. This might indicate a storage problem. Affected device: {2}. Affected datastores: {3}WarningSystem
esx.problem.storage.redundancy.degradedPath redundancy to storage device {1} degraded. Path {2} is down. Affected datastores: {3}.WarningSystem
esx.problem.storage.redundancy.lostLost path redundancy to storage device {1}. Path {2} is down. Affected datastores: {3}.WarningSystem
esx.problem.vmfs.heartbeat.recoveredSuccessfully restored access to volume {1} ({2}) following connectivity issues.InformationalSystem
esx.problem.vmfs.heartbeat.timedoutLost access to volume {1} ({2}) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.InformationalSystem
esx.problem.vmfs.heartbeat.unrecoverableLost connectivity to volume {1} ({2}) and subsequent recovery attempts have failed.CriticalSystem
esx.problem.vmfs.journal.createfailedNo space for journal on volume {1} ({2}). Opening volume in read-only metadata mode with limited write support.CriticalSystem
esx.problem.vmfs.lock.corruptondiskAt least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume might be damaged too.CriticalSystem
esx.problem.vmfs.nfs.mount.connect.failedFailed to mount to the server {1} mount point {2}. {3}CriticalSystem
esx.problem.vmfs.nfs.mount.limit.exceededFailed to mount to the server {1} mount point {2}. {3}CriticalSystem
esx.problem.vmfs.nfs.server.disconnectLost connection to server {1} mount point {2} mounted as {3} ({4}).CriticalSystem
esx.problem.vmfs.nfs.server.restoredRestored connection to server {1} mount point {2} mounted as {3} ({4}).InformationalSystem
esx.problem.vmfs.resource.corruptondiskAt least one corrupt resource metadata region was detected on volume {1} ({2}). Other regions of the volume might be damaged too.CriticalSystem
esx.problem.vmfs.volume.lockedVolume on device {1} locked, possibly because remote host {2} encountered an error during a volume operation and could not recover.CriticalSystem
vim.event.LicenseDowngradedEventLicense downgrade: {licenseKey} removes the following features: {lostFeatures}WarningSystem
vprob.net.connectivity.lostLost network connectivity on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.CriticalSystem
vprob.net.e1000.tso6.notsupportedGuest-initiated IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually disable TSO inside the guest operating system in virtual machine {1}, or use a different virtual adapter.CriticalSystem
vprob.net.migrate.bindtovmkThe ESX advanced config option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Please update the config option with a valid vmknic or, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank.WarningSystem
vprob.net.proxyswitch.port.unavailableVirtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch.WarningSystem
vprob.net.redundancy.degradedUplink redundancy degraded on virtual switch {1}. Physical NIC {2} is down. {3} uplinks still up. Affected portgroups:{4}.WarningSystem
vprob.net.redundancy.lostLost uplink redundancy on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}.WarningSystem
vprob.scsi.device.thinprov.atquotaSpace utilization on thin-provisioned device {1} exceeded configured threshold.WarningSystem
vprob.storage.connectivity.lostLost connectivity to storage device {1}. Path {2} is down. Affected datastores: {3}.CriticalSystem
vprob.storage.redundancy.degradedPath redundancy to storage device {1} degraded. Path {2} is down. {3} remaining active paths. Affected datastores: {4}.WarningSystem
vprob.storage.redundancy.lostLost path redundancy to storage device {1}. Path {2} is down. Affected datastores: {3}.WarningSystem
vprob.vmfs.heartbeat.recoveredSuccessfully restored access to volume {1} ({2}) following connectivity issues.InformationalSystem
vprob.vmfs.heartbeat.timedoutLost access to volume {1} ({2}) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.InformationalSystem
vprob.vmfs.heartbeat.unrecoverableLost connectivity to volume {1} ({2}) and subsequent recovery attempts have failed.CriticalSystem
vprob.vmfs.journal.createfailedNo space for journal on volume {1} ({2}). Opening volume in read-only metadata mode with limited write support.CriticalSystem
vprob.vmfs.lock.corruptondiskAt least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume may be damaged too.CriticalSystem
vprob.vmfs.nfs.server.disconnectLost connection to server {1} mount point {2} mounted as {3} ({4}).CriticalSystem
vprob.vmfs.nfs.server.restoredRestored connection to server {1} mount point {2} mounted as {3} ({4}).InformationalSystem
vprob.vmfs.resource.corruptondiskAt least one corrupt resource metadata region was detected on volume {1} ({2}). Other regions of the volume might be damaged too.CriticalSystem
vprob.vmfs.volume.lockedVolume on device {1} locked, possibly because remote host {2} encountered an error during a volume operation and could not recover.CriticalSystem