# Event monitoring

This feature copies the events present in the VMware® vCenter to the Pandora FMS event list. These events become part of the normal Pandora FMS event flow and are automatically associated with the Agent that represents the vCenter they come from (provided that Agent exists when the event is created).

[](https://pandorafms.com/guides/public/uploads/images/gallery/2023-05/image-1684314855677.png)

When events are dumped, the information and severity reported by VMware® at event creation are preserved, so events with a critical, warning, or informational severity level keep those levels in Pandora FMS. The following image shows an example of the detailed information of an event dumped from VMware to Pandora FMS.

[](https://pandorafms.com/guides/public/uploads/images/gallery/2023-05/image-1684314893073.png)

With all these events available in Pandora FMS you can perform every action available for event management, such as creating alerts, configuring filters, opening incidents, and so on.
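As a rough illustration of the severity correspondence described above, the short Python sketch below maps the severity reported by VMware to the labels used in the table that follows. The dictionary, function name, and fallback behaviour are assumptions made for this example only; they are not the plugin's actual code.

```python
# Illustrative sketch only: map the severity reported by VMware for an event
# to the severity label shown in the Pandora FMS event list, as described above.
# The mapping and the Informational fallback are assumptions for this example.

VMWARE_TO_PANDORA_SEVERITY = {
    "error": "Critical",
    "warning": "Warning",
    "info": "Informational",
}

def pandora_severity(vmware_severity: str) -> str:
    """Return the Pandora FMS severity label for a VMware event severity."""
    return VMWARE_TO_PANDORA_SEVERITY.get(vmware_severity.lower(), "Informational")

if __name__ == "__main__":
    print(pandora_severity("error"))    # Critical
    print(pandora_severity("warning"))  # Warning
    print(pandora_severity("info"))     # Informational
```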
#### Event table

This list of events is provided to make it easier to configure event alerts in Pandora FMS. For a complete, up-to-date reference of all possible events, consult the relevant VMware® documentation.
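When configuring an event alert against one of the messages in the table, the `{…}` placeholders stand for variable text. The following Python sketch, which is only an illustration and not part of Pandora FMS, shows one way to turn a template from the table into a regular expression suitable for an event filter:

```python
import re

def template_to_regex(template: str) -> str:
    """Convert a VMware event template from the table into a matching regex.

    Literal text is escaped and every {placeholder} becomes a non-greedy
    wildcard, so the resulting pattern matches the real event text.
    """
    parts = re.split(r"\{[^{}]+\}", template)        # split on {placeholders}
    return ".*?".join(re.escape(p) for p in parts)   # escape literals, join with wildcards

if __name__ == "__main__":
    template = "Alarm '{alarm.name}' on {entity.name} triggered an action"
    pattern = template_to_regex(template)
    event_text = "Alarm 'Disk usage' on esxi01.example.com triggered an action"
    print(pattern)
    print(bool(re.search(pattern, event_text)))  # True
```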
Event | Severity | Event type |
---|---|---|
An account was created on host {host.name} | Informational | System |
Account {account} was removed on host {host.name} | Informational | System |
An account was updated on host {host.name} | Informational | System |
The default password for the root user on the host {host.name} has not been changed | Informational | System |
Alarm '{alarm.name}' on {entity.name} triggered an action | Informational | System |
Created alarm '{alarm.name}' on {entity.name} | Informational | System |
Alarm '{alarm.name}' on {entity.name} sent email to {to} | Informational | System |
Alarm '{alarm.name}' on {entity.name} cannot send email to {to} | Critical | System |
Reconfigured alarm '{alarm.name}' on {entity.name} | Informational | System |
Removed alarm '{alarm.name}' on {entity.name} | Informational | System |
Alarm '{alarm.name}' on {entity.name} ran script {script} | Informational | System |
Alarm '{alarm.name}' on {entity.name} did not complete script: {reason.msg} | Critical | System |
Alarm '{alarm.name}': an SNMP trap for entity {entity.name} was sent | Informational | System |
Alarm '{alarm.name}' on entity {entity.name} did not send SNMP trap: {reason.msg} | Critical | System |
Alarm '{alarm.name}' on {entity.name} changed from {from.@enum.ManagedEntity.Status} to {to.@enum.ManagedEntity.Status} | Informational | System |
All running virtual machines are licensed | Informational | System |
User cannot logon since the user is already logged on | Informational | System |
Cannot login {userName}@{ipAddress} | Critical | System |
The operation performed on host {host.name} in {datacenter.name} was canceled | Informational | System |
Changed ownership of file name {filename} from {oldOwner} to {newOwner} on {host.name} in {datacenter.name}. | Informational | System |
Cannot change ownership of file name {filename} from {owner} to {attemptedOwner} on {host.name} in {datacenter.name}. | Critical | System |
Checked cluster for compliance | Informational | System |
Created cluster {computeResource.name} in {datacenter.name} | Informational | System |
Removed cluster {computeResource.name} in datacenter {datacenter.name} | Informational | System |
Insufficient capacity in cluster {computeResource.name} to satisfy resource configuration in {datacenter.name} | Critical | System |
Reconfigured cluster {computeResource.name} in datacenter {datacenter.name} | Informational | System |
Configuration status on cluster {computeResource.name} changed from {oldStatus.@enum.ManagedEntity.Status} to {newStatus.@enum.ManagedEntity.Status} in {datacenter.name} | Informational | System |
Created new custom field definition {name} | Informational | System |
Removed field definition {name} | Informational | System |
Renamed field definition from {name} to {newName} | Informational | System |
Changed custom field {name} on {entity.name} in {datacenter.name} to {value} | Informational | System |
Cannot complete customization of VM {vm.name}. See customization log at {logLocation} on the guest OS for details. | Informational | System |
An error occurred while setting up Linux identity. See log file '{logLocation}' on guest OS for details. | Critical | System |
An error occurred while setting up network properties of the guest OS. See the log file {logLocation} in the guest OS for details. | Critical | System |
Started customization of VM {vm.name}. Customization log located at {logLocation} in the guest OS. | Informational | System |
Customization of VM {vm.name} succeeded. Customization log located at {logLocation} in the guest OS. | Informational | System |
The version of Sysprep {sysprepVersion} provided for customizing VM {vm.name} does not match the version of guest OS {systemVersion}. See the log file {logLocation} in the guest OS for more information. | Critical | System |
An error occurred while customizing VM {vm.name}. For details reference the log file {logLocation} in the guest OS. | Critical | System |
dvPort group {net.name} in {datacenter.name} was added to switch {dvs.name}. | Informational | System |
dvPort group {net.name} in {datacenter.name} was deleted. | Informational | System |
dvPort group {net.name} in {datacenter.name} was reconfigured. | Informational | System |
dvPort group {oldName} in {datacenter.name} was renamed to {newName} | Informational | System |
HA admission control disabled on cluster {computeResource.name} in {datacenter.name} | Informational | System |
HA admission control enabled on cluster {computeResource.name} in {datacenter.name} | Informational | System |
Re-established contact with a primary host in this HA cluster | Informational | System |
Unable to contact a primary HA agent in cluster {computeResource.name} in {datacenter.name} | Critical | System |
All hosts in the HA cluster {computeResource.name} in {datacenter.name} were isolated from the network. Check the network configuration for proper network redundancy in the management network. | Critical | System |
HA disabled on cluster {computeResource.name} in {datacenter.name} | Informational | System |
HA enabled on cluster {computeResource.name} in {datacenter.name} | Informational | System |
A possible host failure has been detected by HA on {failedHost.name} in cluster {computeResource.name} in {datacenter.name} | Critical | System |
Host {isolatedHost.name} has been isolated from cluster {computeResource.name} in {datacenter.name} | Warning | System |
Created datacenter {datacenter.name} in folder {parent.name} | Informational | System |
Renamed datacenter from {oldName} to {newName} | Informational | System |
Datastore {datastore.name} increased in capacity from {oldCapacity} bytes to {newCapacity} bytes in {datacenter.name} | Informational | System |
Removed unconfigured datastore {datastore.name} | Informational | System |
Discovered datastore {datastore.name} on {host.name} in {datacenter.name} | Informational | System |
Multiple datastores named {datastore} detected on host {host.name} in {datacenter.name} | Critical | System |
<internal> | Informational | System |
File or directory {sourceFile} copied from {sourceDatastore.name} to {datastore.name} as {targetFile} | Informational | System |
File or directory {targetFile} deleted from {datastore.name} | Informational | System |
File or directory {sourceFile} moved from {sourceDatastore.name} to {datastore.name} as {targetFile} | Informational | System |
Reconfigured Storage I/O Control on datastore {datastore.name} | Informational | System |
Configured datastore principal {datastorePrincipal} on host {host.name} in {datacenter.name} | Informational | System |
Removed datastore {datastore.name} from {host.name} in {datacenter.name} | Informational | System |
Renamed datastore from {oldName} to {newName} in {datacenter.name} | Informational | System |
Renamed datastore from {oldName} to {newName} in {datacenter.name} | Informational | System |
Disabled DRS on cluster {computeResource.name} in datacenter {datacenter.name} | Informational | System |
Enabled DRS on {computeResource.name} with automation level {behavior} in {datacenter.name} | Informational | System |
DRS put {host.name} into standby mode | Informational | System |
DRS is putting {host.name} into standby mode | Informational | System |
DRS cannot move {host.name} out of standby mode | Critical | System |
DRS moved {host.name} out of standby mode | Informational | System |
DRS is moving {host.name} out of standby mode | Informational | System |
DRS invocation not completed | Critical | System |
DRS has recovered from the failure | Informational | System |
Unable to apply DRS resource settings on host {host.name} in {datacenter.name}. {reason.msg}. This can significantly reduce the effectiveness of DRS. | Critical | System |
Resource configuration specification returns to synchronization from previous failure on host '{host.name}' in {datacenter.name} | Informational | System |
{vm.name} on {host.name} in {datacenter.name} is now compliant with DRS VM-Host affinity rules | Informational | System |
{vm.name} on {host.name} in {datacenter.name} is violating a DRS VM-Host affinity rule | Informational | System |
DRS migrated {vm.name} from {sourceHost.name} to {host.name} in cluster {computeResource.name} in {datacenter.name} | Informational | System |
DRS powered On {vm.name} on {host.name} in {datacenter.name} | Informational | System |
Virtual machine {macAddress} on host {host.name} has a duplicate IP {duplicateIP} | Informational | System |
A vNetwork Distributed Switch {dvs.name} was created in {datacenter.name}. | Informational | System |
vNetwork Distributed Switch {dvs.name} in {datacenter.name} was deleted. | Informational | System |
vNetwork Distributed Switch event | Informational | System |
The vNetwork Distributed Switch {dvs.name} configuration on the host was synchronized with that of the vCenter Server. | Informational | System |
The host {hostJoined.name} joined the vNetwork Distributed Switch {dvs.name} in {datacenter.name}. | Informational | System |
The host {hostLeft.name} left the vNetwork Distributed Switch {dvs.name} in {datacenter.name}. | Informational | System |
The host {hostMember.name} changed status on the vNetwork Distributed Switch {dvs.name} in {datacenter.name}. | Informational | System |
The vNetwork Distributed Switch {dvs.name} configuration on the host differed from that of the vCenter Server. | Warning | System |
vNetwork Distributed Switch {srcDvs.name} was merged into {dstDvs.name} in {datacenter.name}. | Informational | System |
dvPort {portKey} was blocked in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}. | Informational | System |
The port {portKey} was connected in the vNetwork Distributed Switch {dvs.name} in {datacenter.name} | Informational | System |
New ports were created in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}. | Informational | System |
Deleted ports in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}. | Informational | System |
The dvPort {portKey} was disconnected in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}. | Informational | System |
dvPort {portKey} entered passthrough mode in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}. | Informational | System |
dvPort {portKey} exited passthrough mode in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}. | Informational | System |
dvPort {portKey} was moved into the dvPort group {portgroupName} in {datacenter.name}. | Informational | System |
dvPort {portKey} was moved out of the dvPort group {portgroupName} in {datacenter.name}. | Informational | System |
The port {portKey} link was down in the vNetwork Distributed Switch {dvs.name} in {datacenter.name} | Informational | System |
The port {portKey} link was up in the vNetwork Distributed Switch {dvs.name} in {datacenter.name} | Informational | System |
Reconfigured ports in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}. | Informational | System |
dvPort {portKey} was unblocked in the vNetwork Distributed Switch {dvs.name} in {datacenter.name}. | Informational | System |
The vNetwork Distributed Switch {dvs.name} in {datacenter.name} was reconfigured. | Informational | System |
The vNetwork Distributed Switch {oldName} in {datacenter.name} was renamed to {newName}. | Informational | System |
An upgrade for the vNetwork Distributed Switch {dvs.name} in datacenter {datacenter.name} is available. | Informational | System |
An upgrade for the vNetwork Distributed Switch {dvs.name} in datacenter {datacenter.name} is in progress. | Informational | System |
Cannot complete an upgrade for the vNetwork Distributed Switch {dvs.name} in datacenter {datacenter.name} | Informational | System |
vNetwork Distributed Switch {dvs.name} in datacenter {datacenter.name} was upgraded. | Informational | System |
Host {host.name} in {datacenter.name} has entered maintenance mode | Informational | System |
The host {host.name} is in standby mode | Informational | System |
Host {host.name} in {datacenter.name} has started to enter maintenance mode | Informational | System |
The host {host.name} is entering standby mode | Informational | System |
{message} | Critical | System |
Host {host.name} in {datacenter.name} has exited maintenance mode | Informational | System |
The host {host.name} could not exit standby mode | Critical | System |
The host {host.name} is no longer in standby mode | Informational | System |
The host {host.name} is exiting standby mode | Informational | System |
Sufficient resources are available to satisfy HA failover level in cluster {computeResource.name} in {datacenter.name} | Informational | System |
General event: {message} | Informational | System |
Error detected on {host.name} in {datacenter.name}: {message} | Critical | System |
Issue detected on {host.name} in {datacenter.name}: {message} | Informational | System |
Issue detected on {host.name} in {datacenter.name}: {message} | Warning | System |
User logged event: {message} | Informational | System |
Error detected for {vm.name} on {host.name} in {datacenter.name}: {message} | Critical | System |
Issue detected for {vm.name} on {host.name} in {datacenter.name}: {message} | Informational | System |
Issue detected for {vm.name} on {host.name} in {datacenter.name}: {message} | Warning | System |
The vNetwork Distributed Switch corresponding to the proxy switches {switchUuid} on the host {host.name} does not exist in vCenter Server or does not contain this host. | Informational | System |
A ghost proxy switch {switchUuid} on the host {host.name} was resolved. | Informational | System |
The message changed: {message} | Informational | System |
{componentName} status changed from {oldStatus} to {newStatus} | Informational | System |
Cannot add host {hostname} to datacenter {datacenter.name} | Critical | System |
Added host {host.name} to datacenter {datacenter.name} | Informational | System |
Administrator access to the host {host.name} is disabled | Warning | System |
Administrator access to the host {host.name} has been restored | Warning | System |
Cannot connect {host.name} in {datacenter.name}: cannot configure management account | Critical | System |
Cannot connect {host.name} in {datacenter.name}: already managed by {serverName} | Critical | System |
Cannot connect host {host.name} in {datacenter.name} : server agent is not responding | Critical | System |
Cannot connect {host.name} in {datacenter.name}: incorrect user name or password | Critical | System |
Cannot connect {host.name} in {datacenter.name}: incompatible version | Critical | System |
Cannot connect host {host.name} in {datacenter.name}. Did not install or upgrade vCenter agent service. | Critical | System |
Cannot connect {host.name} in {datacenter.name}: error connecting to host | Critical | System |
Cannot connect {host.name} in {datacenter.name}: network error | Critical | System |
Cannot connect host {host.name} in {datacenter.name}: account has insufficient privileges | Critical | System |
Cannot connect host {host.name} in {datacenter.name} | Critical | System |
Cannot connect {host.name} in {datacenter.name}: not enough CPU licenses | Critical | System |
Cannot connect {host.name} in {datacenter.name}: incorrect host name | Critical | System |
Cannot connect {host.name} in {datacenter.name}: time-out waiting for host response | Critical | System |
Host {host.name} checked for compliance. | Informational | System |
Host {host.name} is in compliance with the attached profile | Informational | System |
Host configuration changes applied. | Informational | System |
Connected to {host.name} in {datacenter.name} | Informational | System |
Host {host.name} in {datacenter.name} is not responding | Critical | System |
dvPort connected to host {host.name} in {datacenter.name} changed status | Informational | System |
HA agent disabled on {host.name} in cluster {computeResource.name} in {datacenter.name} | Informational | System |
HA is being disabled on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} | Informational | System |
HA agent enabled on {host.name} in cluster {computeResource.name} in {datacenter.name} | Informational | System |
Enabling HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} | Warning | System |
HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error {message}: {reason.@enum.HostDasErrorEvent.HostDasErrorReason} | Critical | System |
HA agent on host {host.name} in cluster {computeResource.name} in {datacenter.name} is configured correctly | Informational | System |
Disconnected from {host.name} in {datacenter.name}. Reason: {reason.@enum.HostDisconnectedEvent.ReasonCode} | Informational | System |
Cannot restore some administrator permissions to the host {host.name} | Critical | System |
Host {host.name} has the following extra networks not used by other hosts for HA communication:{ips}. Consider using HA advanced option das.allowNetwork to control network usage | Critical | System |
Cannot complete command 'hostname -s' on host {host.name} or returned incorrect name format | Critical | System |
Maximum ({capacity}) number of hosts allowed for this edition of vCenter Server has been reached | Critical | System |
The virtual machine inventory file on host {host.name} is damaged or unreadable. | Informational | System |
IP address of the host {host.name} changed from {oldIP} to {newIP} | Informational | System |
Configuration of host IP address is inconsistent on host {host.name}: address resolved to {ipAddress} and {ipAddress2} | Critical | System |
Cannot resolve IP address to short name on host {host.name} | Critical | System |
Host {host.name} could not reach isolation address: {isolationIp} | Critical | System |
A host license for {host.name} has expired | Critical | System |
Host {host.name} does not have the following networks used by other hosts for HA communication:{ips}. Consider using HA advanced option das.allowNetwork to control network usage | Critical | System |
Host monitoring state in {computeResource.name} in {datacenter.name} changed to {state.@enum.DasConfigInfo.ServiceState} | Informational | System |
Host {host.name} currently has no available networks for HA Communication. The following networks are currently used by HA: {ips} | Critical | System |
Host {host.name} has no port groups enabled for HA communication. | Critical | System |
Host {host.name} currently has no management network redundancy | Critical | System |
Host {host.name} is not in compliance with the attached profile | Critical | System |
Host {host.name} is not a cluster member in {datacenter.name} | Critical | System |
Insufficient capacity in host {computeResource.name} to satisfy resource configuration in {datacenter.name} | Critical | System |
Primary agent {primaryAgent} was not specified as a short name to host {host.name} | Critical | System |
Profile is applied on the host {host.name} | Informational | System |
Cannot reconnect to {host.name} in {datacenter.name} | Critical | System |
Removed host {host.name} in {datacenter.name} | Informational | System |
Host names {shortName} and {shortName2} both resolved to the same IP address. Check the host's network configuration and DNS entries | Critical | System |
Cannot resolve short name {shortName} to IP address on host {host.name} | Critical | System |
Shut down of {host.name} in {datacenter.name}: {reason} | Informational | System |
Configuration status on host {computeResource.name} changed from {oldStatus.@enum.ManagedEntity.Status} to {newStatus.@enum.ManagedEntity.Status} in {datacenter.name} | Informational | System |
Cannot synchronize host {host.name}. {reason.msg} | Critical | System |
Cannot install or upgrade vCenter agent service on {host.name} in {datacenter.name} | Critical | System |
The userworld swap is not enabled on the host {host.name} | Warning | System |
Host {host.name} vNIC {vnic.vnic} was reconfigured to use dvPort {vnic.port.portKey} with port level configuration, which might be different from the dvPort group. | Informational | System |
WWNs are changed for {host.name} | Warning | System |
The WWN ({wwn}) of {host.name} conflicts with the currently registered WWN | Critical | System |
Host {host.name} did not provide the information needed to acquire the correct set of licenses | Critical | System |
{message} | Informational | System |
Insufficient resources to satisfy HA failover level on cluster {computeResource.name} in {datacenter.name} | Critical | System |
The license edition '{feature}' is invalid | Critical | System |
License {feature.featureName} has expired | Critical | System |
License inventory is not compliant. Licenses are overused | Critical | System |
Unable to acquire licenses due to a restriction in the option file on the license server. | Critical | System |
License server {licenseServer} is available | Informational | System |
License server {licenseServer} is unavailable | Critical | System |
Created local datastore {datastore.name} on {host.name} in {datacenter.name} | Informational | System |
The Local Tech Support Mode for the host {host.name} has been enabled | Informational | System |
Datastore {datastore} which is configured to back the locker does not exist | Warning | System |
Locker was reconfigured from {oldDatastore} to {newDatastore} datastore | Informational | System |
Unable to migrate {vm.name} from {host.name} in {datacenter.name}: {fault.msg} | Critical | System |
Unable to migrate {vm.name} from {host.name} to {dstHost.name} in {datacenter.name}: {fault.msg} | Critical | System |
Migration of {vm.name} from {host.name} to {dstHost.name} in {datacenter.name}: {fault.msg} | Warning | System |
Cannot migrate {vm.name} from {host.name} to {dstHost.name} and resource pool {dstPool.name} in {datacenter.name}: {fault.msg} | Critical | System |
Migration of {vm.name} from {host.name} to {dstHost.name} and resource pool {dstPool.name} in {datacenter.name}: {fault.msg} | Warning | System |
Migration of {vm.name} from {host.name} in {datacenter.name}: {fault.msg} | Warning | System |
Created NAS datastore {datastore.name} on {host.name} in {datacenter.name} | Informational | System |
Cannot login user {userName}@{ipAddress}: no permission | Critical | System |
No datastores have been configured on the host {host.name} | Informational | System |
A required license {feature.featureName} is not reserved | Critical | System |
Unable to automatically migrate {vm.name} from {host.name} | Informational | System |
Non-VI workload detected on datastore {datastore.name} | Critical | System |
Not enough resources to failover {vm.name} in {computeResource.name} in {datacenter.name} | Informational | System |
The vNetwork Distributed Switch configuration on some hosts differed from that of the vCenter Server. | Warning | System |
Permission created for {principal} on {entity.name}, role is {role.name}, propagation is {propagate.@enum.auth.Permission.propagate} | Informational | System |
Permission rule removed for {principal} on {entity.name} | Informational | System |
Permission changed for {principal} on {entity.name}, role is {role.name}, propagation is {propagate.@enum.auth.Permission.propagate} | Informational | System |
Profile {profile.name} attached. | Informational | System |
Profile {profile.name} was changed. | Informational | System |
Profile is created. | Informational | System |
Profile {profile.name} detached. | Informational | System |
Profile {profile.name} reference host changed. | Informational | System |
Profile was removed. | Informational | System |
Remote Tech Support Mode (SSH) for the host {host.name} has been enabled | Informational | System |
Created resource pool {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name} | Informational | System |
Removed resource pool {resourcePool.name} on {computeResource.name} in {datacenter.name} | Informational | System |
Moved resource pool {resourcePool.name} from {oldParent.name} to {newParent.name} on {computeResource.name} in {datacenter.name} | Informational | System |
Updated configuration for {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name} | Informational | System |
Resource usage exceeds configuration for resource pool {resourcePool.name} in compute-resource {computeResource.name} in {datacenter.name} | Critical | System |
New role {role.name} created | Informational | System |
Role {role.name} removed | Informational | System |
Modified role {role.name} | Informational | System |
Task {scheduledTask.name} on {entity.name} in {datacenter.name} completed successfully | Informational | System |
Created task {scheduledTask.name} on {entity.name} in {datacenter.name} | Informational | System |
Task {scheduledTask.name} on {entity.name} in {datacenter.name} sent email to {to} | Informational | System |
Task {scheduledTask.name} on {entity.name} in {datacenter.name} cannot send email to {to}: {reason.msg} | Critical | System |
Task {scheduledTask.name} on {entity.name} in {datacenter.name} cannot be completed: {reason.msg} | Critical | System |
Reconfigured task {scheduledTask.name} on {entity.name} in {datacenter.name} | Informational | System |
Removed task {scheduledTask.name} on {entity.name} in {datacenter.name} | Informational | System |
Running task {scheduledTask.name} on {entity.name} in {datacenter.name} | Informational | System |
A vCenter Server license has expired | Critical | System |
vCenter started | Informational | System |
A session for user '{terminatedUsername}' has stopped | Informational | System |
Task: {info.descriptionId} | Informational | System |
Task: {info.descriptionId} time-out | Informational | System |
Upgrading template {legacyTemplate} | Informational | System |
Cannot upgrade template {legacyTemplate} due to: {reason.msg} | Informational | System |
Template {legacyTemplate} upgrade completed | Informational | System |
The operation performed on {host.name} in {datacenter.name} timed out | Warning | System |
There are {unlicensed} unlicensed virtual machines on host {host} - there are only {available} licenses available | Informational | System |
{unlicensed} unlicensed virtual machines found on host {host} | Informational | System |
The agent on host {host.name} is updated and will soon restart | Informational | System |
User {userLogin} was added to group {group} | Informational | System |
User {userName}@{ipAddress} logged in | Informational | System |
User {userName} logged out | Informational | System |
Password was changed for account {userLogin} on host {host.name} | Informational | System |
User {userLogin} removed from group {group} | Informational | System |
{message} | Informational | System |
Created VMFS datastore {datastore.name} on {host.name} in {datacenter.name} | Informational | System |
Expanded VMFS datastore {datastore.name} on {host.name} in {datacenter.name} | Informational | System |
Extended VMFS datastore {datastore.name} on {host.name} in {datacenter.name} | Informational | System |
A vMotion license for {host.name} has expired | Critical | System |
Cannot uninstall vCenter agent from {host.name} in {datacenter.name}. {reason.@enum.fault.AgentInstallFailed.Reason} | Critical | System |
vCenter agent has been uninstalled from {host.name} in {datacenter.name} | Informational | System |
Cannot upgrade vCenter agent on {host.name} in {datacenter.name}. {reason.@enum.fault.AgentInstallFailed.Reason} | Critical | System |
vCenter agent has been upgraded on {host.name} in {datacenter.name} | Informational | System |
VIM account password was changed on host {host.name} | Informational | System |
Remote console to {vm.name} on {host.name} in {datacenter.name} has been opened | Informational | System |
A ticket for {vm.name} of type {ticketType} on {host.name} in {datacenter.name} has been acquired | Informational | System |
Invalid name for {vm.name} on {host.name} in {datacenter.name}. Renamed from {oldName} to {newName} | Informational | System |
Cloning {vm.name} on host {host.name} in {datacenter.name} to {destName} on host {destHost.name} | Informational | System |
Cloning {vm.name} on host {host.name} in {datacenter.name} to {destName} on host {destHost.name} | Informational | System |
Creating {vm.name} on host {host.name} in {datacenter.name} | Informational | System |
Deploying {vm.name} on host {host.name} in {datacenter.name} from template {srcTemplate.name} | Informational | System |
Migrating {vm.name} from {host.name} to {destHost.name} in {datacenter.name} | Informational | System |
Relocating {vm.name} from {host.name} to {destHost.name} in {datacenter.name} | Informational | System |
Relocating {vm.name} in {datacenter.name} from {host.name} to {destHost.name} | Informational | System |
Cannot clone {vm.name}: {reason.msg} | Critical | System |
Clone of {sourceVm.name} completed | Informational | System |
Configuration file for {vm.name} on {host.name} in {datacenter.name} cannot be found | Informational | System |
Virtual machine {vm.name} is connected | Informational | System |
Created virtual machine {vm.name} on {host.name} in {datacenter.name} | Informational | System |
dvPort connected to VM {vm.name} on {host.name} in {datacenter.name} changed status | Informational | System |
{vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} reset by HA. Reason: {reason.@enum.VmDasBeingResetEvent.ReasonCode} | Informational | System |
{vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} reset by HA. Reason: {reason.@enum.VmDasBeingResetEvent.ReasonCode}. A screenshot is saved at {screenshotFilePath}. | Informational | System |
Cannot reset {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} | Warning | System |
Unable to update HA agents given the state of {vm.name} | Critical | System |
HA agents have been updated with the current state of the virtual machine | Informational | System |
Disconnecting all hosts as the date of virtual machine {vm.name} has been rolled back | Critical | System |
Cannot deploy template: {reason.msg} | Critical | System |
Template {srcTemplate.name} deployed on host {host.name} | Informational | System |
{vm.name} on host {host.name} in {datacenter.name} is disconnected | Informational | System |
Discovered {vm.name} on {host.name} in {datacenter.name} | Informational | System |
Cannot create virtual disk {disk} | Critical | System |
Migrating {vm.name} off host {host.name} in {datacenter.name} | Informational | System |
End a recording session on {vm.name} | Informational | System |
End a replay session on {vm.name} | Informational | System |
Cannot migrate {vm.name} from {host.name} to {destHost.name} in {datacenter.name} | Critical | System |
Cannot complete relayout {vm.name} on {host.name} in {datacenter.name}: {reason.msg} | Critical | System |
Cannot complete relayout for virtual machine {vm.name} which has disks on a VMFS2 volume. | Critical | System |
vCenter cannot start the Secondary VM {vm.name}. Reason: {reason.@enum.VmFailedStartingSecondaryEvent.FailureReason} | Critical | System |
Cannot power Off {vm.name} on {host.name} in {datacenter.name}: {reason.msg} | Critical | System |
Cannot power On {vm.name} on {host.name} in {datacenter.name}. {reason.msg} | Critical | System |
Cannot reboot the guest OS for {vm.name} on {host.name} in {datacenter.name}. {reason.msg} | Critical | System |
Cannot suspend {vm.name} on {host.name} in {datacenter.name}: {reason.msg} | Critical | System |
{vm.name} cannot shut down the guest OS on {host.name} in {datacenter.name}: {reason.msg} | Critical | System |
{vm.name} cannot standby the guest OS on {host.name} in {datacenter.name}: {reason.msg} | Critical | System |
Cannot suspend {vm.name} on {host.name} in {datacenter.name}: {reason.msg} | Critical | System |
vCenter cannot update the Secondary VM {vm.name} configuration | Critical | System |
Failover unsuccessful for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}. Reason: {reason.msg} | Warning | System |
Fault Tolerance state on {vm.name} changed from {oldState.@enum.VirtualMachine.FaultToleranceState} to {newState.@enum.VirtualMachine.FaultToleranceState} | Informational | System |
Fault Tolerance protection has been turned off for {vm.name} | Informational | System |
The Fault Tolerance VM ({vm.name}) has been terminated. {reason.@enum.VmFaultToleranceVmTerminatedEvent.TerminateReason} | Informational | System |
Guest OS reboot for {vm.name} on {host.name} in {datacenter.name} | Informational | System |
Guest OS shut down for {vm.name} on {host.name} in {datacenter.name} | Informational | System |
Guest OS standby for {vm.name} on {host.name} in {datacenter.name} | Informational | System |
VM monitoring state in {computeResource.name} in {datacenter.name} changed to {state.@enum.DasConfigInfo.VmMonitoringState} | Informational | System |
Assign a new instance UUID ({instanceUuid}) to {vm.name} | Informational | System |
The instance UUID of {vm.name} has been changed from ({oldInstanceUuid}) to ({newInstanceUuid}) | Informational | System |
The instance UUID ({instanceUuid}) of {vm.name} conflicts with the instance UUID assigned to {conflictedVm.name} | Critical | System |
New MAC address ({mac}) assigned to adapter {adapter} for {vm.name} | Informational | System |
Changed MAC address from {oldMac} to {newMac} for adapter {adapter} for {vm.name} | Warning | System |
The MAC address ({mac}) of {vm.name} conflicts with MAC assigned to {conflictedVm.name} | Critical | System |
Reached maximum Secondary VM (with FT turned On) restart count for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}. | Warning | System |
Reached maximum VM restart count for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name}. | Warning | System |
Error message on {vm.name} on {host.name} in {datacenter.name}: {message} | Critical | System |
Message on {vm.name} on {host.name} in {datacenter.name}: {message} | Informational | System |
Warning message on {vm.name} on {host.name} in {datacenter.name}: {message} | Warning | System |
Migration of virtual machine {vm.name} from {sourceHost.name} to {host.name} completed | Informational | System |
No compatible host for the Secondary VM {vm.name} | Critical | System |
Not all networks for {vm.name} are accessible by {destHost.name} | Warning | System |
{vm.name} does not exist on {host.name} in {datacenter.name} | Warning | System |
{vm.name} was powered Off on the isolated host {isolatedHost.name} in cluster {computeResource.name} in {datacenter.name} | Informational | System |
{vm.name} on {host.name} in {datacenter.name} is powered off | Informational | System |
{vm.name} on {host.name} in {datacenter.name} is powered on | Informational | System |
Virtual machine {vm.name} powered On with vNICs connected to dvPorts that have a port level configuration, which might be different from the dvPort group configuration. | Informational | System |
VM ({vm.name}) failed over to {host.name}. {reason.@enum.VirtualMachine.NeedSecondaryReason} | Critical | System |
Reconfigured {vm.name} on {host.name} in {datacenter.name} | Informational | System |
Registered {vm.name} on {host.name} in {datacenter.name} | Informational | System |
Relayout of {vm.name} on {host.name} in {datacenter.name} completed | Informational | System |
{vm.name} on {host.name} in {datacenter.name} is in the correct format and relayout is not necessary | Informational | System |
{vm.name} on {host.name} reloaded from new configuration {configPath}. | Informational | System |
{vm.name} on {host.name} could not be reloaded from {configPath}. | Critical | System |
Cannot relocate virtual machine '{vm.name}' in {datacenter.name} | Critical | System |
Completed the relocation of the virtual machine | Informational | System |
Remote console connected to {vm.name} on host {host.name} | Informational | System |
Remote console disconnected from {vm.name} on host {host.name} | Informational | System |
Removed {vm.name} on {host.name} from {datacenter.name} | Informational | System |
Renamed {vm.name} from {oldName} to {newName} in {datacenter.name} | Warning | System |
{vm.name} on {host.name} in {datacenter.name} is reset | Informational | System |
Moved {vm.name} from resource pool {oldParent.name} to {newParent.name} in {datacenter.name} | Informational | System |
Changed resource allocation for {vm.name} | Informational | System |
Virtual machine {vm.name} was restarted on {host.name} since {sourceHost.name} failed | Informational | System |
{vm.name} on {host.name} in {datacenter.name} is resumed | Informational | System |
A Secondary VM has been added for {vm.name} | Informational | System |
vCenter disabled Fault Tolerance on VM '{vm.name}' because the Secondary VM could not be powered On. | Critical | System |
Disabled Secondary VM for {vm.name} | Informational | System |
Enabled Secondary VM for {vm.name} | Informational | System |
Started Secondary VM for {vm.name} | Informational | System |
{vm.name} was shut down on the isolated host {isolatedHost.name} in cluster {computeResource.name} in {datacenter.name}: {shutdownResult.@enum.VmShutdownOnIsolationEvent.Operation} | Informational | System |
Start a recording session on {vm.name} | Informational | System |
Start a replay session on {vm.name} | Informational | System |
{vm.name} on host {host.name} in {datacenter.name} is starting | Informational | System |
Starting Secondary VM for {vm.name} | Informational | System |
The static MAC address ({mac}) of {vm.name} conflicts with MAC assigned to {conflictedVm.name} | Critical | System |
{vm.name} on {host.name} in {datacenter.name} is stopping | Informational | System |
{vm.name} on {host.name} in {datacenter.name} is suspended | Informational | System |
{vm.name} on {host.name} in {datacenter.name} is being suspended | Informational | System |
Starting the Secondary VM {vm.name} timed out within {timeout} ms | Critical | System |
Unsupported guest OS {guestId} for {vm.name} on {host.name} in {datacenter.name} | Warning | System |
Virtual hardware upgraded to version {version} | Informational | System |
Cannot upgrade virtual hardware | Critical | System |
Upgrading virtual hardware on {vm.name} in {datacenter.name} to version {version} | Informational | System |
Assigned new BIOS UUID ({uuid}) to {vm.name} on {host.name} in {datacenter.name} | Informational | System |
Changed BIOS UUID from {oldUuid} to {newUuid} for {vm.name} on {host.name} in {datacenter.name} | Warning | System |
BIOS ID ({uuid}) of {vm.name} conflicts with that of {conflictedVm.name} | Critical | System |
New WWNs assigned to {vm.name} | Informational | System |
WWNs are changed for {vm.name} | Warning | System |
The WWN ({wwn}) of {vm.name} conflicts with the currently registered WWN | Critical | System |
{message} | Warning | System |
Booting from iSCSI failed with an error. See the VMware Knowledge Base for information on configuring iBFT networking. | Warning | System |
License {licenseKey} added to VirtualCenter | Informational | System |
License {licenseKey} assigned to asset {entityName} with id {entityId} | Informational | System |
Failed to download license information from the host {hostname} due to {errorReason.@enum.com.vmware.license.DLFDownloadFailedEvent.DLFDownloadFailedReason} | Warning | System |
License assignment on the host fails. Reasons: {errorMessage.@enum.com.vmware.license.LicenseAssignError}. | Informational | System |
Your host license will expire in {remainingDays} days. The host will be disconnected from VC when its license expires. | Warning | System |
Current license usage ({currentUsage} {costUnitText}) for {edition} exceeded the user-defined threshold ({threshold} {costUnitText}) | Warning | System |
License {licenseKey} removed from VirtualCenter | Informational | System |
License unassigned from asset {entityName} with id {entityId} | Informational | System |
HA completed a failover action in cluster {computeResource.name} in datacenter {datacenter.name} | Informational | System |
HA initiated a failover action in cluster {computeResource.name} in datacenter {datacenter.name} | Warning | System |
HA Agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is running | Informational | System |
HA failover host {host.name} in cluster {computeResource.name} in {datacenter.name} has failed | Critical | System |
All shared datastores failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name} | Critical | System |
All VM networks failed on the host {hostName} in cluster {computeResource.name} in {datacenter.name} | Critical | System |
A possible host failure has been detected by HA on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} | Critical | System |
No virtual machine failover will occur until Host Monitoring is enabled in cluster {computeResource.name} in {datacenter.name} | Warning | System |
HA recovered from a total cluster failure in cluster {computeResource.name} in datacenter {datacenter.name} | Warning | System |
HA Agent on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is healthy | Informational | System |
HA agent on {host.name} in cluster {computeResource.name} in {datacenter.name} has an error: {reason.@enum.HostDasErrorEvent.HostDasErrorReason} | Critical | System |
vCenter Service overall health changed from '{oldState}' to '{newState}' | Informational | System |
Health of \[data.group\] changed from \[data.oldState\] to \[data.newState\]. | Informational | System |
Failed to update VM files on datastore {ds.name} using host {hostName} | Critical | System |
Updated VM files on datastore {ds.name} using host {hostName} | Informational | System |
Updating VM files on datastore {ds.name} using host {hostName} | Informational | System |
VMware HA has been disabled in cluster {computeResource.name} of datacenter {datacenter.name}. HA will not restart VM {vm.name} or its Secondary VM after a failure. | Warning | System |
Network passthrough is active on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name} | Informational | System |
Network passthrough is inactive on adapter {deviceLabel} of virtual machine {vm.name} on host {host.name} in {datacenter.name} | Informational | System |
HA VM Component Protection protects virtual machine {vm.name} on {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because the FT state is disabled | Informational | System |
FT Primary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} is going to fail over to Secondary VM due to component failure | Informational | System |
FT virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to failover to secondary | Critical | System |
HA VM Component Protection is restarting FT secondary virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to component failure | Informational | System |
FT Secondary VM {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restart | Critical | System |
HA VM Component Protection protects virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} as non-FT virtual machine because it has been in the needSecondary state too long | Informational | System |
VM Component Protection test ends on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} | Informational | System |
VM Component Protection test starts on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} | Informational | System |
HA VM Component Protection did not take action on virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} due to the feature configuration setting | Informational | System |
Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {datastore} | Critical | System |
Virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} lost access to {network} | Critical | System |
HA VM Component Protection could not power off virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} successfully after trying {numTimes} times and will keep trying | Critical | System |
HA VM Component Protection is restarting virtual machine {vm.name} due to component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} | Informational | System |
Virtual machine {vm.name} affected by component failure on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} failed to restart | Critical | System |
HA VM Component Protection could not find a destination host for virtual machine {vm.name} on host {host.name} in cluster {computeResource.name} in datacenter {datacenter.name} after waiting {numSecWait} seconds and will keep trying | Critical | System |
Application monitoring is not supported on {host.name} in cluster {computeResource.name} in {datacenter.name} | Warning | System |
Application heartbeat status changed to {status} for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} | Warning | System |
Application heartbeat failed for {vm.name} on {host.name} in cluster {computeResource.name} in {datacenter.name} | Warning | System |
Network connectivity restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up. | Informational | System |
Network connectivity restored on DVPorts: {1}. Physical NIC {2} is up. | Informational | System |
Uplink redundancy restored on DVPorts: {1}. Physical NIC {2} is up. | Informational | System |
Uplink redundancy restored on virtual switch {1}, portgroups: {2}. Physical NIC {3} is up. | Informational | System |
Physical NIC {1} linkstate is up. | Informational | System |
Connectivity to storage device {1} (Datastores: {2}) restored. Path {3} is active again. | Informational | System |
Path redundancy to storage device {1} (Datastores: {2}) restored. Path {3} is active again. | Informational | System |
A corrected memory error occurred in last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10} | Critical | System |
A fatal memory error occurred in the last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10} | Critical | System |
A recoverable memory error occurred in last boot. The following details were reported. Physical Addr: {1}, Physical Addr Mask: {2}, Node: {3}, Card: {4}, Module: {5}, Bank: {6}, Device: {7}, Row: {8}, Column: {9} Error type: {10} | Critical | System |
A corrected PCIe error occurred in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}. | Critical | System |
Platform encounterd a fatal PCIe error in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}. | Critical | System |
A recoverable PCIe error occurred in last boot. The following details were reported. Port Type: {1}, Device: {2}, Bus #: {3}, Function: {4}, Slot: {5}, Device Vendor: {6}, Version: {7}, Command Register: {8}, Status Register: {9}. | Critical | System |
An external I/O activity is detected on datastore {1}, this is an unsupported configuration. Consult the Resource Management Guide or follow the Ask VMware link for more information. | Informational | System |
Lost network connectivity on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}. | Critical | System |
Lost network connectivity on DVPorts: {1}. Physical NIC {2} is down. | Critical | System |
Uplink redundancy degraded on DVPorts: {1}. Physical NIC {2} is down. | Warning | System |
Lost uplink redundancy on DVPorts: {1}. Physical NIC {2} is down. | Warning | System |
Guest-initiated IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually disable TSO inside the guest operating system in virtual machine {1}, or use a different virtual adapter. | Critical | System |
The ESX advanced configuration option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Update the configuration option with a valid vmknic. Alternatively, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank. | Warning | System |
Virtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch. | Warning | System |
Uplink redundancy degraded on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}. | Warning | System |
Lost uplink redundancy on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}. | Warning | System |
VMkernel failed to set the MTU value {1} on the uplink {2}. | Warning | System |
A duplicate IP address was detected for {1} on the interface {2}. The current owner is {3}. | Warning | System |
Physical NIC {1} linkstate is down. | Informational | System |
Uplink {1} has recovered from a transient failure due to watchdog timeout | Informational | System |
The maximum number of supported devices of {1} has been reached. A device from plugin {2} could not be created. | Critical | System |
Space utilization on thin-provisioned device {1} exceeded configured threshold. Affected datastores (if any): {2}. | Warning | System |
The maximum number of supported paths of {1} has been reached. Path {2} could not be added. | Critical | System |
Frequent PowerOn Reset Unit Attentions are occurring on device {1}. This might indicate a storage problem. Affected datastores: {2} | Warning | System |
Lost connectivity to storage device {1}. Path {2} is down. Affected datastores: {3}. | Critical | System |
Frequent PowerOn Reset Unit Attentions are occurring on path {1}. This might indicate a storage problem. Affected device: {2}. Affected datastores: {3} | Warning | System |
Frequent path state changes are occurring for path {1}. This might indicate a storage problem. Affected device: {2}. Affected datastores: {3} | Warning | System |
Path redundancy to storage device {1} degraded. Path {2} is down. Affected datastores: {3}. | Warning | System |
Lost path redundancy to storage device {1}. Path {2} is down. Affected datastores: {3}. | Warning | System |
Successfully restored access to volume {1} ({2}) following connectivity issues. | Informational | System |
Lost access to volume {1} ({2}) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly. | Informational | System |
Lost connectivity to volume {1} ({2}) and subsequent recovery attempts have failed. | Critical | System |
No space for journal on volume {1} ({2}). Opening volume in read-only metadata mode with limited write support. | Critical | System |
At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume might be damaged too. | Critical | System |
Failed to mount to the server {1} mount point {2}. {3} | Critical | System |
Failed to mount to the server {1} mount point {2}. {3} | Critical | System |
Lost connection to server {1} mount point {2} mounted as {3} ({4}). | Critical | System |
Restored connection to server {1} mount point {2} mounted as {3} ({4}). | Informational | System |
At least one corrupt resource metadata region was detected on volume {1} ({2}). Other regions of the volume might be damaged too. | Critical | System |
Volume on device {1} locked, possibly because remote host {2} encountered an error during a volume operation and could not recover. | Critical | System |
License downgrade: {licenseKey} removes the following features: {lostFeatures} | Warning | System |
Lost network connectivity on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}. | Critical | System |
Guest-initiated IPv6 TCP Segmentation Offload (TSO) packets ignored. Manually disable TSO inside the guest operating system in virtual machine {1}, or use a different virtual adapter. | Critical | System |
The ESX advanced config option /Migrate/Vmknic is set to an invalid vmknic: {1}. /Migrate/Vmknic specifies a vmknic that vMotion binds to for improved performance. Please update the config option with a valid vmknic or, if you do not want vMotion to bind to a specific vmknic, remove the invalid vmknic and leave the option blank. | Warning | System |
Virtual NIC with hardware address {1} failed to connect to distributed virtual port {2} on switch {3}. There are no more ports available on the host proxy switch. | Warning | System |
Uplink redundancy degraded on virtual switch {1}. Physical NIC {2} is down. {3} uplinks still up. Affected portgroups:{4}. | Warning | System |
Lost uplink redundancy on virtual switch {1}. Physical NIC {2} is down. Affected portgroups:{3}. | Warning | System |
Space utilization on thin-provisioned device {1} exceeded configured threshold. | Warning | System |
Lost connectivity to storage device {1}. Path {2} is down. Affected datastores: {3}. | Critical | System |
Path redundancy to storage device {1} degraded. Path {2} is down. {3} remaining active paths. Affected datastores: {4}. | Warning | System |
Lost path redundancy to storage device {1}. Path {2} is down. Affected datastores: {3}. | Warning | System |
Successfully restored access to volume {1} ({2}) following connectivity issues. | Informational | System |
Lost access to volume {1} ({2}) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly. | Informational | System |
Lost connectivity to volume {1} ({2}) and subsequent recovery attempts have failed. | Critical | System |
No space for journal on volume {1} ({2}). Opening volume in read-only metadata mode with limited write support. | Critical | System |
At least one corrupt on-disk lock was detected on volume {1} ({2}). Other regions of the volume may be damaged too. | Critical | System |
Lost connection to server {1} mount point {2} mounted as {3} ({4}). | Critical | System |
Restored connection to server {1} mount point {2} mounted as {3} ({4}). | Informational | System |
At least one corrupt resource metadata region was detected on volume {1} ({2}). Other regions of the volume might be damaged too. | Critical | System |
Volume on device {1} locked, possibly because remote host {2} encountered an error during a volume operation and could not recover. | Critical | System |