
Host Migration failed


Hello,

I have a 3-node ESXi 6.5 cluster:

ESXi120

ESXi121

ESXi122

I have one VM running on the pre-prod network. When this VM runs on ESXi121 or ESXi122 there are no issues.

If it is migrated to host ESXi120, the VM loses network connectivity. Unfortunately no other VM is using the pre-prod VLAN, so I have nothing to compare against.

All other VMs migrate without issues; only this VM has the problem.

I have tried everything I can think of. I am able to ping the VM's IP and its gateway from the ESXi120 host itself.

How can I fix this? Please suggest.
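
For reference, a minimal PowerCLI sketch for comparing the VLAN ID of the pre-prod port group across the three hosts (the port group name 'pre-prod' is a placeholder and standard vSwitches are assumed; for a distributed switch the Get-VDPortgroup equivalent applies):

# Compare the pre-prod port group and its VLAN ID on each host
foreach ($h in Get-VMHost ESXi120, ESXi121, ESXi122) {
    Get-VirtualPortGroup -VMHost $h -Name 'pre-prod' |
        Select-Object @{N='Host';E={$h.Name}}, Name, VLanId
}

If the port group or its VLAN ID differs on ESXi120, that would explain why the VM drops off the network only on that host.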


Disk consolidation needed: How to fix it manually?


Hi there

 

I've got a big (2.5TB) legacy server in an old data center that I need to move to our new data center. I have almost no knowledge about the infrastructure in our old data center and it seems that there is no working backup there (I wasn't working here when that data center was set up and nothing is documented).

 

The old data center is running vCenter 5.5.0 with 2 ESX hosts; the new data center uses vCenter 7 with 4 ESX hosts. I planned on using VEEAM Backup to move the server from the old data center to our new one (back up from the old data center, restore in the new one). I am using VEEAM because the vCenter in the old data center is so old that I have trouble getting the Standalone Converter to both read from the old data center and copy to the new one (SSL issues, for example).

 

I've got a 1GBit connection between the 2 data centers. On a weekend I attempted to start a backup while the server was running in the hopes that I could do incremental backups before finally migrating the server to the new data center. That approach failed horribly.

 

During the VEEAM backup job the legacy server stopped responding. Upon further inspection I noticed that the HA cluster, for whatever reason, decided to fail the VM over to the other ESX node. After a few minutes it switched back to the first node again, and it kept doing that, because the disk was now corrupt and the server kept freezing during the boot procedure. I am still not sure what caused the corruption; maybe there was a pre-existing problem with the VMware disk files.

 

It took us quite a while to figure out what needed to be done to get the server back up, but once it was running again we had the following disk files lying around:

  • SERVERNAME-ctk.vmdk
  • SERVERNAME-flat.vmdk
  • SERVERNAME.vmdk
  • SERVERNAME_1-ctk.vmdk
  • SERVERNAME_1-flat.vmdk
  • SERVERNAME_1.vmdk
  • SERVERNAME_2-000001-ctk.vmdk
  • SERVERNAME_2-000001-flat.vmdk
  • SERVERNAME_2-000001.vmdk
  • SERVERNAME_2-ctk.vmdk
  • SERVERNAME_2-flat.vmdk
  • SERVERNAME_2.vmdk

 

What caught my eye is that, although vCenter doesn't show any snapshots anymore, there is still a SERVERNAME_2-000001.vmdk, which suggests a snapshot of disk SERVERNAME_2. That is not the case, though: it is actually the operating system disk, whereas SERVERNAME_2.vmdk is the disk of an application data partition. It is referenced directly in the VMX file:

scsi0.virtualDev = "lsilogic"

scsi0.present = "TRUE"

scsi0:0.deviceType = "scsi-hardDisk"

scsi0:0.fileName = "SERVERNAME_2-000001.vmdk"

scsi0:0.present = "TRUE"

scsi0:0.redo = ""

scsi0.pciSlotNumber = "16"

scsi0:1.deviceType = "scsi-hardDisk"

scsi0:1.fileName = "SERVERNAME.vmdk"

scsi0:1.ctkEnabled = "TRUE"

scsi0:1.present = "TRUE"

scsi0:1.redo = ""

sched.scsi0:1.throughputCap = "off"

sched.scsi0:1.shares = "normal"

scsi0:2.deviceType = "scsi-hardDisk"

scsi0:2.fileName = "SERVERNAME_1.vmdk"

scsi0:2.ctkEnabled = "TRUE"

scsi0:2.present = "TRUE"

scsi0:2.redo = ""

scsi0:3.deviceType = "scsi-hardDisk"

scsi0:3.fileName = "SERVERNAME_2.vmdk"

scsi0:3.ctkEnabled = "TRUE"

scsi0:3.present = "TRUE"

scsi0:3.redo = ""

 

vCenter tells me that the server needs a disk consolidation, but I'm too afraid to run it, fearing that it might actually try to consolidate SERVERNAME_2-000001 and SERVERNAME_2 together, and that this might have been the issue to begin with (when the VEEAM backup job started).

 

I have no idea whether the names were already botched from the beginning or whether that is a result of the failed backup job. I've been reading a few KB articles; some seem to suggest creating a new snapshot and then deleting it, but that seems a bit risky to me on this server, as I don't have a backup. For the same reason, I don't want to run the Consolidate option in the snapshots menu.

 

How does vCenter / ESX actually detect that a consolidation is needed? Is it just based on the file names? If I were to rename SERVERNAME_2-000001 to SERVERNAME_3 and then update the VMX file, would that work, and would the vCenter warning go away?
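
For reference, my understanding is that the warning is driven by the VM's runtime state rather than by the file names alone; a minimal, read-only PowerCLI sketch to check that flag ('SERVERNAME' is a placeholder for the real VM name):

# Read the consolidation-needed flag from the vSphere API (read-only, changes nothing)
$vm = Get-VM -Name 'SERVERNAME'
$vm.ExtensionData.Runtime.ConsolidationNeeded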

 

Any help is greatly appreciated.

 

Cheers,

ahatius

Content Gateway on UAG 3.7 with File Shares

Has anyone successfully configured a file share repository pointing to a Windows file server through Content Gateway on UAG 3.7 in a Relay-Endpoint configuration?

We have it configured, console tests are successful, we can even list the contents of the repository but cannot download any files. 

If we switch the same repository back to our existing Content Gateway on Linux, it works without issue.

Thanks
Lowell

Can you add a custom Powershell module to the vRO8 PSCore instance?


Looking at Get-Module -ListAvailable, it lists the modules as being in /root/.local/share/powershell/Modules/. I'm assuming that, since this directory doesn't exist on the appliance, this path reflects something inside a container.

 

So is it possible to drop a Module folder somewhere on the appliance to get picked up by the vRO Powershell environment?
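
For illustration, the kind of thing I'd like to be able to do from a PowerShell script in vRO (the folder path and module name below are hypothetical, just to show the idea):

# Hypothetical: point the session at a custom module folder and import from it
$customPath = '/var/run/custom-ps-modules'               # hypothetical location
$env:PSModulePath = $customPath + [IO.Path]::PathSeparator + $env:PSModulePath
Import-Module MyCustomModule                             # hypothetical module name
Get-Module -ListAvailable | Select-Object Name, Path     # verify it is now visible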

 

Thanks

PowerCLI - renaming multiple datastores at the same time?


Hello everyone - is it possible to rename multiple datastores? I was able to find a one-liner that renames datastores one by one, but can someone help me rename multiple datastores at the same time? Thank you!
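
For illustration, something along these lines is what I have in mind -- a sketch that renames several datastores from an old-name/new-name mapping (the datastore names below are placeholders):

# Rename several datastores from an old-name -> new-name mapping
$renames = @{
    'datastore-old-01' = 'datastore-new-01'
    'datastore-old-02' = 'datastore-new-02'
}
foreach ($old in $renames.Keys) {
    Get-Datastore -Name $old | Set-Datastore -Name $renames[$old] -Confirm:$false
}

The same loop could read the mapping from a CSV file instead of a hashtable.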

VMware Horizon Client


I have a question about this client. When it is installed on a PC and used to connect to a VM, is the VM owner (in this case the company) granted any special privileges to alter or modify any of my local system settings?

 

I ask because during this pandemic I have been issued a laptop with the client installed. However, I prefer running my own personal desktop and was given the choice to install this client and connect to the VDI this way.

 

My concerns are:

- activity monitoring (when using my local machine, not through the VM)

- removal of applications (again, on the client not the VM)

- any sort of wiping, or administrative privileges granted through the client that would allow such operations on the machine running it (in this case the Windows desktop running the client).

 

Thank you.

POWERCLI: How to find dvswitches not used for management (VM only)


I have a script that checks dvSwitch properties. Currently I exclude management dvSwitches by name. I am trying to figure out how to exclude management dvSwitches (those with no VM dvPortgroups) via code instead. For example:

  • dvswitch1
    • dvportgroup-mgmt - vmk0
    • dvportgroup-vmotion - vmk1
  • dvswitch2
    • dvportgroup-vm
    • dvportgroup-vm2

 

In this example I only want it to return dvswitch2.
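
For illustration, a sketch of the kind of filter I'm after -- keep only the dvSwitches whose port groups back no VMkernel adapters (untested; switch and port group names are from the example above):

# Port groups that back a VMkernel adapter anywhere in the environment
$vmkPortGroups = Get-VMHostNetworkAdapter -VMKernel |
    Select-Object -ExpandProperty PortGroupName -Unique

# dvSwitches with no VMkernel-backed port groups, i.e. VM-only switches
Get-VDSwitch | Where-Object {
    $pgNames = $_ | Get-VDPortgroup | Select-Object -ExpandProperty Name
    -not ($pgNames | Where-Object { $vmkPortGroups -contains $_ })
}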

macOS lost all accounts with installation of Big Sur


I (accidentally) installed Big Sur over a Mojave installation and seem to have lost all the accounts on the machine. I didn't have the latest version of Workspace One. When Big Sur boots I am presented with a login screen that asks for the "Setup User". Using my domain user id doesn't satisfy it and even my local IT folks can't get in using the administrator accounts that they set up on the machine originally.


UDP Packets dropping from host to VM


My setup:

 

Workstation Pro 15.5

 

Host

CPUs: 16 cores, 32 threads

RAM: 128 GB

OS: Cent OS 7.6.1810

5 NICs - 2x40 Gbps, 2x10Gbps, 1x10Gbps

 

VM

CPUs: 16 threads

RAM: 64 GB

OS: Red Hat Enterprise 7.8

2 VNICs - 2x10 Gbps

 

I'm trying to send UDP traffic from the host to the VM with the iperf command at 850 Mbps. As the attached screenshots show, I am dropping about 2-3% of my packets. I have MTU=9000 set in the ifcfg files for all the VNICs and NICs.

 

What configurations do I need to change to stop packet loss between Host and VM?

VMware workstation 8 poor network performance between Ubuntu Host and VM guest


I have VMware Workstation 8.0.2 installed on a host with

- OS: Ubuntu 12.04, 64-bit

- 32 CPUs

- 64 GB of RAM

- 4 x Intel NICs, each 1 Gbps

 

My guest VM is

- 32-bit Ubuntu 10.10

- 4 GB of RAM

- 4 CPUs

- 2 virtual NICs at speeds of 1000 Mbps (I checked with ethtool)

- VMware client tools installed (and reinstalled several times)

 

I get a maximum of 120 Mbps between the guest and the host (measured using many tools, e.g. iperf and cp over NFSv4, from and to the guest). On the other hand, the throughput from the host (where Workstation 8 is installed) to another host on the same network is 900+ Mbps. BTW, pinging from the host to the guest is about 600 ms, versus around 95 ms to another host on the network. In other words, the VM guest gets about 1/6 of the performance of talking to another host on the network.

 

Most solutions on the net (including the one below)

http://communities.vmware.com/thread/105389

recommend turning off TCP offloading; my output on the host and guest is as follows:

# host, same output for eth1, eth2 and eth3

sudo ethtool -k eth0

Offload parameters for eth0:

rx-checksumming: off

tx-checksumming: off

scatter-gather: off

tcp-segmentation-offload: off

udp-fragmentation-offload: off

generic-segmentation-offload: off

generic-receive-offload: on

large-receive-offload: off

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off

receive-hashing: on

 

#Guest

rx-checksumming: on

tx-checksumming: off

scatter-gather: off

tcp-segmentation-offload: off

udp-fragmentation-offload: off

generic-segmentation-offload: off

generic-receive-offload: off

large-receive-offload: off

 

 

I looked for solutions on the net and couldn't find any. Any ideas?

Workstation 15.5.5 performance hit


Just a couple of hours ago I updated from 15.5.1 (?) to version 15.5.5 of Workstation Pro.

I've been excited about this update; especially being able to use WSL2 on my developer system is a plus.

 

However, immediately after updating, I noticed a quite dramatic performance hit.

My mouse cursor doesn't even move smoothly anymore; it hangs and stutters a lot.

I haven't installed Hyper-V related services yet.

 

Anyone else experiencing this?

 

System summary:

i9-10940X, 64GB RAM, NVIDIA GeForce RTX 2070.

Running Windows 10 1909 still.

Trying out PowerCLI on Kubernetes


PowerCLI is handy to have available when operating vSphere.

PowerCLI is a PowerShell-based tool, so it used to require Windows, but these days it also runs on PowerShell for Linux, and a Docker container image (vmware/powerclicore) is available as well.

 

The PowerCLI Core container image on Docker Hub

https://hub.docker.com/r/vmware/powerclicore

 

So this time I'll try starting a PowerCLI container on Kubernetes.

 

Environment for this post

For Kubernetes, I'm using a Supervisor Cluster provided by vSphere with Kubernetes, and the container will be started as a vSphere Pod.

 

The Supervisor Cluster environment was built as described in this earlier post:

Building a vSphere with Kubernetes lab environment (summary)

 

With kubectl, I have already logged in to the Supervisor Cluster as described here:

Building a vSphere with Kubernetes lab environment, Part 11: starting vSphere Pods with kubectl

 

Starting PowerCLI on Kubernetes

Now let's start it as a Pod with the kubectl run command.

  • The Pod is named "pcli01".
  • Specifying "-it" attaches to the container so commands can be run interactively.
  • The vmware/powerclicore container image is used. The image is pulled from Docker Hub, so Internet access is required.
  • Adding the "--restart=Never" option to kubectl run starts it as a Pod (rather than as a Deployment resource).
  • Specifying "--rm" deletes the Pod automatically when the container exits.

 

Once the Pod starts, you are dropped straight into the PowerShell prompt inside the container.

$ kubectl run pcli01 --image=vmware/powerclicore -it --restart=Never --rm

If you don't see a command prompt, try pressing enter.

PowerShell 7.0.0

Copyright (c) Microsoft Corporation. All rights reserved.

 

https://aka.ms/powershell

Type 'help' to get help.

 

PS /root>

 

PowerCLI can now be used.

(Note: I'm ignoring the CEIP warning this time.)

PS /root> Connect-VIServer -Force -Server lab-vc-41.go-lab.jp

WARNING: Please consider joining the VMware Customer Experience Improvement Program, so you can help us make PowerCLI a better product. You can join using the following command:

 

Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $true

 

VMware's Customer Experience Improvement Program ("CEIP") provides VMware with information that enables VMware to improve its products and services, to fix problems, and to advise you on how best to deploy and use our products.  As part of the CEIP, VMware collects technical information about your organization's use of VMware products and services on a regular basis in association with your organization's VMware license key(s).  This information does not personally identify any individual.

 

For more details: type "help about_ceip" to see the related help article.

 

To disable this warning and set your preference use the following command and restart PowerShell:

Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $true or $false.

 

Specify Credential

Please specify server credential

User: administrator@vsphere.local   (enter the user name and password here)

Password for user administrator@vsphere.local: ********

 

Name                           Port  User

----                           ----  ----

lab-vc-41.go-lab.jp            443   VSPHERE.LOCAL\Administrator

 

PS /root>

 

I was able to retrieve information from the connected vCenter.

Incidentally, this time I'm connected to the vCenter that manages the Supervisor Cluster, so the VM list also shows the vSphere Pod (pcli01).

PS /root> Get-Cluster

 

Name                           HAEnabled  HAFailover DrsEnabled DrsAutomationLe

                                          Level                 vel

----                           ---------  ---------- ---------- ---------------

wcp-cluster-41                 True       1          True       FullyAutomated

 

PS /root> Get-VMHost

 

Name                 ConnectionState PowerState NumCpu CpuUsageMhz CpuTotalMhz

----                 --------------- ---------- ------ ----------- -----------

lab-wcp-esxi-41.go-… Connected       PoweredOn       2        4322        6000

lab-wcp-esxi-42.go-… Connected       PoweredOn       2        1526        4608

lab-wcp-esxi-43.go-… Connected       PoweredOn       2        1990        4608

 

PS /root> Get-VM

 

Name                 PowerState Num CPUs MemoryGB

----                 ---------- -------- --------

SupervisorControlPl… PoweredOn  2        8.000

SupervisorControlPl… PoweredOn  2        8.000

SupervisorControlPl… PoweredOn  2        8.000

pcli01               PoweredOn  1        0.500

 

The vSphere Client also confirms that the PowerCLI vSphere Pod has been started.

k8s-powercli-01.png

 

With PowerCLI, DNS name resolution is important when connecting to vCenter Server.

Checking the DNS server settings inside the container, the address of the Kubernetes cluster DNS server is configured, which is not the DNS server of this lab.

PS /root> cat /etc/resolv.conf

nameserver 10.96.0.254

search sc-ns-01.svc.cluster.local svc.cluster.local cluster.local

 

A Pod started this way gets "dnsPolicy: ClusterFirst" by default.

With this policy, names that cannot be resolved by the Kubernetes cluster DNS are still resolved by the upstream (external) DNS servers.

 

The following is a useful reference on dnsPolicy:

https://kubernetes.io/ja/docs/concepts/services-networking/dns-pod-service/ 

 

Incidentally, on a Supervisor Cluster the external DNS server addresses are the ones configured when "Workload Management" was enabled on the vSphere cluster.

This setting can also be checked from the command line, for example with kubectl get pods -o yaml, or with the --dry-run -o yaml options when starting the Pod.

$ kubectl run pcli01 --image=vmware/powerclicore --restart=Never --dry-run -o yaml

apiVersion: v1

kind: Pod

metadata:

  creationTimestamp: null

  labels:

    run: pcli01

  name: pcli01

spec:

  containers:

  - image: vmware/powerclicore

    name: pcli01

    resources: {}

  dnsPolicy: ClusterFirst

  restartPolicy: Never

status: {}

 

The vSphere Pod's YAML can also be viewed in the vSphere Client, for example from the Pod's "Summary" → "Metadata".

k8s-powercli-02.png

 

dnsPolicy is set to ClusterFirst.

k8s-powercli-03.png

 

Incidentally, exiting from this Pod automatically stops and deletes it (because of the --rm option).

PS /root> exit

pod "pcli01" deleted

$

 

Starting a Pod with explicit DNS server settings

The Pod's DNS server settings can also be specified in YAML at startup. Even with kubectl run, dnsPolicy and dnsConfig can be overridden at Pod startup as shown below.

 

Here I specify 192.168.1.101 and 192.168.1.102, the DNS servers of my home lab, and the JSON part is formatted for readability.

kubectl run pcli01 --image=vmware/powerclicore -it --restart=Never --rm --overrides='

{

  "apiVersion": "v1",

  "spec": {

    "dnsPolicy": "None",

    "dnsConfig": {

      "nameservers": ["192.168.1.101", "192.168.1.102"],

      "searches": ["go-lab.jp"]

    }

  }

}'

 

Actually running it looks like the following. Checking the /etc/resolv.conf file in the started Pod confirms that the DNS server settings have been changed.

$ kubectl run pcli01 --image=vmware/powerclicore -it --restart=Never --rm --overrides='

> {

>   "apiVersion": "v1",

>   "spec": {

>     "dnsPolicy": "None",

>     "dnsConfig": {

>       "nameservers": ["192.168.1.101", "192.168.1.102"],

>       "searches": ["go-lab.jp"]

>     }

>   }

> }'

If you don't see a command prompt, try pressing enter.

PowerShell 7.0.0

Copyright (c) Microsoft Corporation. All rights reserved.

 

https://aka.ms/powershell

Type 'help' to get help.

 

PS /root> cat /etc/resolv.conf

nameserver 192.168.1.101

nameserver 192.168.1.102

search go-lab.jp

PS /root>

 

The Pod YAML shown in the vSphere Client likewise reflects the changed DNS settings.

k8s-powercli-04.png

 

I think this could also be useful for testing or demoing vSphere with Kubernetes. And since it doesn't rely on anything specific to the Supervisor Cluster, the PowerCLI container should start the same way on any Kubernetes...

 

That's it for trying out PowerCLI on Kubernetes.

Cannot remove file to reduce space of hard disk in vmware on Mac


Hi guy,

 

My name is Nam.

Please help me solve this problem.

 

I am running Windows 10 on VMware Fusion.

Although I have only installed two small applications, my disk usage is growing very fast.

I have already removed 180 GB of VM bundle files.

When I opened Show Package Contents, I saw many big files that were created but never automatically removed.

How can I remove or clear those files to free up space on my hard disk?

 

Thanks.

Screen Shot 2020-08-28 at 17.46.59.png

VMFS datastore is not mounted after moving an HDD to a different ESXi host


I took the HDD out of a broken old ESXi 6.7 box and moved it to a new ESXi 7 environment. The HDD itself is physically visible in the VMware Host Client, but the VMFS6 volume is not recognized as a datastore.

 

Rather than wanting to keep using the datastore as-is, I want to move the data on it over to the new environment, but since the contents are not recognized I'm stuck.

 

Is there a way to get the datastore recognized so that I can copy the data off it?
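
For reference, one thing worth checking is whether the host sees the volume as an unresolved snapshot copy. A PowerCLI sketch via Get-EsxCli (the host name and volume label are placeholders; check the CreateArgs() output for the exact argument names):

# List VMFS volumes the host detects as unresolved/snapshot copies
$esxcli = Get-EsxCli -VMHost (Get-VMHost 'esxi7-host') -V2
$esxcli.storage.vmfs.snapshot.list.Invoke()

# If the old datastore shows up in the list, mount it keeping its existing signature
$arguments = $esxcli.storage.vmfs.snapshot.mount.CreateArgs()
$arguments.volumelabel = 'old-datastore'   # placeholder label from the list output
$esxcli.storage.vmfs.snapshot.mount.Invoke($arguments)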

Upgrading from Mac OS Mojave to Catalina and VM 10.1.5 to 11.5.6 -- W7 VM Not Responding to Keystrokes or Mouse Clicks


I moved to a new MacBook Pro and in the process upgraded from Mojave to the Catalina OS.  I initialized the new machine from my Time Machine backup.

 

When I tried VMware Fusion (10.1.5) on the new machine, I ended up with a black screen when I tried to access my Windows 7 VM. Then VMware became unresponsive and said that components were unavailable -- I gather it wasn't compatible with Catalina.

 

So I downloaded the latest VM Fusion 11.5.6 (which I'll be happy to purchase outright or via an upgrade).  I can now see my W7 desktop and applications open (for instance, a Skype upgrade began automatically).  But I can't seem to use my mouse or keyboard in this environment.  (However, my keyboard does work when typing in my W7 password at the logon screen).

 

Any thoughts?

 

Harry

 

 

P.S. In Fusion 10.1.5, there is a small control element at the bottom of the screen that is used to map USB devices and networking settings -- I'm curious how that functionality is exposed in 11.5.6.


How to optionally add disks


I need to add up to 4 disks of different sizes.  I tried setting the count to ${input.optionalDisk1Size == 0? 0 : 1} but that stops me from adding any other disks to the blueprint.  I can't seem to figure out how to add UP to 4 disks where the sizes vary.  It seems that I need 4 separate disks on the blueprint that may or may not be provisioned.

 

Any suggestions gratefully accepted.

Carl L.

Sharing esata disk (Windows LAN)


I currently use Windows 10 running several VMs in Workstation; the CPU is an i7-3930K, i.e. no VT-d. The PC also has a 24 TB RAID5 external array (Areca) connected via eSATA and shared as a read/write folder in Windows, so that other PCs can use it as a file server. I read that switching to ESXi allows better memory management and therefore better VM performance. I have the following questions:

1. Will the performance difference be noticeable, especially if I increase the number of VMs to 10+?
2. Will it be possible to share the eSATA disk from one of the VMs, or by means of ESXi, so that it is visible to other Windows machines as a shared folder?

3. Since the PC is pretty old (SaberTooth X79), should I use an older version of ESXi for driver compatibility, or will 7.0 work just fine?

 

thx!

How can I take a backup of a virtual machine?


I backed up a virtual machine with the export function. The "Download files" guidance shows two vmdk files, but only one vmdk file is actually created (only xxxx-0.vmdk is created; xxxx-1.vmdk is not, and the task stays at 50% with no further progress). What could be the cause?

Also, please tell me how I can back this virtual machine up.

For reference:

The virtual machine in question has two disks (the first disk holds the C: drive, the second holds the D: and E: drives), and only the xxxx-0.vmdk file for the C: drive is created (the ovf file is created).
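
For reference, a PowerCLI sketch of an alternative export path (the VM name and destination path are placeholders; the VM should be powered off for Export-VApp):

# Export the VM to an OVA file with PowerCLI
Export-VApp -VM (Get-VM -Name 'VMNAME') -Destination 'C:\backup' -Format Ova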

ESXi 6.7 Multiple Monitors for VMs


Hello everyone,

 

I have searched a lot, but I can't find how to use multiple monitors for a VM in ESXi 6.7.

 

I get the message "ESXi does not support the multiple monitors feature in shared or remote virtual machines" or sometimes " The virtual machin's Display setting must be 'Use host settings' or specify more than one monitor if a suffciently-high maximum resolution".

 

Has anyone found a solution for that?

 

Here is my configuration:

 

Thanks

FT with 2 nodes and 1 SAN


Hi, I have read some threads in this forum, but could not find answers to my questions. I'd like to configure a single FT VM on 2 nodes and 1 SAN.

 

Question 1: if node 1 (which happens to be the primary node running the VM) is down for a few minutes or hours, node 2 will take over the FT VM, but it cannot keep the VM redundant on its own. What happens when node 1 comes back online (unchanged) and rejoins the cluster - is this allowed? If node 1 is allowed to rejoin, will node 2 start replicating the FT VM back to node 1, to the same secondary VM or to a new one?

 

Question 2 is the opposite of case 1: node 2 (the secondary node, with no VM running) fails, so no failover occurs. If node 2 is allowed to rejoin the cluster, will node 1 start replicating the FT VM to node 2 again, to the same secondary VM or to a new one?

 

Question 3: What triggers a failover? Does it occur only on a complete hardware failure, or can it be triggered by simpler events such as:

  • one local disk failure, even though RAID still kicks in, i.e. storage is still online
  • RAM issue
  • any one network interface is down
  • one fan failure, etc.

Thanks.

 

BR, Andreas
