Channel: VMware Communities : All Content - All Communities

Get the registered vCenter info from the datastore VM folder files


Hi Team,

I have VMs from multiple vCenters on a single datastore, because the datastore is presented to multiple vCenters.

Is there any script available to determine the registered vCenter for each VM from the files in its VM folder (.vmx, .vmdk, etc.)?

 

How can I get the registered vCenter from the VM files on the datastore?
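
To my knowledge, the files in the VM folder do not record which vCenter the VM is registered to, so one practical approach is to query each vCenter and match VMs by datastore. A minimal PowerCLI sketch, assuming the vCenter names and datastore name below are placeholders you replace with your own:

# Map each VM on the shared datastore to the vCenter it is registered in.
# Assumptions: 'vc01.example.com'/'vc02.example.com' and 'SharedDS01' are placeholders.
$vCenters = 'vc01.example.com', 'vc02.example.com'
$dsName = 'SharedDS01'
$cred = Get-Credential
$report = foreach ($vc in $vCenters) {
    $srv = Connect-VIServer -Server $vc -Credential $cred
    $ds = Get-Datastore -Name $dsName -Server $srv -ErrorAction SilentlyContinue
    if ($ds) {
        # Every VM found through this connection is registered to this vCenter.
        Get-VM -Datastore $ds -Server $srv | Select-Object Name, @{N='vCenter';E={$vc}}
    }
    Disconnect-VIServer -Server $srv -Confirm:$false
}
$report | Sort-Object Name | Format-Table -AutoSize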

 

Thanks,


App Volumes 2.13.2.13 - Storage group cannot copy file


Hi all,

 

We are running App Volumes 2.13.2.13 with Horizon View 7.4 in instant-clone mode.

We want to use the storage group feature to load-balance our different volumes.

We have 2 datacenters with 4 ESXi hosts each. Each datacenter is a Dell VRTX hyperconverged box.

All of this is managed from a VCSA vCenter 6.5.0.15000. The VCSA has access to ALL the datastores enabled in each datacenter.

When setting up the storage group, only some volumes are replicated; for the rest we get the following message in the VCSA, on the datastore's Events tab (see attached file).

 

Could you help us?

 

Thank you in advance

 

Regards

Humberto

 

copy-error.png

Where can I download the VMmark User's Guide?


Even Google can't find a download link. Where is it?

QNAP branded X550


Hello

 

After searching the compatibility matrix, I found the Intel X550-T2 works with multiple versions of vSphere.  Sadly, the QNAP version of the card does not appear (different VID?).  Has anyone had any success installing this card and making it fully functional?

 

QNAP 10G2T-X5502T installed in a Dell R710.

vSphere 6 build 5112508
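
In case it helps while you wait for answers: a quick, hedged way to check whether ESXi has bound a driver to the card at all is esxcli through PowerCLI (the host name below is a placeholder):

# Assumption: 'esxi01.example.com' stands in for the R710's name; connect with Connect-VIServer first.
$esxcli = Get-EsxCli -VMHost 'esxi01.example.com' -V2
# Lists only the NICs ESXi has claimed with a driver; if the QNAP card is
# missing here, no bundled driver matched its PCI vendor/device ID.
$esxcli.network.nic.list.Invoke() | Select-Object Name, Driver, Description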

 

Thank you

stretched cluster - full site maintenance


Hello,

 

I'm running a vSAN 6.5 stretched cluster and I need to put an entire site into maintenance for one day for power maintenance.

 

Our cluster has 5 nodes on each site with FTT=1 and a third site running the witness. I'm going to put the preferred site into maintenance. I didn't find any official doc on the best way to do it, but I found a couple of posts with some answers:

Doing maintenance on a Two-Node (Direct Connect) vSAN configuration - Yellow Bricks

Re: How to safely shut down one side of a 2+2 node stretched cluster

 

I'm planning to do this:

- check vSAN health

- change the preferred site (fault domain) to the one that will remain UP

- temporarily disable DRS and manually move all VMs to the remaining site/hosts (doing it manually to keep DRS from moving the VMs to hosts in the same site)

- place all hosts on the site that will shut down into maintenance mode using "ensure accessibility" (see the PowerCLI sketch after this list)

- re-enable DRS

- shut down the hosts
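
For the maintenance-mode step, a minimal PowerCLI sketch, assuming the cluster name and a host-name pattern for the site going down (both are placeholders):

# Assumptions: 'vSAN-Cluster' and the 'site-a-*' naming are placeholders for your environment.
$siteHosts = Get-Cluster 'vSAN-Cluster' | Get-VMHost | Where-Object { $_.Name -like 'site-a-*' }
foreach ($esx in $siteHosts) {
    # 'EnsureAccessibility' corresponds to the "ensure accessibility" option in the UI.
    Set-VMHost -VMHost $esx -State Maintenance -VsanDataMigrationMode EnsureAccessibility
}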

 

Does it make sense?

 

Do you know whether vSAN will also re-apply the FTT=1 policy and resync the VMs within the surviving site after 1 hour? If so, I guess it will kill performance, as we have over 60 TB of used space (shown in the cluster vSAN capacity overview), so around 30 TB to resync? I currently have 108 TB of free space in the whole cluster, so 54 TB with half the nodes down; that should be enough space to resync locally if needed.

 

Thank you !

Creating Routed Org Network in vCD 9.0.0.2


Hi guys.

 

I can't create a Routed Org Network with PowerCLI.

 

PowerCLI version is 6.5.0.234 and vCD for SP version is 8.20.0.2.

 

I'm connected to the vCD server and all the objects exist: Org "Test", OrgVDC "Test-VDC" and EdgeGW "Test-EdgeGW".

But the last one (EdgeGW) isn't used in the script, which is strange, because when creating a routed org network in the GUI we choose an existing Edge Gateway... (see pic).

I suppose the script is out of date...

RoutedOrgNetwork.png

 

PowerCLI C:\> $OrgName = "Test"

PowerCLI C:\> $Org = Get-Org -Name $OrgName

PowerCLI C:\> $OrgVDCName = "$OrgName-VDC"

PowerCLI C:\> $OrgVDC = Get-OrgVdc -Name $OrgVDCName

 

PowerCLI C:\> $edgeGateway = Search-Cloud -QueryType EdgeGateway -Name $orgName | Get-CIView | where {$_.name -like "$orgName*"}

PowerCLI C:\> $ExNetnetwork = New-Object VMware.VimAutomation.Cloud.Views.OrgVdcNetwork

PowerCLI C:\> $ExNetnetwork.EdgeGateway = $edgeGateway.Id

PowerCLI C:\> $ExNetnetwork.isShared = $false

PowerCLI C:\> $ExNetnetwork.Configuration = New-Object VMware.VimAutomation.Cloud.Views.NetworkConfiguration

PowerCLI C:\> $ExNetnetwork.Name = "$OrgName-Org-Net01"

PowerCLI C:\> $ExNetnetwork.Configuration.IpScopes = New-Object VMware.VimAutomation.Cloud.Views.IpScopes

PowerCLI C:\> $ExNetnetwork.Configuration.FenceMode = "natRouted"

PowerCLI C:\> $IpScope = New-Object VMware.VimAutomation.Cloud.Views.IpScope

PowerCLI C:\> $IpScope.Gateway = "192.168.100.1"

PowerCLI C:\> $IpScope.Netmask = "255.255.255.0"

PowerCLI C:\> $IpScope.Dns1 = "8.8.8.8"

PowerCLI C:\> $IpScope.IpRanges = New-Object VMware.VimAutomation.Cloud.Views.IpRanges

PowerCLI C:\> $IpScope.IpRanges.IpRange = New-Object VMware.VimAutomation.Cloud.Views.IpRange

PowerCLI C:\> $IpScope.IpRanges.IpRange[0].StartAddress = "192.168.100.2"

PowerCLI C:\> $IpScope.IpRanges.IpRange[0].EndAddress = "192.168.100.50"

PowerCLI C:\> $ExNetnetwork.Configuration.IpScopes.IpScope += $IpScope

PowerCLI C:\> $orgVdc.ExtensionData.CreateNetwork($ExNetnetwork)

 

Exception calling "CreateNetwork" with "1" argument(s): "The server returned 'Server Error' with the status code 500 - InternalServerError."

At line:1 char:1

+ $orgVdc.ExtensionData.CreateNetwork($ExNetnetwork)

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  + CategoryInfo : NotSpecified: (:) [], MethodInvocationException

  + FullyQualifiedErrorId : CloudException

 

What's wrong? Can anybody give me a working example of creating a Routed Org Network with PowerCLI?
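
Not a confirmed fix, but one hedged guess worth trying: the vCloud API schema treats IsInherited as a required element of IpScope, and a missing required element can surface as a generic 500. A variation of the IP-scope portion with the flags set explicitly:

PowerCLI C:\> $IpScope = New-Object VMware.VimAutomation.Cloud.Views.IpScope
PowerCLI C:\> $IpScope.Gateway = "192.168.100.1"
PowerCLI C:\> $IpScope.Netmask = "255.255.255.0"
PowerCLI C:\> $IpScope.Dns1 = "8.8.8.8"
PowerCLI C:\> $IpScope.IsInherited = $false   # assumption: required by the IpScope schema
PowerCLI C:\> $IpScope.IsEnabled = $true      # assumption: enable the scope explicitly

It may also help to capture the XML the GUI sends for the same operation and compare it with what CreateNetwork produces.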

PowerCLI for SCSI Unmap


Hi,

 

I am getting the following error while running a PowerCLI SCSI unmap script. I assume the unmap is still running in the background, but I would like to know what is causing this error.

 

The operation has timed out

At <Location>.ps1:21 char:1

+ $esxcli.storage.vmfs.unmap.Invoke($arguments)

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    + CategoryInfo          : OperationStopped: (:) [], ViError

    + FullyQualifiedErrorId : VMware.VimAutomation.Sdk.Types.V1.ErrorHandling.VimException.ViError

 

Any input will help. ESXi is running 6.0 Update 3 and PowerCLI is 6.5.

Out of 85 datastores, it succeeds on 74 and fails on 11 with the above error.
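
If the unmap keeps running server-side, the error is likely PowerCLI's client-side web operation timeout (300 seconds by default) rather than the unmap itself failing. A sketch of raising it for the session before invoking unmap; the datastore label is a placeholder:

# Raise the client-side timeout (3600 s here is an arbitrary, generous value).
Set-PowerCLIConfiguration -WebOperationTimeoutSeconds 3600 -Scope Session -Confirm:$false
$esxcli = Get-EsxCli -VMHost $vmhost -V2
$arguments = $esxcli.storage.vmfs.unmap.CreateArgs()
$arguments.volumelabel = 'Datastore01'   # assumption: your datastore label
$esxcli.storage.vmfs.unmap.Invoke($arguments)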

Trying out vSphere Docker Volume Service (vDVS) in a vSAN environment


This time, I'll use vSphere Docker Volume Service (vDVS) to create a Docker volume from a vSAN datastore.

 

In Docker, deleting a container normally deletes its data along with it. However, by using a mechanism called Docker volumes, data can be persisted regardless of whether the container is deleted.

 

There is a project called Project Hatchway that provides this kind of persistent storage (volumes) to Docker in vSphere environments. Hatchway exposes virtual disks on vSphere datastores as volumes to Docker, Kubernetes, and so on.

Project Hatchway by VMware®

 

There is also documentation on VMware's Storage Hub.

 

Introduction to Project Hatchway

https://storagehub.vmware.com/t/vsphere-storage/project-hatchway/

 

Now let's set up vDVS. It requires setup in two places: on the ESXi hosts, and in the VMs that will serve as Docker hosts.

  1. Install the VIB on each ESXi host.
  2. Install the Docker plugin on each Docker host (guest OS).

 

The environment this time.

A vCenter 6.5 U1 / ESXi 6.5 U1 environment is already set up.

 

The ESXi hosts in this cluster have vSAN and NFS datastores attached:

  • vSAN datastore: vsanDatastore-04
  • NFS datastore: ds-nfs-vmdk-01

vdvs-vsan-01.png

 

There are two ESXi hosts (hv-n41 / hv-n42), each running one VM (vm41 / vm42).

vdvs-vsan-02.png

 

The ESXi version used here is 6.5 U1.

[root@hv-n41:~] vmware -vl

VMware ESXi 6.5.0 build-7388607

VMware ESXi 6.5.0 Update 1

 

For the Docker host (guest OS), I chose Photon OS 2.0.

root@vm41 [ ~ ]# cat /etc/photon-release

VMware Photon OS 2.0

PHOTON_BUILD_NUMBER=304b817

root@vm41 [ ~ ]# uname -r

4.9.90-1.ph2-esx

 

The Docker version used here is 17.06.

root@vm41 [ ~ ]# docker version

Client:

Version:      17.06.0-ce

API version:  1.30

Go version:   go1.8.1

Git commit:   02c1d87

Built:        Thu Oct 26 06:33:23 2017

OS/Arch:      linux/amd64

 

Server:

Version:      17.06.0-ce

API version:  1.30 (minimum version 1.12)

Go version:   go1.8.1

Git commit:   02c1d87

Built:        Thu Oct 26 06:34:46 2017

OS/Arch:      linux/amd64

Experimental: false

 

Installing the VIB on ESXi.

Install the VIB on each ESXi host (two of them in this case).

 

The vDVS VIB can be downloaded from Bintray.

The latest version is available from the URL below.

https://bintray.com/vmware/vDVS/VIB/_latestVersion

 

Version 0.21.1 is used this time.

https://bintray.com/vmware/vDVS/VIB/0.21.1

https://bintray.com/vmware/vDVS/download_file?file_path=VDVS_driver-0.21.1-offline_bundle-7812185.zip

 

I placed the VIB offline bundle on the vSAN datastore.

Create a directory on the vSAN datastore with the osfs-mkdir command...

[root@hv-n41:~] /usr/lib/vmware/osfs/bin/osfs-mkdir /vmfs/volumes/vsanDatastore-04/vib

 

...and place the VIB offline bundle "VDVS_driver-0.21.1-offline_bundle-7812185.zip" there.

The VIB's name is "esx-vmdkops-service".

[root@hv-n41:~] ls /vmfs/volumes/vsanDatastore-04/vib

VDVS_driver-0.21.1-offline_bundle-7812185.zip

[root@hv-n41:~] esxcli software sources vib list -d /vmfs/volumes/vsanDatastore-04/vib/VDVS_driver-0.21.1-offline_bundle-7812185.zip

Name                 Version             Vendor  Creation Date  Acceptance Level  Status

-------------------  ------------------  ------  -------------  ----------------  ------

esx-vmdkops-service  0.21.c420818-0.0.1  VMWare  2018-02-13     VMwareAccepted    New

 

Install it with esxcli.

[root@hv-n41:~] esxcli software vib install -d /vmfs/volumes/vsanDatastore-04/vib/VDVS_driver-0.21.1-offline_bundle-7812185.zip

Installation Result

   Message: Operation finished successfully.

   Reboot Required: false

   VIBs Installed: VMWare_bootbank_esx-vmdkops-service_0.21.c420818-0.0.1

   VIBs Removed:

   VIBs Skipped:

 

Here is the information for the installed VIB.

[root@hv-n41:~] esxcli software vib get -n esx-vmdkops-service

VMWare_bootbank_esx-vmdkops-service_0.21.c420818-0.0.1

   Name: esx-vmdkops-service

   Version: 0.21.c420818-0.0.1

   Type: bootbank

   Vendor: VMWare

   Acceptance Level: VMwareAccepted

   Summary: [Fling] ESX-side daemon supporting basic VMDK operations requested by a guest

   Description: Executes VMDK operations requested by an in-the-guest application.

   ReferenceURLs:

   Creation Date: 2018-02-13

   Depends: esx-version >= 6.0.0

   Conflicts:

   Replaces:

   Provides:

   Maintenance Mode Required: False

   Hardware Platforms Required:

   Live Install Allowed: True

   Live Remove Allowed: True

   Stateless Ready: False

   Overlay: False

   Tags:

   Payloads: vmdkops

 

Restart hostd.

[root@hv-n41:~] /etc/init.d/hostd restart

watchdog-hostd: Terminating watchdog process with PID 67445

hostd stopped.

hostd started.

 

Installing the Docker plugin on the Docker host (guest OS).

Install the Docker plugin on each Docker host (two of them in this case).

 

Initially, no Docker plugin is registered.

root@vm41 [ ~ ]# docker plugin ls

ID                  NAME                DESCRIPTION         ENABLED

root@vm41 [ ~ ]#

 

Install the Docker plugin from the Docker Store.

root@vm41 [ ~ ]# docker plugin install --grant-all-permissions --alias vsphere vmware/vsphere-storage-for-docker:latest

latest: Pulling from vmware/vsphere-storage-for-docker

05da47b7b6ce: Download complete

Digest: sha256:a20bcdfef99ebf017bf3cabd815f256430bf56d8cb7881048150e7c918e0c4c6

Status: Downloaded newer image for vmware/vsphere-storage-for-docker:latest

Installed plugin vmware/vsphere-storage-for-docker:latest

 

The plugin has been installed.

root@vm41 [ ~ ]# docker plugin ls

ID                  NAME                DESCRIPTION                           ENABLED

5bb66ae50bbd        vsphere:latest      VMWare vSphere Docker Volume plugin   true

 

Create a configuration file.

Its contents are mainly logging settings; vDVS will write logs even without this file.

cat << EOF > /etc/vsphere-storage-for-docker.conf

{

    "Driver": "vsphere",

    "MaxLogAgeDays": 28,

    "MaxLogFiles": 10,

    "MaxLogSizeMb": 10,

    "LogPath": "/var/log/vsphere-storage-for-docker.log",

    "LogLevel": "info",

    "GroupID": "root"

}

EOF

 

Using a volume.

Let's create a Docker volume on the first Docker host, "vm41".

 

Create a volume with the docker volume create command, specifying the vsphere driver. This environment has both vSAN and NFS datastores; when no datastore is explicitly specified, the volume is created on the vSAN datastore.

root@vm41 [ ~ ]# docker volume create --driver=vsphere --name=vol01 -o size=1gb

vol01

root@vm41 [ ~ ]# docker volume ls

DRIVER              VOLUME NAME

vsphere:latest      vol01@vsanDatastore-04

 

Next, create a volume specifying the NFS datastore.

root@vm41 [ ~ ]# docker volume create --driver=vsphere --name=vol02@ds-nfs-vmdk-01 -o size=1gb

vol02@ds-nfs-vmdk-01

root@vm41 [ ~ ]# docker volume ls

DRIVER              VOLUME NAME

vsphere:latest      vol01@vsanDatastore-04

vsphere:latest      vol02@ds-nfs-vmdk-01

 

Files have been created on each datastore.

[root@hv-n41:~] ls -l /vmfs/volumes/vsanDatastore-04/dockvols/_DEFAULT/

total 8

-rw-------    1 root     root          4096 Apr 16 16:52 vol01-1d8b22d5cfe8e28a.vmfd

-rw-------    1 root     root           586 Apr 16 16:52 vol01.vmdk

[root@hv-n41:~] ls -l /vmfs/volumes/ds-nfs-vmdk-01/dockvols/_DEFAULT/

total 33436

-rw-------    1 root     root          4096 Apr 16 17:06 vol02-256a299ab1ad11b3.vmfd

-rw-------    1 root     root     1073741824 Apr 16 17:06 vol02-flat.vmdk

-rw-------    1 root     root           558 Apr 16 17:06 vol02.vmdk

 

Attach the volume created on the vSAN datastore and start a container. The container image is the official Photon OS image from Docker Hub. The volume "vol01" is mounted at the /dir01 directory.

root@vm41 [ ~ ]# docker container run -it -v vol01@vsanDatastore-04:/dir01 photon

root [ / ]# uname -n

e58556770d1b

root [ / ]# cat /etc/photon-release

VMware Photon OS 2.0

PHOTON_BUILD_NUMBER=304b817

 

Running df inside the container shows that the volume is attached.

root [ / ]# df -h

Filesystem                                      Size  Used Avail Use% Mounted on

overlay                                          16G  420M   15G   3% /

tmpfs                                           122M     0  122M   0% /dev

tmpfs                                           122M     0  122M   0% /sys/fs/cgroup

/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0 976M  2.5M  973M   1% /dir01

/dev/root                                        16G  420M   15G   3% /etc/resolv.conf

shm                                              64M     0   64M   0% /dev/shm

tmpfs                                           122M     0  122M   0% /sys/firmware

 

As a test, write some data to /dir01, the directory where the volume is mounted.

root [ / ]# echo "yo-soro-!" > /dir01/test.f

root [ / ]# cat /dir01/test.f

yo-soro-!

 

Exit the container and return to the Docker host. The volume is attached to the Docker host as /dev/sdb.

root@vm41 [ ~ ]# LANG=C lsblk

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sda      8:0    0   16G  0 disk

|-sda1   8:1    0    3M  0 part

`-sda2   8:2    0   16G  0 part /

sdb      8:16   0    1G  0 disk /var/lib/docker/plugins/5bb66ae50bbdd237a3205a6051e6f51c88042f1a71c44e49170fef601d9ab9ab/propagated-mount/vol01@vsanDatastore-04

sr0     11:0    1 1024M  0 rom

 

From the VM's perspective, you can also see that "Hard disk 2", which corresponds to the Docker volume, is on the vSAN datastore. The "VM storage policy" is blank because none was specified when the volume was created.

vdvs-vsan-03.png

 

Using the volume from another Docker host.

Let's confirm that the volume does not disappear even when the container is deleted.

 

First, delete the Docker container that has been using the volume.

root@vm41 [ ~ ]# docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

e58556770d1b        photon              "/bin/bash"         11 minutes ago      Up 11 minutes                           determined_khorana

root@vm41 [ ~ ]# docker container rm -f e58556770d1b

e58556770d1b

root@vm41 [ ~ ]# docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

root@vm41 [ ~ ]#

 

"Hard disk 2", which corresponds to the volume, has been detached from the Docker host VM. Only "Hard disk 1" remains on vm41.

vdvs-vsan-04.png

 

Next, on the Docker host running on the other ESXi host, start a container attached to the volume "vol01" from before. The vDVS Docker plugin is already installed on the Docker host "vm42" as well.

root@vm42 [ ~ ]# docker plugin ls

ID                  NAME                DESCRIPTION                           ENABLED

3e561e744610        vsphere:latest      VMWare vSphere Docker Volume plugin   true

 

The volumes created on vm41 are visible from this Docker host too.

root@vm42 [ ~ ]# docker volume ls

DRIVER              VOLUME NAME

vsphere:latest      vol01@vsanDatastore-04

vsphere:latest      vol02@ds-nfs-vmdk-01

 

Attach the volume "vol01" and start a container. Because the volume persists data independently of the container, the file created on the volume earlier is still there.

root@vm42 [ ~ ]# docker container run -it -v vol01@vsanDatastore-04:/dir01 photon

Unable to find image 'photon:latest' locally

latest: Pulling from library/photon

d3603a6287f0: Pull complete

Digest: sha256:9cdad7d78710eed3dd4fc5e565bf783aec99ece689d8ab9771b629fd4e5d0ed1

Status: Downloaded newer image for photon:latest

root [ / ]# uname -n

cd2ba7d55444

root [ / ]# cat /dir01/test.f

yo-soro-!

 

In this way, vDVS persists the data used by containers, and that data can be used from a Docker host (VM) running on a different ESXi host. And Docker volumes can be created in the same way on any datastore type: VMFS, NFS, or vSAN.

 

That's it for this look at trying out vDVS.


VMware Fusion 10.0.1 (6754183) crashes iMac


Hi,

 

I use High Sierra and VMware Fusion 10.0.1 (6754183) on an iMac (Retina 5K, 27-inch, late 2015; 4 GHz Intel Core i7; 64 GB RAM; AMD Radeon R9 M395X 4 GB). I have 2 virtual machines with Windows 10 (1709). The iMac itself runs perfectly as long as I do not use VMware (see info below). When I run the machines, after a while there are two scenarios:

 

1.) I get a kind of "drop out" on the iMac: the screen blanks and I have to re-enter my password to get back into macOS. The virtual Windows 10 is totally black; no action is possible.

2.) I get a kernel panic and have to reboot the iMac.

 

What I did to try to solve the issue:

1.) Clean install of Sierra - no effect

2.) Clean install of High Sierra - no effect

3.) Downgrade to VMware Fusion 8.5.1

4.) Reinstall of Windows 10, clean, no apps - no effect

5.) Changed CPU count, 1 or 2 - no effect

6.) Changed RAM, 4 GB or 8 GB - no effect

 

What I have realized and analyzed in the meantime: it seems that the VMs have a strong effect on system and CPU temperature, but not on the internal fan. When I run a burn-in test on the iMac, the temperature is fine and the fans ramp up as expected; no crash, no drop out. When I run the VMs, the temperature and CPU load increase but the fans stay at 1200 rpm, which seems to lead to the two effects.

 

A hint would be great. Thank you in advance.

Constant Mac Crashes (10.1.1)


Hi,

 

Since upgrading from VMware Fusion 8.5 to 10, my Mac crashes constantly.

 

The problem:

When switching to VMware Fusion after not using it for a while, my Mac's (MacBook Pro 15, 2011, 16 GB RAM) screen goes black, it drops to the lock screen, and I have to re-enter my password. Afterwards all my apps restart, but VMware stays black and unresponsive, and I need to close it completely and start it over.

 

What causes the problem:

Not sure really, but it only happens when VMware is open, and when switching to it. I use it in full-screen mode.

Originally I thought my Mac was going bad with graphics issues. But the symptoms don't really point to graphics issues, and it has only happened when using VMware.

 

VMware settings:

- Accelerate 3D is off.

- 2 processor cores.

- 6GB RAM.

- Enable hypervisor applications is on.

 

Besides these options, I think everything else is at its default.

 

What can I try to fix this problem?

100% CPU usage after Windows 10 guest update


Okay, I am stumped here. After running for a little while, I have a guest that appears to max out the CPU, showing 100% usage; however, it's not really utilizing 100%, because the guest shows 92% system idle or something like that.

Once the change occurs, rebooting always shows 100% usage.

The Windows version is 1703, build 15063.608.

 

The HOST is showing 100% usage as well.

 

I was able to revert to a previous snapshot and things worked fine again, but the same problem appears after a day or two.

I do notice that actual problems occur besides just the utilization counter, so I do need to get this fixed.

Has anyone else ever run across this?

 

I do have passthrough enabled for 2 cards, but I have tried it both ways and still get the same result.

No Operating System found?


After creating a Boot Camp VM (not an imported Boot Camp volume) and then starting it up, I get "EFI VM virtual IDE Hard Drive... unsuccessful" and "EFI VM virtual SATA CDROM Drive... unsuccessful", then "No Operating System found" (along with "check your startup disk in the virtual machine settings").

After this it goes to a boot option menu.

 

I have a MacBook Pro with a Windows boot partition that was made with Boot Camp, I am running macOS High Sierra (10.13.4), and I have the latest version of Fusion.

FUJITV iOS Is Available! Watch Japan net TV on iPhone, iPad and iPod touch


April 12, 2018. With much anticipation, FUJITV for iOS has been released! Japanese people overseas can now watch live Japanese TV channels across the globe on iOS devices with FUJITV. To run FUJITV on an iOS device, all you need to do is download a free third-party player named FJPlayerTV and begin watching live Japanese TV on your iPad/iPhone/iPod touch. The FJPlayerTV app is available in the Apple App Store.

 


FUJITV Live has been a focus and favorite of many overseas Japanese since its release, as a high-quality, stable live Japanese TV service for Android mobile/TV/TV box, iPhone/iPad/iPod touch, and PC/Mac users. FUJITV can not only ease the homesickness of overseas Japanese but also help learners improve their Japanese language proficiency.

 

FUJITV iOS giveaway!

Please join the FUJITV Live Reddit and share this post to get 7 days free!

About NSX Manager CLI Login hpet 1: lost 5 rtc interrupts


NSX version: 6.4 and 6.3

 

Hello.

 

When logging in to NSX Manager via the CLI, the following log message is displayed.

 

[6102953.422082] hpet 1: lost 5 rtc interrupts

 

I suspect there is no problem, because the same message is also displayed on another NSX Manager.

However, I have two questions:

 

① What does this log message mean?

② Is NSX Manager operating normally even when this log message is displayed?

 

NSX Time Sync


We are on a closed network in a test lab and don't have a valid time source or NTP server.

We are running ESXi 6.5 on two servers and 6.5 U1 VCSA.

 

We are looking at NSX for DFW (micro-segmentation) as a POC and may later move to a full implementation of NSX.

 

For VCSA time sync, I am using one of the 6.5 ESXi Servers.

The VCSA uses LDAP for AD authentication against a domain controller running as a VM on a 6.0 ESXi host.

 

Since at this time we are not deploying the full NSX system (controllers, etc.):

 

Is there a way to set up NSX DFW to use ESXi host time?

 

or

 

do we need some sort of internal time source on a VM?
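
If you do stand up an internal time source on a VM, pointing every host at it with PowerCLI is straightforward. A sketch, where 'dc01.lab.local' is a placeholder for that source:

# Assumption: 'dc01.lab.local' is your internal NTP source.
Get-VMHost | ForEach-Object {
    Add-VMHostNtpServer -VMHost $_ -NtpServer 'dc01.lab.local'
    Get-VMHostService -VMHost $_ | Where-Object { $_.Key -eq 'ntpd' } | Start-VMHostService
}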

 

 

thanks


Kernel panic in OSX guest on Fusion 10.0.1 when configured for 3 or more CPUs


I have had a lot of kernel panics when booting OSX guests if I configure 3 or more CPUs. It never panics with 1 or 2 CPUs, rarely happens with 3 CPUs, and almost always happens with 4 or more CPUs. Strangely, it does eventually boot after many failed attempts.

 

I don't see the problem on an ESXi server, so I'm guessing it may point to a hardware problem on my laptop (Late 2016 15" MBP w/ Touch Bar), but the machine passes all Apple hardware diagnostics. Is it possible this is a known problem with Fusion? Is it possible to set CPU affinity to prevent Fusion from using certain CPU cores? I believe I have a directory full of boot-failure logs, but I forget where they are written. Should I attach a bunch of them?

 

Thanks in advance!

 

Edit:  The same behavior occurs on Fusion 10.1.0 as well as 10.1.1.  It appears that several other people experience the same issue, so it may indeed be a bug in Fusion.

After migration to VMFS6 Snapshot not possible anymore


After migrating from VMFS5 to VMFS 6.81, I am no longer able to create a snapshot of one specific machine.

 

The machine runs Microsoft Windows Server 2012 R2 Standard on ESXi 6.5.

The compatibility level is set to ESXi 6.5 and later (VM version 13), and Tools 10309 is installed.

 

The machine has two VMDKs. The first is 40 GB with MBR; the second is 2.25 TB with GPT.

The VMFS6 datastore has a size of 3 TB.

Convert EML to PDF


If you have any query about how to convert EML to PDF, you need an ultimate solution, and for that you can download Mailsware EML to PDF Converter. The software converts and prints EML emails to PDF format along with attachments. The tool is very easy to use; even a non-technical person can perform the conversion easily. The best part of this tool is that it converts EML files with their exact formatting, without any change.

 

It comes with more advanced features, such as:

 

  • Converts multiple EML files in bulk
  • Supports transferring non-English EML messages
  • Different file-naming options for saving EML to PDF
  • Maintains metadata properties while converting EML files
  • Browse and save the converted files to the desired path
  • Supports all applications that use the EML format
  • Works with all versions of Windows OS

 

Know more about this: https://steemit.com/eml/@scarlettjones/how-to-convert-eml-file-to-pdf

Any problems for virtual 32-bit under future MacOS?


macOS 10.13.4 warns that 32-bit processes will no longer run in a future version of macOS. Will that have any negative effect on running a 32-bit guest OS like WinXP under 64-bit VMware Fusion? I wonder whether the critical factor is the underlying Intel chip's support for 32-bit execution, or whether that would no longer be accessible once macOS nerfs 32-bit apps.

vra-command list-nodes still shows localhost.localdom for VA (vRealize Automation Appliance)


Hi,

 

I am trying to install vRA 7.3 using silent installation, as described in Silent Installation of vRA 7.2 – A How To Guide - VMware Cloud Management.

After filling in the ha.properties file, when I execute the command below:

 

# pwd

/usr/lib/vcac/tools/install

vra:/usr/lib/vcac/tools/install # bash vra-ha-config.sh

[2018-04-16 10:09:04] [root]  [INFO] EULA is accepted.

[2018-04-16 10:09:04] [root]

[2018-04-16 10:09:04] [root]  **************************************************

[2018-04-16 10:09:04] [root]  [INFO] Start initial data verification

[2018-04-16 10:09:04] [root]  **************************************************

[2018-04-16 10:09:04] [root]

[2018-04-16 10:09:04] [root]  [INFO] Check if vRA Component host or LOAD BALANCER addresses are reachable

[2018-04-16 10:09:04] [root]  [ERROR]  vRA access point not accessible -   <-------------!!!

[2018-04-16 10:09:04] [root]  [INFO] Check if remote host is provided.

[2018-04-16 10:09:04] [root]  [INFO] nslookup Remote Host: VRAIAAS.mylab.local

[2018-04-16 10:09:04] [root]  [INFO] nslookup Remote Host: nslookup exit code: 0

[2018-04-16 10:09:04] [root]  [INFO] Remote host resolved successfully.

[2018-04-16 10:09:04] [root]  [INFO] Check if remote host is provided.

[2018-04-16 10:09:04] [root]  [INFO] nslookup Remote Host: VRAIAAS.mylab.local

[2018-04-16 10:09:04] [root]  [INFO] nslookup Remote Host: nslookup exit code: 0

[2018-04-16 10:09:04] [root]  [INFO] Remote host resolved successfully.

[2018-04-16 10:09:04] [root]  Disabled

[2018-04-16 10:09:04] [root]  ntp                       0:off  1:off  2:off  3:on   4:off  5:on   6:off

[2018-04-16 10:09:05] [root]  Shutting down network time protocol daemon (NTPD)..done

[2018-04-16 10:09:05] [root]  Starting network time protocol daemon (NTPD)..done

[2018-04-16 10:09:05] [root]  Waiting for the ntp settings

[2018-04-16 10:10:05] [root]  [INFO] Verify connection to: vRA VA host: vra.mylab.local

[2018-04-16 10:10:05] [root]  [INFO] Check if remote host is provided.

[2018-04-16 10:10:05] [root]  [INFO] Checking up local host - not needed

[2018-04-16 10:10:35] [root]  [ERROR] localhost.localdom has trouble connecting to vra.mylab.local  <--------------------!!!!

[2018-04-16 10:10:35] [root]  [INFO] Verifying IaaS node is registered to the primary VA: VRAIAAS.mylab.local

[2018-04-16 10:10:35] [root]  [INFO] IaaS iaas_node_id is successfully populated.

vra:/usr/lib/vcac/tools/install # [2018-04-16 10:10:35] [root]

[2018-04-16 10:10:35] [root]  [INFO] Input validation failed! 2 validation errors found!

 

It looks like, in spite of changing the hostname using /opt/vmware/share/vami/vami_config_net, the node name reported by "vra-command list-nodes" remains localhost.localdom.

 

vra:/usr/lib/vcac/tools/install # vra-command list-nodes

Node:

  NodeHost: localhost.localdom   <----------------------- How to change this to reflect vra.mylab.local ???

  NodeId: cafe.node.309634814.8894

  NodeType: VA

Node:

  NodeHost: VRAIAAS.mylab.local

  NodeId: 3D635930-0F3C-4AF0-9313-BD85EBA9FE17

  NodeType: IAAS

vra:/usr/lib/vcac/tools/install # cat /etc/hosts

# Not showing default IPv6 entries

# VAMI_EDIT_BEGIN

# Generated by Studio VAMI service. Do not modify manually.

127.0.0.1    localhost localhost.localdom

15.129.91.34  vra.mylab.local vra localhost.localdom

# VAMI_EDIT_END

 

Question:

How do I change/update the NodeHost entry for the VA to the new name "vra.mylab.local"?

 

Thanks in advance

Velu
