Bits, Bytes, & Radio Waves

A quiet journey through discovery and understanding.

Converging to VCF 9.0.1 from VCF 5.1

Overview

You’re probably raising an eyebrow at the title—and that’s fair. If you’re already running VCF 5, the obvious and entirely reasonable question is: Why not just upgrade to VCF 9 using Lifecycle Manager and call it a day? Honestly, I agree with you.

In a perfect world, upgrades are clean, prechecks are green, and infrastructure behaves exactly as designed. In the real world, however, things occasionally go sideways. Sometimes the environment isn’t as healthy as it should be. Sometimes past decisions come back like a boomerang. And sometimes you inherit a system that makes you quietly whisper, “Huh… that’s interesting.”

This page exists less as a how-to guide and more as an answer to a different question: Can this be done at all? It’s an exploration, not a recommendation—meant to document what’s possible when the “normal” upgrade path isn’t quite so normal.

Along the way, it also raises an important (and often uncomfortable) question: At what point does accumulated technical debt stop being something you upgrade through and start being something you rebuild away from?


⚠️ Reality Check (a.k.a. Read This Before You Get Ideas)

This is not a production playbook.

Everything documented here is explored in a lab environment—where snapshots exist, rollback plans are theoretical until they aren’t, and the blast radius is intentionally limited.

The goal is to answer “Can this be done?” rather than “Should this be done before a long weekend?”

If you’re looking for a calm, predictable upgrade with executives on the bridge, follow official Lifecycle Manager guidance. If you’re curious what happens when the environment isn’t cooperative, this is where we break things on purpose and document the results for the rest of us plebs.


RTFM

Deploy Components by Using VCF Installer to Complete the Converging to VCF Process


Let’s Do This Thing!

1. Navigate to the VMware Cloud Foundation Installer appliance and log in with the admin@local user and password that was provided during the appliance deployment.

https://vcf.installer.appliance.fqdn

2. Under Deploy, select Deployment Wizard and then VMware Cloud Foundation.

3. Assuming there is no existing VCF fleet, select Deploy a new VCF fleet.

4. Since this post is focusing on a VCF-to-VCF convergence, I am going to assume that an existing VMware vCenter Server and NSX Manager are in use. Select VMware vCenter and NSX Manager.

5. On the General Information page, verify Version 9.0.1.0 (or later, depending on when this is read).

Leave the Other option unchecked.

Specify a VCF Instance Name and a Management Domain Name.

Select the Simple (Single-node) deployment model. In a lab, this keeps things lightweight; even in production, this model is an acceptable starting point, since it still lays the groundwork for a highly available configuration later.

Leave Password creation unchecked for now. If this proves problematic later in the process, this step can be revisited and updated.

6. On the VCF Operations page, provide the deployment details for the VCF Operations appliance.

It’s assumed at this point that the underlying networking is already in place. This includes valid DNS records for each fully qualified domain name requested during the workflow—specifically the VCF Operations Appliance, Fleet Management Appliance, and Operations Collector Appliance.
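
Before moving on, it can be worth confirming name resolution from a machine on the management network. A minimal sketch, using hypothetical FQDNs and a hypothetical IP; substitute your own values:

# Forward lookups for the appliance FQDNs requested by the wizard (hypothetical names)
dig +short vcf-ops.lab.local
dig +short vcf-fleet.lab.local
dig +short vcf-collector.lab.local

# Reverse lookup for one of the addresses returned above (hypothetical IP)
dig +short -x 192.168.10.50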

Remember that Password creation checkbox from the previous step? This is where that decision either makes life very simple—or just slightly more complex. Given that this is a VCF post, we’re already comfortable operating in mild to moderate complexity, so this shouldn’t be too painful.

7. VCF Automation is not required to validate this concept, so select I want to connect a VCF Automation instance later.

8. And now, for the (hopefully) easy part: enter the existing infrastructure details.

After clicking Next, the Certificate Thumbprint window may appear. Verify the displayed thumbprints and confirm to proceed.
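
If you want to double-check a thumbprint independently before confirming, openssl can print the certificate fingerprint of any of the existing endpoints. A quick sketch using a hypothetical vCenter FQDN; compare the output against what the installer displays, and adjust the hash algorithm if the installer shows something other than SHA-256:

# Print the SHA-256 fingerprint of the existing vCenter certificate (hypothetical FQDN)
openssl s_client -connect vcenter01.lab.local:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256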


First Break

Okay… so we made it this far.

This lab is built using the default Bill of Materials (BOM) versions from VCF 5.1, and that turns out to matter. The error message at this point indicates that the existing vCenter Server is running at a lower version than what this workflow expects.

In other words, we’ve reached a natural stopping point—not because anything is broken, but because version alignment is now the gating factor. This is one of those moments where the lab politely reminds us that convergence still has rules, even when we’re intentionally wandering off the happy path.

I am going to take time out to upgrade vCenter to a newer version to see if we can get past this.

Alright, we’re back—with a meaningful change.

The vCenter Server has now been upgraded to 8.0.3 (build 24853646). At this point, it is decidedly out of compliance with the VCF 5.1 Bill of Materials, and that’s intentional. For the purposes of this exercise, BOM compliance with the original VCF environment is no longer a constraint.
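
To confirm the new version and build without opening the UI, the vCenter REST API can be queried. A minimal sketch, assuming a hypothetical vCenter FQDN and the SSO administrator account:

# Create an API session (hypothetical FQDN; prompts for the SSO administrator password)
TOKEN=$(curl -sk -u 'administrator@vsphere.local' -X POST \
  https://vcenter01.lab.local/api/session | tr -d '"')

# Return the appliance version and build number as JSON
curl -sk -H "vmware-api-session-id: $TOKEN" \
  https://vcenter01.lab.local/api/appliance/system/version

The returned JSON should show version 8.0.3 and the matching build number.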

Since the objective here is to converge this environment into VCF 9.0.1, what matters is forward compatibility—not historical alignment. This step helps reinforce the core question behind this post: Does it matter that this environment was previously managed by VCF at all?

If this workflow proceeds successfully, it strengthens the case that prior VCF state is largely irrelevant, and that the environment can be treated—functionally—as existing infrastructure being onboarded into a new VCF deployment.


Back to the VCF Installer…

9. On the SDDC Manager page, enter the deployment details for the new SDDC Manager appliance.

It’s important to call out that these details do not apply to the existing VCF 5.1 SDDC Manager. This configuration is for the new SDDC Manager instance that will be deployed as part of VCF 9.0.1.

10. Almost there. On the Review page, download the JSON specification to keep a record of all deployment inputs.
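
Since the specification can contain credentials, it may be worth redacting anything password-like before archiving it. A rough sketch with jq, using a hypothetical file name; the actual key names in the spec may differ:

# Redact any key containing "password" before storing the spec (requires jq 1.6+ for walk)
jq 'walk(if type == "object"
         then with_entries(if (.key | test("password"; "i"))
                           then .value = "REDACTED" else . end)
         else . end)' vcf-deployment-spec.json > vcf-deployment-spec.redacted.json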

11. At this point, all that’s left is to run the validations and hope that everything… well… actually validates.


Second Break

Okay—another pause in the process, but an expected one.

This outcome was partly anticipated given that the lab started with the default VCF 5.1 component versions. For the VCF Installer to successfully converge the existing infrastructure, certain components must meet minimum version requirements.

Those components were deliberately left untouched. The goal was not to preemptively upgrade anything, but to allow the VCF Installer itself to surface version incompatibilities as part of the workflow. In other words, this wasn’t an oversight—it was a way to validate that the installer correctly identifies and reports version gaps rather than silently proceeding or failing later in less obvious ways.

At this point, the process is doing exactly what it should: enforcing guardrails before convergence continues.

So, let’s assess these, one at a time.

For this walkthrough, validations are re-run after each individual remediation. This is intentional. Making a single change at a time reduces the risk of introducing additional variables and avoids masking root causes behind cascading configuration updates. It also aligns closely with a scientific method approach: change one variable, observe the outcome, then proceed.

All screenshots in this section will be captured directly from the lab environment to reflect the actual UI and validation results observed during testing.


The Evacuate Offline VMs upgrade policy is currently set in vSphere Lifecycle Manager.

To review or modify the setting, log in to vCenter Server and navigate to Lifecycle Manager > Settings > Cluster Lifecycle > Images.

As seen in the image, the setting is configured as Do not migrate powered off and suspended VMs to other hosts in the cluster.

Enable the Migrate powered off and suspended VMs to other hosts in the cluster setting.

At this point, you can select Re-run validations to confirm that the previously reported issue has been resolved. If successful, the validation message will no longer appear under Existing Components.


Upgrading NSX away from the VCF 5.1 Bill of Materials will, unsurprisingly, make the existing VCF 5.1 SDDC Manager unhappy. That outcome is expected and accepted for the purposes of this exercise.

For this post, manually upgrade the NSX appliances to a version explicitly called out by the VCF Installer as compatible with the convergence workflow. At this stage, compatibility with VCF 9.0.1 takes priority over maintaining alignment with the legacy VCF 5.1 environment.
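
To confirm the running NSX version before and after the manual upgrade, the NSX Manager node API can be queried. A minimal sketch, assuming a hypothetical NSX Manager FQDN and the admin account:

# Query the NSX Manager node version (hypothetical FQDN; prompts for the admin password)
curl -sk -u admin https://nsx01.lab.local/api/v1/node/version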

This step further reinforces the premise of this post: once convergence is the goal, historical BOM compliance becomes informational rather than authoritative.

Before the upgrade.

After the upgrade.

At this point, you can select Re-run validations to confirm that the previously reported issue has been resolved. If successful, the validation message will no longer appear under Existing Components.


Continuing…

All that is left to do at this point is click the Deploy button and wait for the process to run to completion. All the hard work has (hopefully) already been done.

Start time: 9:57 AM

Completed time: 12:47 PM

Total time: 2 hours 50 minutes

This lab environment has its share of performance constraints, such as spinning disks and nested virtualization, so completion times will naturally vary depending on hardware and topology. That variability is expected. What matters more is the outcome: this process does work. The convergence is viable, even under less-than-ideal conditions.

That alone makes this a meaningful result. Not because it was fast or elegant, but because it proves the path is possible, and that's worth documenting.


Finishing Up

On the Completion page, download the final copy of the JSON specification and review the generated passwords. This applies whether credentials were automatically created or manually entered. Capturing and validating this information early helps minimize access issues when logging into the newly deployed appliances later.

This is also a good point to connect the new VCF Operations to an offline depot, if one is required for your environment. Establishing this early helps avoid delays when lifecycle operations are needed later.

Log in to VCF Operations, expand Fleet Management, and select Lifecycle.
Select the Depot Configuration tab, then choose Configure under Offline Depot.

In this lab, the offline depot is hosted on Photon OS 5 using an Apache web server (VMware Cloud Foundation (VCF), VCF Download Tool (v 9.0.1.0) and Offline Depot with Photon OS 5). Choose the OfflineDepotType that best fits your own infrastructure.

Configure the Repository URL (using https://) and Username. When prompted for the Password, either select an existing credential or create a new one.

Finally, select I accept the imported certificate to complete the configuration.

Verify the Offline Depot is connected and active.

Repeat the process for the new instance as well.
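
If the depot fails to connect at either level, a quick reachability check from the SDDC Manager appliance can rule out network or credential problems before looking at certificates. A small sketch, using a hypothetical depot FQDN; adjust the path to match how your depot is published:

# From the SDDC Manager appliance: confirm the depot answers over HTTPS with the supplied user
# (-k skips certificate validation for this connectivity test only)
curl -Ik -u depotuser https://offline-depot.lab.local/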

You may encounter the Secure protocol communication error if the server has not had certificates added. This is common on a freshly deployed system.

To confirm the error, make an SSH connection to the new SDDC Manager appliance with the vcf user and tail the logs:

tail -f /var/log/vmware/vcf/lcm/lcm-debug.log | grep -i 'Secure protocol communication error'

Switch to the root user with su.

Obtain the Offline Depot certificate.

openssl s_client -connect <offline-depot-host>:443 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > /tmp/offline-depot.crt

Use the Java keytool to import the certificate into the trust chain.

keytool -import -trustcacerts -file /tmp/offline-depot.crt -keystore /usr/lib/jvm/openjdk-java17-headless.x86_64/lib/security/cacerts -alias <offline-depot-fqdn>

Enter the keystore password, changeit. (Yes, that is the actual default password!) When prompted with Trust this certificate? [no]:, enter yes.
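
Before restarting services, the import can be verified by listing the alias back out of the same keystore; a small check using the same placeholder alias:

# Confirm the certificate is now present in the trust store (keystore password: changeit)
keytool -list -keystore /usr/lib/jvm/openjdk-java17-headless.x86_64/lib/security/cacerts \
  -alias <offline-depot-fqdn>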

Restart the SDDC Manager services.

/opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh

Log back into VCF Operations and try to add the Offline Depot to the Instance again. If you are still having trouble, be sure to tail the log on the SDDC Manager. Another common issue may be bad credentials. The log should guide you in the direction needed.


Additional Details

If you are still able to log into the VCF 5.1 SDDC Manager, you might notice the hosts are still actively managed.

If you log into VCF Operations, you might notice that the hosts are actively managed here as well, and that VCF Operations recognizes them based on the existing environment's version information.


Conclusion: Convergence, Confirmed

The VCF-to-VCF convergence has completed successfully. The offline depot is connected, lifecycle operations are available through the new VCF Operations, and the infrastructure is now being managed by VCF 9.0.1.

At this point, the original VCF 5.1 Bill of Materials is no longer the governing constraint. What matters now is forward compatibility and operational alignment with the current VCF instance. Once the required component versions were met and validated, the installer treated the environment based on its present state—not its history.

That outcome reinforces the core premise of this post. This was not about prescribing a standard upgrade path, nor about bypassing best practices. It was about answering a narrower, more practical question: Can an existing VCF environment—imperfect, drifted, or unintentionally noncompliant with its original BOM—be successfully converged into a new VCF deployment?

In this case, the answer is yes.

