Tag: VMware

  • 🖥️ Path to Become an Infrastructure Engineer (Ivy Falls)

    From Customer Service Rep to PC Specialist, Network Engineer, System Administrator, DevSecOps Engineer, and now Infrastructure Engineer — a journey built on faith, discipline, dedication, and gratitude.

    Introduction: The Path Is the Practice

    My story didn’t begin with servers or certifications.
    It began at All Electronics Corporation in Van Nuys, California, where I worked full-time from 6:30 A.M. to 3:00 P.M., taking two Metro buses and walking a block from the station — rain or shine — from December 1990 to late 1995.

    I woke as early as 4 A.M. to catch the first bus at Western and 3rd Street in Los Angeles, sometimes heading straight to my evening shift at the Taco Bell drive-thru in Glendale.
    Those were humble, exhausting days that taught me discipline and grit — lessons that would shape every part of my career.

    At All Electronics, I became fascinated by the IC — Integrated Circuit, the heart of every desktop computer. I wanted to understand it, not just sell it.

    Back in my Koreatown apartment, I turned curiosity into calling.
    No Google. No YouTube. No AI.
    Just library books and endless nights of self-study. I intentionally crashed my computers and rebuilt them until every fix became muscle memory.

    Once confident, I started offering free repairs and computer lessons to friends, relatives, and senior citizens — setting up printers, fixing networks, and teaching email basics. Those acts of service opened the door to my first full-time IT job at the University of Southern California (USC) as a PC Specialist.

    I still remember waiting at the bus stop in the dark, dreaming of the day I wouldn’t have to ride in the rain. Years later, those same dreams became reality — not through luck, but through faith, discipline, dedication, and gratitude.
    The rides changed — from buses to a BMW, an Audi, and now a Tesla — but what never changed was the purpose: to keep moving forward while staying grounded in gratitude.


    Season of Refinement

    While working full-time at USC, I entered what I call my season of refinement.
    By day I supported campus systems and users; by night I was a full-time student at Los Angeles City College (LACC) and a weekend warrior at DeVry University, studying Management in Telecommunications.

    It was during this time that Microsoft introduced the MCSE (Microsoft Certified Systems Engineer) program.
    One of my professors at LACC encouraged me to earn it, saying, “Once you have that license, companies will chase you.”
    He was right — that MCSE became my ticket to GTE (now Verizon), my first step into enterprise-scale IT.

    My tenure at GTE was brief because Aerospace came calling with a six-figure offer just before Y2K — an opportunity too good to refuse.
    After Aerospace, I founded my own consulting firm — Ahead InfoTech (AIT) — and entered what I now call my twelve years of plenty.

    One of my earliest major clients, USC Perinatal Group, asked me to design and implement a secure LAN/WAN connecting satellite offices across major hospitals including California Hospital Medical Center, Saint Joseph of Burbank and Mission Hills, and Hollywood Presbyterian Hospital.
    We used T1 lines with CSU/DSU units and Fortinet firewalls; I supplied every workstation and server under my own AIT brand.

    Through that success I was referred to additional projects for Tarzana and San Gabriel Perinatal Groups, linked by dedicated frame-relay circuits — early-era networking at its finest.
    Momentum led to new partnerships with The Claremont Colleges and the City of West Covina, where I served as Senior Consultant handling forensic analysis and SMTP/email engineering.

    Word spread. One attorney client introduced me to an opportunity in American Samoa to help design and build a regional ISP, and later to a contract with Sanyo Philippines.
    During this period Fortinet was still new, and I became one of its early resellers.
    Refusing to rely on mass-produced systems, I built AIT servers and workstations from the ground up for every environment.
    DSL was just emerging, yet most clients still relied on dedicated T1s — real hands-on networking that demanded precision and patience.

    Those were the twelve years of plenty — projects that stretched from local hospitals to overseas data links, from LAN cables to international circuits.
    By the time AWS arrived in 2006 and Azure followed in 2010, I had already been building and managing distributed networks for years.

    When I returned to Corporate America, my first full-time role was at PayForward, where I led the on-prem to AWS migration, designing a multi-region environment spanning US-East-1 (Availability Zones 1a and 1b) and US-West, complete with VPCs, subnets, IAM policies, and full cloud security.
    That’s when I earned my AWS certifications, completing a journey that had begun with physical servers and matured in the cloud.

    Education, experience, and certification merged into one lesson:
    Discipline comes first. Validation follows.
    Degrees and credentials were never my starting line — they were the icing on the cake of years of practice, service, and faith.


    My Philosophy: One Discipline, Many Forms

    Whether in Martial Arts, IT, or Photography, mastery comes from repetition, humility, and curiosity.
    As Ansel Adams wrote:

    “When words become unclear, I shall focus with photographs. When images become inadequate, I shall be content with silence.”

    Everyone can take a photo; not everyone captures a masterpiece.
    Everyone can study tech; not everyone understands its rhythm.
    Excellence lives in awareness — the moment when curiosity meets purpose.


    The Infrastructure Engineer Path

    1️⃣ Foundations

    Learn the essentials: Windows Server, Active Directory, DNS/DHCP, GPOs, Networking (VLANs, VPNs), Linux basics, and PowerShell (a quick hands-on sketch follows below).
    Free Resources:
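
    To get a feel for these pieces before building a full lab, a few stock cmdlets go a long way. A minimal sketch, assuming a domain-joined Windows Server with the DNS and DHCP roles plus the RSAT Active Directory module installed (the domain and OU names below are placeholders):

    # Quick sanity checks on a lab domain controller (all names are placeholders)
    Get-Service -Name DNS, DHCPServer                       # core role services installed and running?
    Resolve-DnsName -Name corp.example.local                # basic DNS resolution
    Get-ADUser -Filter * -SearchBase "OU=Staff,DC=corp,DC=example,DC=local" |
        Select-Object Name, Enabled -First 5                # sample Active Directory query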

    2️⃣ Cloud Platforms

    Start with AZ-104 Azure Administrator.
    Use free tiers to lab: Azure | AWS | GCP.
    Courses:

    3️⃣ Automation & DevOps

    Learn IaC (Terraform/Bicep), Docker, Kubernetes, and CI/CD.
    Watch TechWorld with Nana.

    4️⃣ Labs & Simulators

    No hardware? Try:

    5️⃣ Portfolio

    Document every lab, build diagrams, post scripts on GitHub, and write short lessons learned.


    Final Reflection

    From bus stops to boardrooms, from fixing desktops to deploying clouds — the principles never changed: serve first, learn always, and build things that last.
    This blog will continue to evolve as technology changes — come back often and grow with it.


    🪶 Closing Note

    I share this story not to boast, but to inspire those still discovering their own path in technology.
    Everything here is told from personal experience and memory; if a date or detail differs from official records, it’s unintentional.
    I’m grateful for mentors like my LACC professor, who once told me to look up a name not yet famous — Bill Gates — and earn my MCSE + I.
    He was right: that single decision opened countless doors.

    I don’t claim to know everything; I simply kept learning, serving, and sharing.
    My living witnesses are my son, my younger brother, and friends who once worked with me and now thrive in IT.
    After all these years, I’m still standing — doing what I love most: helping people through Information Technology.


    ⚖️ Legal Disclaimer

    All events and company names mentioned are described from personal recollection for educational and inspirational purposes only. Any factual inaccuracies are unintentional. Opinions expressed are my own and do not represent any past or current employer.

    © 2012–2025 Jet Mariano. All rights reserved.
    For usage terms, please see the Legal Disclaimer.

  • Cloning a VM with PowerShell and VMware PowerCLI


    Intro

    When you need to quickly spin up a test or lab machine, cloning an existing VM can save hours compared to building from scratch. VMware PowerCLI brings the full power of vSphere management into PowerShell. Here’s a simple walkthrough.


    Step 1 — Install VMware PowerCLI

    Open PowerShell as administrator and run:

    Install-Module -Name VMware.PowerCLI -Scope CurrentUser
    Import-Module VMware.PowerCLI
    

    This installs the official VMware module and loads it into your session.
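
    Optionally, if your vCenter uses a self-signed certificate, you may want to relax PowerCLI's certificate check for lab use (a judgment call, not a requirement):

    # Optional: ignore invalid (self-signed) certificates and opt out of CEIP for the current user
    Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -ParticipateInCEIP $false -Scope User -Confirm:$false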


    Step 2 — Connect to vCenter

    You’ll need credentials for your vCenter server.

    Connect-VIServer -Server <vcenter-server.domain> -User <username> -Password '<password>'
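
    If you'd rather keep the password out of your shell history, an interactive credential prompt works just as well:

    # Prompt for credentials instead of passing -Password in plain text
    Connect-VIServer -Server <vcenter-server.domain> -Credential (Get-Credential)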
    

    Step 3 — Clone an Existing VM

    Pick the source VM, the new VM's name, the target host, and the datastore. Example:

    # Define source VM
    $sourceVM = "Base-Win10-VM"
    
    # Clone to new VM
    New-VM -Name "Test-VM01" -VM $sourceVM `
           -VMHost (Get-VMHost -Name <target-host>) `
           -Datastore (Get-Datastore -Name <datastore-name>) `
           -Location (Get-Folder -Name "VMs")
    
    • -VM points to the existing machine you’re cloning.
    • -VMHost pins the new VM to a specific ESXi host.
    • -Datastore chooses where to store the VM’s disks.
    • -Location defines the vCenter folder for organization.
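
    For larger source VMs you may not want to block the console while the disks copy. A hedged variant of the same clone, run asynchronously:

    # Kick off the clone as a background task, then wait for it to complete
    $task = New-VM -Name "Test-VM01" -VM $sourceVM `
                   -VMHost (Get-VMHost -Name <target-host>) `
                   -Datastore (Get-Datastore -Name <datastore-name>) `
                   -Location (Get-Folder -Name "VMs") -RunAsync
    Wait-Task -Task $task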

    Step 4 — Power On the New VM

    Start-VM -VM "Test-VM01"
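
    A quick check that the clone exists and is running:

    # Verify the new VM's power state and basic sizing
    Get-VM -Name "Test-VM01" | Select-Object Name, PowerState, NumCpu, MemoryGB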
    

    Final Reflection

    PowerCLI makes cloning fast, repeatable, and scriptable. Instead of clicking through the vSphere UI, you can prepare test VMs with a single command.


    © 2012–2025 Jet Mariano. All rights reserved.
    For usage terms, please see the Legal Disclaimer.

  • Ops Note — Picking the best vSAN host with one PowerCLI check

    Excerpt
    Quick, repeatable way to see CPU/RAM/vSAN headroom across hosts and choose where to place the next VM. Today it pointed us to vsan2.


    Intro
    Before cloning a new Windows VM, I ran a fast PowerCLI sweep across three vSAN hosts to compare free CPU, free memory, and vSAN free space. All three had identical vSAN capacity; vsan2 had the most free RAM, so that’s the landing spot.


    Straight line (what I did)
    • Pulled CPU and memory usage per host (MHz/MB) and calculated free.
    • Queried each host’s vSAN datastore(s) and summed free/total GB.
    • Printed a compact table to compare vsan1/2/3 at a glance.
    • Chose the host with the highest Mem_Free_GB (tie-break on vSAN free).


    Command (copy/paste)

    # Hosts to check (redacted)
    $hosts = 'vsan1.example.local','vsan2.example.local','vsan3.example.local'
    
    $report = foreach ($h in $hosts) {
      try {
        $vmh    = Get-VMHost -Name $h -ErrorAction Stop
        $cpuTot = $vmh.CpuTotalMhz;  $cpuUse = $vmh.CpuUsageMhz
        $memTot = $vmh.MemoryTotalMB; $memUse = $vmh.MemoryUsageMB
    
        $vsan      = $vmh | Get-Datastore | Where-Object { $_.Type -eq 'vsan' }
        $dsCapGB   = ($vsan | Measure-Object CapacityGB  -Sum).Sum
        $dsFreeGB  = ($vsan | Measure-Object FreeSpaceGB -Sum).Sum
        $dsFreePct = if ($dsCapGB) { [math]::Round(100*($dsFreeGB/$dsCapGB),2) } else { 0 }
    
        [pscustomobject]@{
          Host          = $vmh.Name
          CPU_Free_GHz  = [math]::Round(($cpuTot-$cpuUse)/1000,2)
          CPU_Total_GHz = [math]::Round($cpuTot/1000,2)
          CPU_Free_pct  = if ($cpuTot) { [math]::Round(100*(($cpuTot-$cpuUse)/$cpuTot),2) } else { 0 }
          Mem_Free_GB   = [math]::Round(($memTot-$memUse)/1024,2)
          Mem_Total_GB  = [math]::Round($memTot/1024,2)
          Mem_Free_pct  = if ($memTot) { [math]::Round(100*(($memTot-$memUse)/$memTot),2) } else { 0 }
          vSAN_Free_GB  = [math]::Round($dsFreeGB,2)
          vSAN_Total_GB = [math]::Round($dsCapGB,2)
          vSAN_Free_pct = $dsFreePct
        }
      } catch {
        [pscustomobject]@{ Host=$h; CPU_Free_GHz='n/a'; CPU_Total_GHz='n/a'; CPU_Free_pct='n/a';
          Mem_Free_GB='n/a'; Mem_Total_GB='n/a'; Mem_Free_pct='n/a';
          vSAN_Free_GB='n/a'; vSAN_Total_GB='n/a'; vSAN_Free_pct='n/a' }
      }
    }
    
    $report | Format-Table -AutoSize
    
    # Optional: pick best host by RAM, then vSAN GB
    $best = $report | Where-Object { $_.Mem_Free_GB -is [double] } |
            Sort-Object Mem_Free_GB, vSAN_Free_GB -Descending | Select-Object -First 1
    "Suggested placement: $($best.Host) (Mem free: $($best.Mem_Free_GB) GB, vSAN free: $($best.vSAN_Free_GB) GB)"
    

    Result today
    • vsan2 showed the most free RAM, with CPU headroom similar across all three and identical vSAN free space.
    • Suggested placement: vsan2.


    Pocket I’m keeping
    • Check host headroom before every clone—30 seconds now saves hours later.
    • Prefer RAM headroom for Windows VDI/worker VMs; CPU is usually similar across nodes.
    • Keep a one-liner that prints the table and the suggested host (see the sketch below).
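
    One way to keep that check at your fingertips is a small profile function. A minimal sketch, assuming the report-building loop above has been saved as a script that emits the $report objects (the script path and function name below are placeholders):

    # Hypothetical profile function; Get-VsanHeadroom.ps1 is assumed to output the report objects built above
    function Get-BestVsanHost {
        $report = & "$HOME\Scripts\Get-VsanHeadroom.ps1"
        $report | Format-Table -AutoSize
        $best = $report | Where-Object { $_.Mem_Free_GB -is [double] } |
                Sort-Object Mem_Free_GB, vSAN_Free_GB -Descending | Select-Object -First 1
        "Suggested placement: $($best.Host) (Mem free: $($best.Mem_Free_GB) GB)"
    }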


    What I hear now
    Clone to vsan2, power up, then let DRS/vMotion rebalance after the build window. Repeat this check whenever adding workloads or after maintenance.

    © 2012–2025 Jet Mariano. All rights reserved.
    For usage terms, please see the Legal Disclaimer.

  • 🌥️ The Cloud Above Us

    PIMCO (Newport Beach HQ, CA) 🌍 — Global financial services supporting regions in NA, EMEA, APAC.
    Church (Riverton Office Building, UT) ⛪ — Worldwide infrastructure with 200k employees and over 80k missionaries.
    Monster Energy (Corona HQ, CA) ⚡ — Global enterprise IT operations across NA, EMEA, APAC.
    City National Bank (Downtown LA, CA) 🏙️ — U.S. banking systems at scale.

    A journey across scales: national (CNB), global (PIMCO & Monster Energy), and worldwide (The Church).


    Every IT career tells a story, and mine has moved through three different scales of impact:

    Company-Level Foundations → At PayForward, I migrated an entire on-prem environment into AWS. That meant setting up VPCs, building HA Exchange clusters with load balancers, and proving the power of the cloud for a fast-moving startup.

    Regional / Global Scale → At Monster Energy and PIMCO, the work stretched across North America, EMEA, and APAC. The systems never slept. VMware clusters and M365 tenants had to function as one, even though users were scattered across time zones and continents.

    Worldwide Reach → At the Church, the scale expanded beyond regions. Over 200,000 employees and over 80,000 missionaries, connected by systems that had to reach every corner of the globe, demanded both technical precision and spiritual responsibility.

    This journey shows that the “cloud above us” isn’t just AWS, Azure, or GCP — it’s the ability to design, secure, and sustain systems at every possible scale.

    A colleague once told me: “Automate, or eliminate.” In IT, that isn’t just a clever saying — it’s survival. At the scale of hundreds or even thousands of VMs, EC2 instances, or mailboxes, doing things manually is not just unrealistic — it’s risky. What automation can finish in under 10 minutes might take days or weeks by hand, and even then would be prone to errors.

    That’s why Python, PowerShell, Bash, and automation frameworks became part of my daily toolkit. Not to flaunt, but because without automation, no single engineer could handle the demands of environments as large as PIMCO, Monster Energy, or the Church.


    Snippet 1: AWS (My PayForward Days)

    import boto3
    
    # Connect to AWS S3
    s3 = boto3.client('s3')
    
    # List buckets
    buckets = s3.list_buckets()
    print("Your AWS buckets:")
    for bucket in buckets['Buckets']:
        print(f"  {bucket['Name']}")
    

    From racks of servers to a few lines of Python—that’s the power of AWS.

    Snippet 2: PowerShell + Azure (My Church Years, CNB)

    # Sign in to Azure
    Connect-AzAccount

    # List every resource group and its region
    Get-AzResourceGroup | Select-Object ResourceGroupName, Location
    

    One line, and you can see every Azure resource group spread across the world. A task that once required data center visits and clipboards is now just a command away.

    Snippet 3: PHP + GCP (Expanding Horizons)

    <?php
    // Composer autoloader for the google/cloud-storage client library
    require 'vendor/autoload.php';

    use Google\Cloud\Storage\StorageClient;

    // Authenticate with a service-account key file
    $storage = new StorageClient([
        'keyFilePath' => 'my-service-account.json'
    ]);

    // List every bucket in the project
    $buckets = $storage->buckets();

    foreach ($buckets as $bucket) {
        echo $bucket->name() . PHP_EOL;
    }
    

    Snippet 4: VMware + M365 (Monster Energy, PIMCO, and Beyond)

    # Connect to vCenter and list VMs across data centers
    Connect-VIServer -Server vcenter.global.company.com -User admin -Password pass
    Get-VM | Select Name, PowerState, VMHost, Folder
    
    # Quick check of licensed users in M365 (global tenants)
    Connect-MgGraph -Scopes "User.Read.All"
    Get-MgUser -All -Property DisplayName, UserPrincipalName, UsageLocation |
        Group-Object UsageLocation |
        Select Name, Count
    

    One script, and suddenly you’re seeing footprints of users spread across the globe — NA, EMEA, APAC, or even worldwide. That’s the reality of modern IT infrastructure.


    The “cloud above us” is both a literal technology — AWS, Azure, and GCP that I’ve worked across — and a metaphor. It represents resilience, scalability, and unseen support. Just as automation carries workloads we could never handle by hand, life has storms we cannot carry alone.

    From startups making their first move to the cloud, to global financial institutions, to worldwide organizations with hundreds of thousands of users, the lesson is the same: we are not meant to fight every battle manually.

    We are given tools, teammates, and even unseen strength from above to keep moving forward. The same way a script can manage thousands of servers or accounts without error, trust and preparation help us navigate the storms of life with less fear.

    ☁️ Above every storm, there’s always a cloud carrying potential. And above that cloud, always light waiting to break through.

    Before my cloud journey, I also spent nine years in forensic IT supporting law enforcement — a grounding reminder that technology isn’t only about systems and scale, but about accountability and truth.

    © 2012–2025 Jet Mariano. All rights reserved.
    For usage terms, please see the Legal Disclaimer.
