Applied AI in my work

After departing Verizon at the end of 2024, I took a brief sabbatical to reflect, recharge and build. I created the portfolio website I always wanted, took on consulting work and filed paperwork for an LLC (more on that to come). Oh, and I developed a lightweight AI agent that applies to jobs on my behalf, a personal experiment in agentic automation broken down here.

2025: Roadmaps, reinforcement and real-time scoring

Recently, I teamed up with an enterprise video communication platform. Its product suite was evolving to include more AI features, and I helped define a shared product vision that positioned AI prominently within broader vertical and horizontal go-to-market narratives.

One initiative involved migrating to a more efficient speech-to-text (STT) model and integrating it with noise suppression software trained on real-world customer videos.

Another initiative: a machine learning– and computer vision–powered video scoring system that assessed both technical quality and adherence to industry workflows. Using a hybrid of open-source and proprietary models trained to detect blur, content elements, framing and motion artifacts, the system enabled customizable and scalable visual intelligence. It was being enhanced to support shake detection, PII blurring, profanity filtering and reinforcement learning from users’ datasets.
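
Blur detection, in particular, can be approximated with a classical signal before any learned model runs. Below is a minimal sketch using OpenCV’s variance-of-Laplacian measure; the function name and threshold are illustrative assumptions, not the platform’s proprietary logic.

```python
import cv2

def blur_score(frame_path: str, threshold: float = 100.0) -> tuple[float, bool]:
    """Score frame sharpness via variance of the Laplacian.

    Low variance means few strong edges, a common proxy for blur.
    The threshold is an illustrative placeholder, not a production value.
    """
    image = cv2.imread(frame_path)
    if image is None:
        raise FileNotFoundError(frame_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    variance = cv2.Laplacian(gray, cv2.CV_64F).var()
    return variance, variance < threshold
```

A cheap gate like this typically routes frames to heavier learned models only when the fast check is inconclusive.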


2024: GenAI in the API stack

As the senior-most product manager within Verizon’s API First Initiative, I led efforts to enhance a new developer platform with AI capabilities. One future-facing project involved collaboration with the GenAI and Quantum Computing team to operationalize Gorilla, a large language model (LLM) built at UC Berkeley.

Gorilla was tapped to power two core features…

  • an API discovery assistant that recommended integration opportunities based on application use cases, and

  • a documentation enhancement engine that drew on governance standards to suggest improved taxonomy, metadata and reference descriptions.

In both cases, Gorilla was intended to act as a retrieval-augmented generation (RAG) system, continuously trained on internal documentation repos and fine-tuned to respond with accuracy and context sensitivity.
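
In outline, a retrieval-augmented flow is simple: embed the documentation corpus offline, retrieve the passages most relevant to a developer’s query, and let the model generate from that grounded context. Here is a minimal sketch using sentence-transformers; the model choice, placeholder chunks and `llm` callable are assumptions for illustration, not Verizon’s actual stack.

```python
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Placeholder chunks; real doc repos would be chunked and embedded offline.
doc_chunks = [
    "POST /payments creates a payment and returns a transaction ID.",
    "GET /devices lists devices registered to an account.",
]
doc_vectors = embedder.encode(doc_chunks, convert_to_tensor=True)

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k documentation chunks most similar to the query."""
    query_vec = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_vec, doc_vectors, top_k=k)[0]
    return [doc_chunks[hit["corpus_id"]] for hit in hits]

def answer(query: str, llm) -> str:
    """Ground the model's answer in retrieved documentation."""
    context = "\n\n".join(retrieve(query))
    prompt = f"Using only this documentation:\n{context}\n\nQuestion: {query}"
    return llm(prompt)  # llm is any callable wrapping the model, e.g. Gorilla
```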

These implementations aimed not only to make API First’s API catalog more discoverable and helpful, but also to demonstrate how generative AI could support enterprise developers without compromising standards or scalability.


2022: Making the case for ML at scale

Following hands-on implementation work with Verizon Cloud, I was tapped to co-author and lead-edit a technical paper for Verizon’s Global Network & Technology division: Machine Learning: Transformative, Data-Driven Decision Making at Scale.

The paper outlined how supervised, unsupervised and reinforcement learning models were being applied across platforms and products to enhance operations, improve retention and reduce fraud.

Use cases spanned anomaly detection, demand forecasting, content personalization and identity verification.

  • In fintech and consumer wireless, ML powered real-time fraud scoring.

  • In Verizon Cloud, it enabled frictionless content organization and recovery through smart tagging and visual analysis.

The paper emphasized the need for robust data infrastructure and sustained investment, illustrating how that investment yields outsized value as systems mature.


2019–2022: Turning storage into storytelling

My AI journey began in earnest at Verizon Cloud, where the cross-functional teams I led evolved the product from basic backup into smart, personalized photo and video experiences. Over several years, I worked closely with engineers, other PMs, designers, copywriters and data scientists to introduce machine learning features that became central to the product’s value proposition.

These features were grounded in computer vision techniques: models that could interpret visual content as humans might, tagging images, detecting people and objects, evaluating quality and finding meaningful connections across billions of media files.

This was achieved through…

  • Image recognition and tagging using CNNs

  • Gallery generation and face clustering using FaceNet and K-Means (see the sketch after this list)

  • Similarity and quality scoring based on embedding vectors, blur detection and user engagement signals

  • Content personalization using feedback loops, online learning and region-specific model tuning
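
To make the clustering step concrete: FaceNet maps each detected face to a fixed-length embedding (128 dimensions in the original paper), and gallery generation then reduces to clustering in that embedding space. A minimal sketch with scikit-learn, using synthetic vectors in place of a real FaceNet pipeline and a fixed cluster count for brevity:

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for FaceNet output: one 128-d embedding per detected face.
rng = np.random.default_rng(seed=0)
face_embeddings = rng.normal(size=(500, 128))

# In practice k would be estimated (e.g. with silhouette scores); fixed here.
kmeans = KMeans(n_clusters=12, n_init=10, random_state=0)
labels = kmeans.fit_predict(face_embeddings)

# Each cluster becomes a candidate per-person gallery.
galleries = {label: np.where(labels == label)[0] for label in set(labels)}
```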

Two marquee features, Flashbacks and Stories, combined metadata, vector similarity and heuristic scoring to auto-generate albums that surfaced timely, emotionally resonant memories. Whether it was “New Year’s Eve 2020” or “Hot Fun in the Summertime” (R.I.P., Sly Stone), users received curated collections that reflected not just content but context, enhanced by clever, human-written copy that added a layer of delight.
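
A heavily simplified sketch of how those signals can blend into one heuristic: score each photo on anniversary proximity, theme fit and quality, then build albums from the top scorers. The weights, field names and seven-day window below are hypothetical stand-ins for the production logic.

```python
from datetime import datetime

def memory_score(photo: dict, today: datetime, theme_similarity: float) -> float:
    """Blend anniversary proximity, theme fit and quality into one score.

    photo: expects 'taken_at' (datetime) and 'quality' (0..1) keys.
    theme_similarity: cosine similarity between the photo's embedding and a
    theme vector (e.g. "summer"), computed upstream. Weights are illustrative.
    """
    days_off = abs(today.timetuple().tm_yday - photo["taken_at"].timetuple().tm_yday)
    anniversary = max(0.0, 1.0 - days_off / 7)  # peaks on the same calendar date
    return 0.5 * anniversary + 0.3 * theme_similarity + 0.2 * photo["quality"]
```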

The impact of both features was measurable: app ratings climbed from 4.3 to 4.6, and user engagement surged as the product shifted from a passive utility into an intelligent curator of personal memories that automatically organized content into meaningful collections.

And, behind the scenes, the Cloud team explored how ML could power operational insights like ingest anomaly detection and performance optimization across a microservices-based infrastructure. These efforts aimed to reduce false positives during high-traffic periods (like holidays) while helping development teams stay ahead of performance dips and potential outages.
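
One common way to frame that kind of ingest monitoring is a rolling baseline with a generous deviation threshold, so expected holiday ramps don’t trip alerts while sudden spikes or drops still do. A minimal sketch of a z-score detector over per-minute upload counts; the window size and threshold are illustrative assumptions.

```python
import numpy as np

def ingest_anomalies(counts: np.ndarray, window: int = 60, z_max: float = 4.0) -> list[int]:
    """Flag minutes whose upload count deviates sharply from the recent baseline.

    counts: uploads per minute. A wide window and a high z_max tolerate
    gradual holiday ramps while still catching abrupt spikes or drops.
    """
    flagged = []
    for t in range(window, len(counts)):
        baseline = counts[t - window:t]
        mean, std = baseline.mean(), baseline.std()
        if std > 0 and abs(counts[t] - mean) / std > z_max:
            flagged.append(t)
    return flagged
```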


Closing inference: patterns with purpose

For the last seven years of my career, AI has been more than a capability. It’s been a through-line, a technical and creative lever I’ve pulled to help deliver smarter systems, sharper tools and more responsive experiences.

I don’t approach AI as a magic wand.

I treat it like a product in its own right: trainable, tunable and most powerful when guided by clear intent, thoughtful design and ethical grounding.

 