Usage Intelligence Platform

HubSpot

TL;DR:
In 5 months, we built a scalable usage intelligence layer that improved data reliability, reduced critical usage gaps by 40%, and enabled teams to proactively monitor customer behavior without engineering support.

HubSpot Usage Intelligence Platform dashboard overview
Team
  • 1 Product Designer
  • 4 Developers
  • 1 Product Manager
  • 1 Data Analyst
Discipline
  • UXR, Design Strategy
  • System Design, UX, UI
Duration
  • 5 months

Project Overview

As retention and sustainable usage became core business priorities, customers needed a reliable and scalable way to interpret product usage beyond surface-level metrics. This initiative established a durable usage intelligence layer that enabled teams to define consistent usage events, observe meaningful behavioral patterns, and act on them with confidence.

The work focused on building a robust foundation rather than shallow reporting—improving data quality, enabling usage insights across roles, product areas, and customer teams, and explicitly supporting adoption, informed decision-making, and long-term customer value.

Impact

Reduced critical usage gaps from 8.5% → 5.1%, enabling faster detection, ownership clarity, and proactive intervention at scale.

The Problem

How might we enable users to efficiently create reliable, actionable usage metrics that support their business and drive product usage and retention?

Decision: Explicitly reject building another analytics surface in favor of establishing usage ownership and governance.
Why: Existing tools optimized for exploration, not accountability or action.
Tradeoff: Less flexibility for ad-hoc analysis in favor of consistency and trust.

Goals

  • Establish a reliable foundation for usage instrumentation:
    • Build a dedicated platform that allows teams to securely create, store, and manage usage events within the product ecosystem, reducing dependence on external systems for operational usage signals.
  • Improve data reliability and reduce critical usage risks:
    • Decrease event-related critical situations from the 8.5% baseline by introducing more robust and resilient data infrastructure that strengthens event integrity, consistency, and auditability.
  • Shorten feedback loops for product and customer teams:
    • Enable faster identification of, and response to, usage issues by making core usage signals immediately accessible, supporting day-to-day decision-making without replacing existing analytics tools.
  • Design for scale and long-term ownership across teams:
    • Ensure the platform can support growing product complexity, multiple teams, and evolving use cases—positioning usage intelligence as a durable capability rather than a point solution.

Why was this needed?

While customers could already create usage events, meaningful interpretation and action depended on external analytics platforms such as Amplitude and Looker. This introduced fragmentation, slower turnaround times, and additional operational overhead—particularly for teams needing to respond quickly to usage risks.

For HubSpot, whose focus is SMB customers (many of whom have no dedicated analytics tooling at all), this dependency limited the ability to deliver a seamless, end-to-end experience around usage intelligence. The goal was not to replace dedicated analytics tools, but to establish a framework that supports faster instrumentation, immediate visibility, and quicker action while continuing to coexist with deeper analytical workflows where present.

By bringing core usage processing closer to the product experience, teams could function without external tooling for day-to-day decisions, shorten feedback loops, and address critical situations before escalation—without disrupting the existing analytics ecosystem.

Decision: Build a complementary usage intelligence layer instead of replacing existing analytics tools.
Why: Replacement would disrupt established workflows and slow adoption across teams.
Tradeoff: Limited exploratory depth compared to full analytics platforms.

Existing user action flow before the Usage Intelligence Platform

No data reliability, instrumentation protection, or reporting in place.

Challenges I encountered

  • Migration of legacy events:
    • Transitioning existing events to a new database while preserving data integrity, historical continuity, and live customer workflows.
  • Designing a scalable system architecture:
    • Defining technical capabilities that support flexibility, performance, and future growth without over-engineering.
  • Establishing a clear usage event framework:
    • Documenting a shared usage library covering event creation, data capture, and processing to ensure consistency and ease of adoption.

Decision: Prioritized a flexible event schema over exhaustive upfront standardization.
Why: Teams needed to move fast without heavy governance blocking early adoption.
Tradeoff: Increased risk of inconsistency early on.
Mitigation: Introduced validation, auditing, and alerting as safety nets.

Design Strategy

Insight: SMBs lack a single, trusted way to govern, monitor, and understand their events—forcing them to react to data issues only after they’ve already caused downstream risk.

  • Interview

    Interviewed 8 SMB customers to learn about their experience with usage data and ways we could improve it. Among other findings, we identified several critical points:

    • Event ownership and governance gaps:

      Event logic lived in code files with unclear ownership. While this enabled flexibility, it also meant changes could be made by anyone without visibility, accountability, or shared understanding.

    • Limited visibility into important events:

      Users struggled to easily identify and monitor the events that mattered most. There was no clear way to surface key events or understand how they were evolving over time.

    • Lack of trend awareness:

      Users wanted lightweight ways to view trends for critical events without relying on external analytics tools or complex workflows.

    • No transparency around event changes:

      There was no reporting or notification when events were created, modified, or deleted, making it difficult to track changes or diagnose downstream issues.

    • Data protection and risk exposure:

      Deleting a code file resulted in permanent data loss with no immediate signal or safeguard. Issues often resurfaced only after propagating into analytics tools, increasing response time and risk.

    • No detection of event irregularities:

      The system lacked foundational support for anomaly detection. Teams could not generate alerts for unexpected changes in event volume or behavior, limiting proactive intervention.

  • Workshop

    A cross-functional workshop was conducted with key stakeholders to map the current system state, surface technical constraints, and align on feasible solution directions. This helped establish a shared understanding of platform limitations, clarify non-negotiables, and ground early ideation in real architectural constraints—reducing risk and rework downstream.

  • Stakeholder alignment and choice of direction

    Following the workshop, stakeholders aligned on the technical constraints, ownership model, and decision principles. The group agreed to move forward with a usage framework focused on fast feedback loops and operational reliability, while explicitly positioning it as a complementary layer to existing analytics tools. This alignment clarified scope, reduced ambiguity, and enabled teams to proceed with execution confidently.

  • Final designs and metric improvements

    Delivered an end-to-end user journey by introducing an interface that made event creation and customer activity tracking dramatically easier. By doing so, we improved our metrics beyond expectations.

  • Outcome 1

    Critical usage situations dropped from the expected 8.5% baseline to 5.1%, a 40% improvement that exceeded our original target.

Expected 8.5% → Actual 5.1%
Critical-situation reduction
Usage Intelligence Platform dashboard
  • Outcome 2

    An intuitive way to create events without relying on manual coding.

No-code
Easier event creation, faster tracking launch
No-code event creation experience
  • Outcome 3

    Learn about issues and act on them quickly.

Alert signals
Teams notice anomalies earlier, fewer production incidents
Alerting experience for event anomalies
  • Outcome 4

    Identify ownership and shared responsibility by tracking important events.

Auditing
Increased trust in data, faster decision-making
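The auditing outcome above rests on a simple idea: every create, modify, or delete of an event definition leaves an immutable trace, so a deletion can no longer cause silent data loss. A minimal sketch of that idea, with all names hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    """Immutable record of one change to an event definition."""
    event_name: str
    action: str        # "created" | "modified" | "deleted"
    actor: str         # who made the change
    at: datetime

class AuditLog:
    """Append-only log: deletions leave a trace instead of silent data loss."""
    def __init__(self) -> None:
        self._records: list[AuditRecord] = []

    def record(self, event_name: str, action: str, actor: str) -> None:
        self._records.append(
            AuditRecord(event_name, action, actor, datetime.now(timezone.utc))
        )

    def history(self, event_name: str) -> list[AuditRecord]:
        return [r for r in self._records if r.event_name == event_name]

log = AuditLog()
log.record("deal_closed", "created", "alice@example.com")
log.record("deal_closed", "deleted", "bob@example.com")
print([r.action for r in log.history("deal_closed")])
```

An append-only structure like this is what makes ownership and shared responsibility traceable after the fact.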

Next actions I would have taken

  • Configurable event anomaly detection:
    • Enable users to define event-specific volume thresholds and anomaly rules, giving teams control over what constitutes normal versus critical behavior for their context.
  • Seamless data flow across experimentation and analytics:
    • Allow usage data to flow bi-directionally between A/B testing and analytical tools, reducing duplication and enabling faster validation of product and growth decisions.
  • Integrated behavioral signals:
    • Introduce contextual insights such as heat maps, rage clicks, and session indicators directly within the experience, supporting efficient diagnosis without requiring tool switching.
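The configurable anomaly detection described in the first bullet could work along these lines. This is a sketch under assumptions, not a specified design: each event gets a tunable deviation threshold (here a z-score knob) that defines what counts as normal versus critical for its context:

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int,
                 z_threshold: float = 3.0,
                 min_history: int = 7) -> bool:
    """Flag `current` daily event volume as anomalous when it deviates from
    the historical mean by more than `z_threshold` standard deviations.
    `z_threshold` is the per-event knob teams would configure."""
    if len(history) < min_history:
        return False  # not enough data yet to define "normal"
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

# A sudden drop to zero (e.g. a deleted event file) trips the alert:
daily_volumes = [980, 1010, 1005, 995, 1020, 990, 1000]
print(is_anomalous(daily_volumes, 0))    # True
print(is_anomalous(daily_volumes, 998))  # False
```

Exposing `z_threshold` (or an absolute volume floor) per event is what turns a one-size-fits-all alert into the team-specific control the goal describes.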

Conclusion

This project established usage intelligence as a durable platform capability rather than an afterthought or external dependency. By focusing on reliability, visibility, and speed to action, the team moved from reactive investigation to proactive usage monitoring, reducing critical situations and enabling faster, more confident decisions across product and customer teams.

More importantly, the work led to a durable foundation that scales with product complexity and organizational change instead of relying on individual ownership or manual processes. The system establishes accountability, safeguards, and clarity, supporting long-term adoption and consistent outcomes. This shift positioned usage data as an operational asset that teams can trust, act on, and evolve over time.

This work reinforced my belief that reliable systems, clear ownership, and fast feedback loops matter more than sophisticated analytics when teams need to act with confidence at scale.