Querying data

This page provides a high-level overview of how internal queries run when creating insights in PostHog.

If you want to run your own queries inside PostHog, check out SQL insights.

Insights counting unique persons

PostHog determines unique persons in insights like trends and funnels by counting the number of unique person_ids on events that match your filters.

As an example, let's say that we have the following list of events:

| ID | Event | person_id |
|---|---|---|
| 1 | viewed page | user1 |
| 2 | viewed page | user2 |
| 3 | viewed page | user1 |

Note: This isn't exactly how person_ids are stored within the events table, but it helps us to keep things simple.

If we ran a query asking for the number of unique users who viewed a page, we would get a result of 2, as our table contains 2 unique person_ids.
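As a rough illustration, the same question could be asked in a SQL insight. This is a minimal sketch; 'viewed page' is just the example event name from the table above.

```sql
-- A minimal HogQL sketch; 'viewed page' is the example event name
-- from the table above.
SELECT count(DISTINCT person_id) AS unique_users
FROM events
WHERE event = 'viewed page'
```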

What happens when users are merged?

In this example, we have a user Alice who viewed the page on day 1 from her mobile phone.

On day 2, Alice decides to view the homepage from her desktop where she isn't logged in. This results in the pageview event being associated with a newly created Person (user2).

| Day | Event | distinct_id | person_id |
|---|---|---|---|
| 1 | pageview | Alice | user1 |
| 2 | pageview | anon-1 | user2 |

A unique persons query for pageviews would result in 2 unique users.

Let's assume that on day 3, Alice logs in to her account, which sends an identify event that merges user2 into user1.

| Day | Event | distinct_id | person_id |
|---|---|---|---|
| 1 | pageview | Alice | user1 |
| 2 | pageview | anon-1 | user1 |
| 3 | identify | Alice (anon-id = anon-1) | user1 |

A unique persons query for pageviews now results in 1 unique user.

Filtering on person properties

This section covers how PostHog filters out events based on person properties.

Note: If you signed up before June 2024, you can enable the faster query mode in settings, which reads person properties from events instead of joining with the persons table for their current values. This is the default for users who signed up after June 2024.

Since all the properties for a person are stored on each event, the process is straightforward.

Let's walk through a simple example to see how this works in practice. Say we have ingested the following events:

| User ID | Event | Subscription plan (property on each person) |
|---|---|---|
| 1 | clicked login | premium |
| 2 | refreshed table | premium |
| 3 | viewed docs | free |
| 3 | upgraded plan | enterprise |
| 3 | viewed dashboard | enterprise |
| 4 | logged out | free |

Note: This isn't exactly how person properties are stored within the events table, but it will help us to keep things simple. For detailed information, check out our data model.

In this case, let's say we only want to see events from users while they were on the premium or enterprise plans.

To achieve this, we filter based on the Subscription plan person property, which would match the following events.

| User ID | Event | Subscription plan (property on each person) |
|---|---|---|
| 1 | clicked login | premium |
| 2 | refreshed table | premium |
| 3 | upgraded plan | enterprise |
| 3 | viewed dashboard | enterprise |
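In a SQL insight, this filter might look something like the sketch below. It assumes the person property is stored under the key subscription_plan and the default behavior of reading person properties from the event, as described above.

```sql
-- A hedged HogQL sketch; the property key `subscription_plan` is an
-- assumption. With properties stored on events, this filters on the
-- value each person had at the time of the event.
SELECT event, person.properties.subscription_plan AS plan
FROM events
WHERE person.properties.subscription_plan IN ('premium', 'enterprise')
```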

You may have noticed that user 3 actually upgraded from the free plan to the enterprise plan over this period. Despite this, the event they sent when they viewed the docs still reflects that they were on the free plan at the time, so it is filtered out.

In most cases, this is exactly what we want: we can update a person's properties without worrying about altering past data points. However, if you do want to filter on a person's current properties, you can use person.current_properties.<property> in HogQL or create a cohort.
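Using the HogQL syntax mentioned above, a filter on current properties might look like this sketch (the property key subscription_plan is again an assumption):

```sql
-- A sketch of filtering on current rather than at-event-time values,
-- using the `person.current_properties` syntax mentioned above.
SELECT event
FROM events
WHERE person.current_properties.subscription_plan IN ('premium', 'enterprise')
```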

Filtering with cohorts

To show this, let's say we want to get all events for users who are currently on enterprise or premium plans.

To do this, we'll create a cohort called 'Paid users' that matches all persons whose 'Subscription plan' property is set to either premium or enterprise.

On the insight, we can then filter by the cohort, which would match the following events.

| User ID | Event | Subscription plan (property on each person) |
|---|---|---|
| 1 | clicked login | premium |
| 2 | refreshed table | premium |
| 3 | viewed docs | free |
| 3 | upgraded plan | enterprise |
| 3 | viewed dashboard | enterprise |
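For SQL insights, HogQL also supports filtering by cohort membership. A rough sketch, assuming the 'Paid users' cohort created above exists:

```sql
-- A sketch using HogQL's IN COHORT syntax; assumes the 'Paid users'
-- cohort created above. Cohort membership is evaluated against a
-- person's current properties, so all of user 3's events match.
SELECT event
FROM events
WHERE person_id IN COHORT 'Paid users'
```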

Filtering on group properties

Group properties work the same way as person properties, and are stored on each event.
