Time Range on PostgreSQL and Django


Posted on 2025-08-03, by Racum. Tags: PostgreSQL Django

When developing a system that needs to handle time, it is very common to track not only “when” something happens, but also “for how long”: event calendars, taxi rides, and so on. The most common patterns for this are start+end and start+duration, but both require two fields in your schema.

Fortunately, PostgreSQL can solve this with a range field: a special column type that encodes the start and end of a series of values (including time), enabling a set of dedicated functions and operators. Django, aiming to be full-featured in relation to PostgreSQL, supports those fields natively.

This is an introduction to using those fields via Django’s ORM or directly in SQL.

Data Definition

Let’s assume a simplified Django model like this:

from django.db import models

class TestStartEnd(models.Model):
    start = models.DateTimeField()
    end = models.DateTimeField()

This is a very familiar pattern; you have probably seen something similar in a system already. Now, consider an equivalent model that uses a single range field to map both start and end times:

from django.db import models
from django.contrib.postgres.fields import DateTimeRangeField

class TestRange(models.Model):
    period = DateTimeRangeField()

As you can see, it comes from django.contrib.postgres, which is built into the framework; nothing extra is required if you already have Django and the PostgreSQL driver installed.

After a migration, the two models above end up as tables like this:

postgres=# \d data_teststartend
                              Table "public.data_teststartend"
 Column |           Type           | Collation | Nullable |             Default
--------+--------------------------+-----------+----------+----------------------------------
 id     | bigint                   |           | not null | generated by default as identity
 start  | timestamp with time zone |           | not null |
 end    | timestamp with time zone |           | not null |
Indexes:
    "data_teststartend_pkey" PRIMARY KEY, btree (id)

postgres=# \d data_testrange
                        Table "public.data_testrange"
 Column |   Type    | Collation | Nullable |             Default
--------+-----------+-----------+----------+----------------------------------
 id     | bigint    |           | not null | generated by default as identity
 period | tstzrange |           | not null |
Indexes:
    "data_testrange_pkey" PRIMARY KEY, btree (id)

Notice that, in both cases, we are using the timezone-aware version of the fields, which is both Django’s default behavior and good practice when storing time in a database. You can disable it, but only if you have a really good reason to.
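
In configuration terms, this behavior hinges on Django’s USE_TZ setting, which recent project templates enable by default; a minimal sketch of the relevant line:

# settings.py
USE_TZ = True  # keep datetimes timezone-aware; only disable with a very good reason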

Data Manipulation

You know… “CRUD” operations, but without the “D” (delete), since deleting works the same as with any other field type.

Insert

To insert a time range, you need to instantiate a DateTimeTZRange object:

from datetime import datetime, timezone
from django.db.backends.postgresql.psycopg_any import DateTimeTZRange

test_range = TestRange.objects.create(period=DateTimeTZRange(
    datetime(2025, 7, 22, tzinfo=timezone.utc),
    datetime(2025, 7, 23, tzinfo=timezone.utc),
))

A similar insert, done directly in SQL (here creating a second row with different values), looks like this:

insert into data_testrange values (2, '[2025-08-01 12:00, 2025-08-02 12:00)');

This '[2025-08-01 12:00, 2025-08-02 12:00)' is a special literal value: notice that inside the string, it starts with a square bracket (meaning “inclusive”), and it ends with a parenthesis (meaning “exclusive”).
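
The same notation is exposed on the Python side: the DateTimeTZRange constructor accepts an optional bounds argument, which defaults to "[)". A minimal sketch, assuming the psycopg Range API that Django wraps:

# "[)" (the default) means inclusive start, exclusive end:
period = DateTimeTZRange(
    datetime(2025, 8, 1, 12, tzinfo=timezone.utc),
    datetime(2025, 8, 2, 12, tzinfo=timezone.utc),
    bounds="[)",
)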

This is the same literal used by psql to render the values of this type:

select * from data_testrange;
 id |                       period
----+-----------------------------------------------------
  1 | ["2025-07-22 00:00:00+00","2025-07-23 00:00:00+00")
  2 | ["2025-08-01 12:00:00+00","2025-08-02 12:00:00+00")
(2 rows)

Select

In this example, the field period is a Range object with many range-related properties; for most cases, the most useful ones are lower and upper.

>>> test_range.period.lower
datetime.datetime(2025, 7, 22, 0, 0, tzinfo=datetime.timezone.utc)

>>> test_range.period.upper
datetime.datetime(2025, 7, 23, 0, 0, tzinfo=datetime.timezone.utc)

Those map one-to-one with PostgreSQL functions; this is how you work with those values directly in SQL as if they were separate columns:

select id, lower(period), upper(period) from data_testrange;
 id |         lower          |         upper
----+------------------------+------------------------
  1 | 2025-07-22 00:00:00+00 | 2025-07-23 00:00:00+00
  2 | 2025-08-01 12:00:00+00 | 2025-08-02 12:00:00+00
(2 rows)
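
If you need those boundaries computed on the database side while staying in the ORM, one option (a sketch, not the only way) is to expose the same functions as annotations with a generic Func expression; the annotation names here are arbitrary:

from django.db.models import DateTimeField, F, Func

TestRange.objects.annotate(
    period_start=Func(F("period"), function="lower", output_field=DateTimeField()),
    period_end=Func(F("period"), function="upper", output_field=DateTimeField()),
).values("id", "period_start", "period_end")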

Update

Unfortunately, Range objects are immutable; you need to instantiate a new DateTimeTZRange if you want to update just one of the boundaries, copying the value of the one you want to keep.

test_range.period = DateTimeTZRange(
    test_range.period.lower,
    datetime(2025, 7, 25, tzinfo=timezone.utc),
)
test_range.save()

Same goes for direct SQL updates:

update data_testrange set period = tstzrange(lower(period), '2025-07-25') where id = 1;

Query Operations

Starts / ends with

When crafting your queryset, you can use the startswith and endswith filters to match only the start or end of the range:

TestRange.objects.filter(period__startswith=datetime(2025, 8, 1, 12, tzinfo=timezone.utc))
TestRange.objects.filter(period__endswith=datetime(2025, 8, 2, 12, tzinfo=timezone.utc))

The SQL version uses the same lower() and upper() functions, but in the where clause:

select * from data_testrange where lower(period) = '2025-08-01 12:00';
select * from data_testrange where upper(period) = '2025-08-02 12:00';

Remember: Django querysets can chain filters with multiple "dunders" (__) as long as the operation is supported by the database, so a filter like period__startswith__gte works as expected.
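
For instance, this sketch matches every range that starts at noon on 2025-08-01 or later (imports as in the earlier examples):

TestRange.objects.filter(
    period__startswith__gte=datetime(2025, 8, 1, 12, tzinfo=timezone.utc)
)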

Contains

This is where the developer ergonomics of range fields start to shine: you no longer need to chain comparison operators (like when >= start and when < end), which is error-prone; instead, there is a dedicated contains lookup that works both with a single point in time and with another time range.

This is how contains works with a single element:

TestRange.objects.filter(period__contains=datetime(2025, 8, 2, tzinfo=timezone.utc))

In SQL, that translates as the @> operator:

select * from data_testrange where period @> '2025-08-02'::timestamp at time zone 'utc';

The explicit timezone cast here is needed. I’m using UTC for convenience, but you can list all the timezones supported by your version of PostgreSQL by querying the built-in pg_timezone_names view.

And this is how contains works with another time range:

TestRange.objects.filter(period__contains=DateTimeTZRange(
    datetime(2025, 8, 1, 20, 0, tzinfo=timezone.utc),
    datetime(2025, 8, 1, 20, 5, tzinfo=timezone.utc),
))

In SQL: same @> operator, but against another range (I'm using a literal here):

select * from data_testrange where period @> '[2025-08-01 20:00, 2025-08-01 20:05)';

Contained-by

This is the reverse of contains: for example, when you have a bigger period of time and want to match all the time ranges fully contained inside it.

The lookup for this is contained_by, and it only works with another Range object:

TestRange.objects.filter(period__contained_by=DateTimeTZRange(
    datetime(2025, 8, 1, tzinfo=timezone.utc),
    datetime(2025, 8, 3, tzinfo=timezone.utc),
))

In SQL, the operator now points in the other direction: <@:

select * from data_testrange where period <@ '[2025-08-01, 2025-08-03)';

Overlaps

Very similar to contained_by, but this matches all the time ranges “touching” the given period, without needing to be fully contained in it.

The lookup for this is overlap, and it also requires another Range object:

TestRange.objects.filter(period__overlap=DateTimeTZRange(
    datetime(2025, 8, 2, tzinfo=timezone.utc),
    datetime(2025, 8, 5, tzinfo=timezone.utc),
))

In SQL, this operator is &&:

select * from data_testrange where period && '[2025-08-02, 2025-08-05)';

More

PostgreSQL supports many other query operators, useful for checking emptiness, adjacency, strict “less/greater than” ordering, and so on; most of them have matching queryset lookups in Django, as sketched below.
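
As a rough sketch (lookup names as documented for django.contrib.postgres range fields), a few of them look like this:

# Non-empty ranges:
TestRange.objects.filter(period__isempty=False)

# Ranges immediately adjacent to the given one (touching, but not overlapping):
TestRange.objects.filter(period__adjacent_to=DateTimeTZRange(
    datetime(2025, 8, 2, 12, tzinfo=timezone.utc),
    datetime(2025, 8, 3, 12, tzinfo=timezone.utc),
))

# Ranges strictly before the given one (they end before it starts):
TestRange.objects.filter(period__fully_lt=DateTimeTZRange(
    datetime(2026, 1, 1, tzinfo=timezone.utc),
    datetime(2026, 2, 1, tzinfo=timezone.utc),
))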

Performance

Range fields are not only ergonomic; depending on your case, they can also be more performant.

⚠️ Warning: this is an isolated example, results may vary with your database! Always measure before optimizing!

So, given the two models described at the beginning of the article (one with the old-style start/end time fields, the other with a single time range field), let’s create the same one million entries in each:

from datetime import datetime, timedelta, timezone
from django.db.backends.postgresql.psycopg_any import DateTimeTZRange

# TestStartEnd and TestRange are the models defined earlier.
# Creates 1M entries of 1 minute, every 5 minutes:
test_start = datetime(2020, 1, 1, tzinfo=timezone.utc)
for i in range(1_000_000):
    start = test_start + timedelta(minutes=5 * i)
    end = start + timedelta(minutes=1)
    TestStartEnd.objects.create(start=start, end=end)  # Start/end time fields
    TestRange.objects.create(period=DateTimeTZRange(start, end))  # Time range field
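
As an aside, inserting a million rows with individual create() calls is slow; in practice you would probably batch them. A sketch for the range model (batch size arbitrary; the start/end model is analogous):

TestRange.objects.bulk_create(
    [
        TestRange(period=DateTimeTZRange(
            test_start + timedelta(minutes=5 * i),
            test_start + timedelta(minutes=5 * i + 1),
        ))
        for i in range(1_000_000)
    ],
    batch_size=10_000,
)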

For the start/end model, let’s use a BRIN index, since it is very efficient for values that only grow over time (like these timestamps); after that, run the query with explain analyze to check its query plan and the time taken:

create index idx_teststartend on data_teststartend using brin ("start", "end");

explain analyse -- List entries from April 2025:
select * from data_teststartend where "start" >= '2025-04-01' and "end" < '2025-05-01';

Query Plan:

 Bitmap Heap Scan on data_teststartend  (...)
   Recheck Cond: (("start" >= '2025-04-01') AND ("end" <= '2025-05-01'))
   Rows Removed by Index Recheck: 11456
   Heap Blocks: lossy=128
   ->  Bitmap Index Scan on idx_teststartend  (...)
         Index Cond: (("start" >= '2025-04-01') AND ("end" <= '2025-05-01'))
 Planning Time: 0.153 ms
 Execution Time: 3.105 ms

So, for this scenario, the number to beat is about 3 milliseconds.

Now, let’s do the same for the model with the time range field; the only difference is the type of index: PostgreSQL’s documentation suggests a GiST index:

create index idx_testrange on data_testrange using gist (period);

explain analyse -- List entries from April 2025:
select * from data_testrange where period <@ '[2025-04-01, 2025-05-01)';

Query Plan:

 Bitmap Heap Scan on data_testrange  (...)
   Recheck Cond: (period <@ '["2025-04-01", "2025-05-01")')
   Heap Blocks: exact=65
   ->  Bitmap Index Scan on idx_testrange  (...)
         Index Cond: (period <@ '["2025-04-01", "2025-05-01")')
 Planning Time: 0.333 ms
 Execution Time: 2.196 ms

This went down from about 3 to 2 milliseconds.
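
If you would rather keep that index in your Django migrations than in raw SQL, the framework also ships index classes for this; a minimal sketch using GistIndex (the index name is arbitrary):

from django.contrib.postgres.fields import DateTimeRangeField
from django.contrib.postgres.indexes import GistIndex
from django.db import models

class TestRange(models.Model):
    period = DateTimeRangeField()

    class Meta:
        indexes = [GistIndex(fields=["period"], name="idx_testrange")]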

Going deeper

In this article, I focused on time ranges because I used them recently in a system, but PostgreSQL supports other types of ranges:

  • Integers (both 32 and 64 bits).
  • Decimals (numeric).
  • Timestamps without timezone.
  • Dates (day only, without hours and minutes).

Most of them have corresponding Django fields, and they all support the same operators described here for the time range.
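
For reference, these are the corresponding field classes in django.contrib.postgres.fields (the PostgreSQL type each maps to is noted in the comments):

from django.contrib.postgres.fields import (
    BigIntegerRangeField,  # int8range
    DateRangeField,        # daterange
    DateTimeRangeField,    # tstzrange (used throughout this article)
    DecimalRangeField,     # numrange
    IntegerRangeField,     # int4range
)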

I suggest you consider ranges of any type for your systems. They can make your data model more concise and resilient, your code more idiomatic, and your queries perform better overall if you pick the right indexes.